CN108173897B - Request message processing method and device - Google Patents

Request message processing method and device

Info

Publication number
CN108173897B
CN108173897B · Application CN201611118173.9A
Authority
CN
China
Prior art keywords
server
messages
request
request messages
chat room
Prior art date
Legal status: Active (the status listed is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201611118173.9A
Other languages
Chinese (zh)
Other versions
CN108173897A (en)
Inventor
李淼 (Li Miao)
石鹏 (Shi Peng)
Current Assignee: Beijing Yunzhong Rongxin Network Technology Co ltd (listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Beijing Yunzhong Rongxin Network Technology Co ltd
Application filed by Beijing Yunzhong Rongxin Network Technology Co ltd filed Critical Beijing Yunzhong Rongxin Network Technology Co ltd
Priority to CN201611118173.9A
Publication of CN108173897A
Application granted
Publication of CN108173897B

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 — Protocols
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 — Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 — Server selection for load balancing
    • H04L 67/1014 — Server selection for load balancing based on the content of a request
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 — Data switching networks
    • H04L 12/02 — Details
    • H04L 12/16 — Arrangements for providing special services to substations
    • H04L 12/18 — Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 — Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms

Abstract

The invention discloses a request message processing method and device, belonging to the field of the internet. The method is applied to a load balancing server of a request message processing system and comprises the following steps: receiving n first request messages sent by a plurality of clients in a first processing period; distributing the n first request messages to at least one user server; receiving m second request messages sent by the at least one user server in a second processing period, where the m second request messages are the messages that the at least one user server has allowed to pass based on its passable-message-count threshold for a third processing period; and distributing those of the m second request messages that carry the same chat room identity (ID) to the same chat room server, according to the chat room ID carried in each second request message. The invention reduces the impact that an excessive number of request messages sent to a chat room server would otherwise have on that server's performance.

Description

Request message processing method and device
Technical Field
The present invention relates to the field of internet, and in particular, to a method and an apparatus for processing a request message.
Background
With the continuous development of internet technology, live-streaming applications have become increasingly common, and users can send request messages to interact with the broadcaster during a live stream. When many users participate in a live stream in one chat room, a very large number of request messages may be sent at once; since the number of messages a client can display is limited, the server needs to discard some of them.
In the related art, after receiving request messages sent by a plurality of clients, a load balancing server usually sends the request messages of the same chat room to the same chat room server. The chat room server counts the received request messages in real time, and when the total exceeds its passable-message-count threshold for a processing period, it discards the messages in excess of the threshold, keeping the number of messages displayed by clients within a range acceptable to users.
In the process of implementing the invention, the inventors found at least the following problem in the prior art:
when the number of request messages sent to a chat room server is very large, the server's performance may degrade, and the server may even crash.
Disclosure of Invention
To solve the prior-art problem that a very large number of request messages sent to a chat room server degrades, or even crashes, the server, embodiments of the present invention provide a request message processing method and apparatus. The technical scheme is as follows:
in one aspect, a request message processing method is provided, which is applied to a load balancing server of a request message processing system, where the request message processing system includes: the system comprises a client, the load balancing server, at least one user server and at least one chat room server; the method comprises the following steps:
receiving n first request messages sent by a plurality of clients in a first processing period, wherein n is an integer greater than or equal to 2;
distributing the n first request messages to the at least one user server;
receiving m second request messages sent by the at least one user server in a second processing period, where the m second request messages are the messages that the at least one user server has allowed to pass based on its passable-message-count threshold for a third processing period;
and distributing those of the m second request messages that carry the same chat room ID to the same chat room server, according to the chat room identity (ID) carried in each second request message.
Optionally, the distributing the n first request messages to the at least one user server includes:
taking the user ID carried in each first request message as a key of a target load balancing algorithm to obtain n values;
and distributing the n first request messages to corresponding user servers in the at least one user server according to the n values.
Optionally, the distributing, according to the chat room identity (ID) carried in each second request message, those of the m second request messages that carry the same chat room ID to the same chat room server includes:
taking the chat room ID carried in each second request message as a key of a target load balancing algorithm to obtain m values;
and distributing the m second request messages carrying the same chat room ID to the same chat room server according to the m values.
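The routing rule above requires only that equal chat room IDs always map to the same chat room server. A minimal sketch of such a keyed mapping follows; the server names and the MD5-based hash are illustrative assumptions, not the patent's exact algorithm:

```python
import hashlib

# Hypothetical chat room servers; the names are illustrative.
CHAT_SERVERS = ["chat-srv-1", "chat-srv-2", "chat-srv-3"]

def pick_chat_server(chat_room_id: str, servers=CHAT_SERVERS) -> str:
    """Deterministically map a chat room ID to one chat room server.

    Any stable hash works here; what matters is that equal keys always
    yield the same server, so one room's messages stay together.
    """
    digest = hashlib.md5(chat_room_id.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Messages carrying the same chat room ID land on the same server.
assert pick_chat_server("room-42") == pick_chat_server("room-42")
```

Because the mapping depends only on the key, any load balancing server instance computes the same answer without shared state.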
Optionally, the sum of the passable-message-count thresholds of all the user servers in the request message processing system is smaller than the message-capacity threshold of any one of the chat room servers.
In a second aspect, a request message processing method is provided, which is applied to a first user server of a request message processing system, where the request message processing system includes: the system comprises a client, a load balancing server, at least one user server and at least one chat room server, wherein the first user server is any one of the at least one user server; the method comprises the following steps:
receiving, in a third processing period, p first request messages sent by the load balancing server, where the p first request messages are selected from the n first request messages that the load balancing server received from a plurality of clients in the first processing period, p is smaller than or equal to n, and n is an integer greater than or equal to 2;
when the total number p of the first request messages is greater than the passable-message-count threshold for the third processing period, discarding q first request messages according to a preset message-discarding policy, where q is the difference between p and the threshold, and taking the remaining request messages as second request messages;
when the total number p of the first request messages is not greater than the threshold, taking the p first request messages as second request messages;
and sending the second request message to the load balancing server.
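The steps above — count the period's messages, drop the excess q, forward the rest — amount to a per-period truncation. A minimal sketch, assuming the unspecified discard policy simply drops the overflow from the tail:

```python
def prescreen(first_request_messages: list, passable_threshold: int) -> list:
    """Pre-screen one processing period's messages on a user server.

    If the count p exceeds the passable-message-count threshold, drop
    q = p - threshold messages (here simply the tail of the list, a
    stand-in for the patent's preset discard policy) and return the
    rest as the "second request messages"; otherwise pass everything.
    """
    p = len(first_request_messages)
    if p > passable_threshold:
        return first_request_messages[:passable_threshold]
    return first_request_messages

# 200 messages arrive, threshold 50: q = 150 dropped, 50 forwarded.
second = prescreen(list(range(200)), 50)
assert len(second) == 50
```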
Optionally, the method further comprises:
and when the performance parameter of any chat room server is lower than a preset parameter threshold, adjusting the passable-message-count threshold for the third processing period.
Optionally, the method further comprises:
monitoring a performance parameter of the first user server;
and sending alarm information for prompting capacity expansion when the performance parameter of the first user server is lower than a preset parameter threshold value.
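A monitoring-and-alarm hook of this kind might look as follows. The measured parameter (an idle-CPU percentage), the threshold, and the notification channel are all hypothetical stand-ins for whatever a deployment actually monitors:

```python
def check_and_alarm(idle_cpu_pct: float, threshold: float = 20.0, notify=print) -> bool:
    """Send a capacity-expansion alarm when a performance parameter
    (here, a hypothetical idle-CPU percentage) falls below a preset
    parameter threshold. Returns True if an alarm was sent.
    """
    if idle_cpu_pct < threshold:
        notify(f"user server idle CPU {idle_cpu_pct}% < {threshold}%: consider expanding capacity")
        return True
    return False
```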
In a third aspect, a request message processing apparatus is provided, which is applied to a load balancing server of a request message processing system, where the request message processing system includes: the system comprises a client, the load balancing server, at least one user server and at least one chat room server; the device comprises:
a first receiving module, configured to receive n first request messages sent by multiple clients in a first processing cycle, where n is an integer greater than or equal to 2;
a first distribution module, configured to distribute the n first request messages to the at least one user server;
a second receiving module, configured to receive m second request messages sent by the at least one user server in a second processing period, where the m second request messages are the messages that the at least one user server has allowed to pass based on its passable-message-count threshold for a third processing period;
and a second distribution module, configured to distribute those of the m second request messages that carry the same chat room ID to the same chat room server, according to the chat room identity (ID) carried in each second request message.
Optionally, the first distribution module is specifically configured to:
taking the user ID carried in each first request message as a key of a target load balancing algorithm to obtain n values;
and distributing the n first request messages to corresponding user servers in the at least one user server according to the n values.
In a fourth aspect, a request message processing apparatus is provided, which is applied to a first user server of a request message processing system, and the request message processing system includes: the system comprises a client, a load balancing server, at least one user server and at least one chat room server, wherein the first user server is any one of the at least one user server; the device comprises:
a receiving module, configured to receive, in a third processing period, p first request messages sent by the load balancing server, where the p first request messages are selected from the n first request messages that the load balancing server received from a plurality of clients in the first processing period, p is smaller than or equal to n, and n is an integer greater than or equal to 2;
a processing module, configured to discard q first request messages according to a preset message-discarding policy when the total number p of the first request messages is greater than the passable-message-count threshold for the third processing period, where q is the difference between p and the threshold, and to take the remaining request messages as second request messages;
the processing module is further configured to take the p first request messages as second request messages when the total number p of the first request messages is not greater than the threshold;
and the sending module is used for sending the second request message to the load balancing server.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the method and the device for processing the request message provided by the embodiment of the invention distribute the first request message to the at least one user server through the load balancing server, pre-screen the distributed first request message through the threshold value of the number of messages of the at least one user server, reduce the number of the second request message finally distributed to the at least one chat room server to a certain extent, relatively improve the performance of the at least one chat room server, and reduce the influence of the excessive number of the request message sent to the at least one chat room server on the performance of the at least one chat room server.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a request message processing system according to a request message processing method provided in some embodiments of the present invention;
fig. 2 is a flowchart of a request message processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of another request message processing method according to an embodiment of the present invention;
fig. 4A is a flowchart of another request message processing method according to an embodiment of the present invention;
fig. 4B is a schematic diagram of a request message processing procedure according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a request message processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of a request message processing system according to a request message processing method provided in some embodiments of the present invention is shown. The request message processing system can be applied to the field of live broadcast, and can comprise: client 110, load balancing server 120, at least one user server 130, and at least one chat room server 140.
The client 110 is installed on a terminal capable of sending messages, such as a smart phone, a computer, a multimedia player, an e-reader, a wearable device, and the like, and may be a live client. The load balancing server 120, the user server 130, and the chat room server 140 may each be a server, a server cluster consisting of several servers, or a cloud computing service center.
Connections may be established through wired or wireless networks between the client 110 and the load balancing server 120, between the load balancing server 120 and the at least one user server 130, and between the load balancing server 120 and the at least one chat room server 140. The client 110 may send messages. The load balancing server 120 may distribute received messages across a set of servers so that, taken as a whole, the load on those servers is balanced; in the request message processing system, it may receive messages sent by the client 110 and distribute them to the at least one user server 130, and may also receive messages sent by the at least one user server 130 and distribute them to the at least one chat room server 140. Each user server 130 can pre-screen the messages distributed to it by the load balancing server 120, and the pre-screened messages are then redistributed by the load balancing server 120 to the at least one chat room server 140.
Fig. 2 is a flowchart of a request message processing method according to an embodiment of the present invention, where the method may be applied to a load balancing server in the request message processing system shown in fig. 1, and the request message processing system includes: the system comprises a client, a load balancing server, at least one user server and at least one chat room server. As shown in fig. 2, the method may include:
step 201, the load balancing server receives n first request messages sent by a plurality of clients in a first processing period, where n is an integer greater than or equal to 2.
Step 202, the load balancing server distributes the n first request messages to at least one user server.
Step 203, the load balancing server receives m second request messages sent by the at least one user server in the second processing period, where the m second request messages are the messages that the at least one user server has allowed to pass based on its passable-message-count threshold for the third processing period.
Step 204, the load balancing server distributes those of the m second request messages that carry the same chat room ID to the same chat room server, according to the chat room ID carried in each second request message.
In summary, in the request message processing method provided in the embodiment of the present invention, the load balancing server distributes the first request messages to at least one user server, and the distributed first request messages are pre-screened against each user server's passable-message-count threshold. This reduces, to a certain extent, the number of second request messages finally distributed to the at least one chat room server; when the number of request messages is very large, it lessens the impact on the chat room servers' performance and prevents them from crashing.
Fig. 3 is a flowchart of another request message processing method according to an embodiment of the present invention, where the method is applied to a first user server in the request message processing system shown in fig. 1, and the request message processing system includes: the system comprises a client, a load balancing server, at least one user server and at least one chat room server, wherein the first user server is any one of the at least one user server. As shown in fig. 3, the method may include:
step 301, the first user server receives p first request messages sent by the load balancing server in a third processing period.
The p first request messages are selected by the load balancing server from the n first request messages that it received from a plurality of clients in the first processing period, where p is smaller than or equal to n, and n is an integer greater than or equal to 2.
Step 302, when the total number p of the first request messages is greater than the passable-message-count threshold for the third processing period, the first user server discards q first request messages according to a preset message-discarding policy, where q is the difference between p and the threshold, takes the remaining request messages as second request messages, and executes step 304.
Step 303, when the total number p of the first request messages is not greater than the threshold, the first user server takes the p first request messages as second request messages and executes step 304.
Step 304, the first user server sends the second request message to the load balancing server.
In summary, the request message processing method provided in the embodiment of the present invention pre-screens the distributed first request messages against the first user server's passable-message-count threshold, thereby reducing, to a certain extent, the number of second request messages finally distributed to the at least one chat room server; when the number of request messages is very large, this lessens the impact on the chat room servers' performance and prevents them from crashing.
Fig. 4A is a flowchart of a request message processing method according to an embodiment of the present invention. Since in practical applications the at least one user server in the request message processing system generally comprises a plurality of user servers, and the at least one chat room server generally comprises a plurality of chat room servers, the embodiment of Fig. 4A takes as an example a first user server that is any one of the at least one user server 130 and a first chat room server that is any one of the at least one chat room server 140. The other user servers behave like the first user server, and the other chat room servers behave like the first chat room server. As shown in Fig. 4A, the method may include:
step 401, the load balancing server receives n first request messages sent by a plurality of clients in a first processing period, where n is an integer greater than or equal to 2.
Optionally, each of the plurality of clients may send multiple messages, so the n first request messages received by the load balancing server in the first processing period are the union of the messages sent by all of those clients.
For example, assume that in one processing period the load balancing server receives first request messages sent by 3 clients, namely client 1, client 2, and client 3, where client 1 sends 2 first request messages, client 2 sends 3, and client 3 sends 5. The load balancing server then receives 2 + 3 + 5 = 10 first request messages, that is, n = 10.
Optionally, the first request message may be a text message or a picture message, and the like, which is not specifically limited in the embodiment of the present invention.
Step 402, the load balancing server distributes the n first request messages to at least one user server.
Optionally, an implementable method for a load balancing server to distribute n first request messages to at least one user server may include:
step A1: and the load balancing server takes the user ID carried in each first request message as a key of a target load balancing algorithm to obtain n values.
In practical applications, multiple clients may send messages in the same processing period. The user identity (ID) carried in a first request message uniquely identifies the user who sent that message.
In this step, the user ID carried in each first request message is used as the key of the target load balancing algorithm so that messages sent by the same user are routed to the same user server, which then processes that user's messages.
Step B1: and the load balancing server distributes the n first request messages to the corresponding user server in the at least one user server according to the n values.
Optionally, the load balancing server may employ a load balancing algorithm to distribute the n first request messages to the corresponding user servers among the at least one user server. For example, the load balancing algorithm may be a consistent hashing algorithm: the user ID carried in each first request message is used as the key, and the consistent hashing algorithm distributes the n first request messages to the at least one user server. Since a user's ID does not change, the messages sent by the same user are distributed to the same user server as long as the number of user servers does not change.
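A consistent-hash ring of the kind mentioned here can be sketched as follows. The class name, virtual-node count, and server names are illustrative assumptions; the property the text relies on is that the same user ID maps to the same server while the server set is stable:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring (illustrative; not the patent's exact algorithm)."""

    def __init__(self, nodes, vnodes=100):
        # Place each node at `vnodes` points on the ring for smoother balance.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # First ring point at or after the key's hash, wrapping around.
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[idx][1]

# Hypothetical user servers; the example below uses five, as in the text.
ring = HashRing(["user-srv-1", "user-srv-2", "user-srv-3", "user-srv-4", "user-srv-5"])
# The same user ID always routes to the same user server while membership is stable.
assert ring.node_for("user-8231") == ring.node_for("user-8231")
```

Unlike a plain modulo scheme, adding or removing one server on such a ring remaps only the keys near its ring points, which is why consistent hashing is the named choice.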
Illustratively, suppose that in a certain processing period the load balancing server receives 1000 first request messages sent by multiple clients, and that the request message processing system includes 5 user servers. Through the load balancing server's distribution, each user server can be distributed about 200 messages.
Step 403, when the total number p of the first request messages is greater than the threshold value of the number of the passing messages in the third processing period, the first user server discards q first request messages according to a preset message discarding policy, where q is a difference between p and the threshold value of the number of the passing messages, and takes the remaining request messages as second request messages, and then step 405 is executed.
After receiving the p first request messages sent by the load balancing server, the first user server needs to judge whether the total number of first request messages is greater than the passable-message-count threshold for the third processing period, and decides according to the result whether to discard any.
For example, assume the first user server's passable-message-count threshold is 50 per 20 ms and it receives 200 first request messages within 20 ms. Since 200 is greater than 50, p = 200 and q = 200 − 50 = 150, so 150 messages are discarded and the remaining 50 are taken as second request messages.
The passable-message-count threshold for the third processing period is a preset maximum on the total number of messages the first user server may pass in one processing period. Setting this maximum pre-screens the distributed first request messages, which reduces the number of second request messages finally distributed to the at least one chat room server, relatively improves the chat room servers' performance, and reduces the impact that an excessive number of request messages sent to them would otherwise have.
The passable-message-count threshold for the third processing period can be set according to the performance of the first user server and the corresponding chat room server. Optionally, a Java Management Extensions MBean (JMX MBean) client tool may be used to set the parameter: the IP (Internet Protocol) address of the first user server's node, the corresponding JMX port number, and the desired threshold are supplied to the tool, and the tool is then run while the request message processing system is running to complete the setting of that user server's threshold.
In practical applications, the passable-message-count threshold for the third processing period may also be adjusted according to actual usage; the embodiment of the present invention does not specifically limit it.
Optionally, the passable-message-count thresholds for the third processing period of the user servers in the request message processing system may be equal or unequal; for example, each user server's threshold may be set to 200.
There may be various preset message-discarding policies. For example, the first user server's policy may comprise multiple stage policies in one-to-one correspondence with multiple stages, where each stage policy treats messages of different priorities differently. Assume the first user server's passable-message-count threshold for the third processing period is 200 and message priorities are high, medium, and low. The third processing period is then divided into four stages by message count, with the following ranges and stage policies:
[0, 100]: high-, medium-, and low-priority messages all pass;
(100, 150]: high- and medium-priority messages pass, low-priority messages are blocked;
(150, 200]: high-priority messages pass, medium- and low-priority messages are blocked;
(200, +∞): messages of all priorities are blocked.
The specific implementation procedure of the message dropping policy may be:
in a third processing cycle, the following steps are performed:
a2, when receiving a first request message, the first user server obtains k, where k is equal to the number of passed first request messages + 1.
B2, the first user server judges whether k satisfies 0 and k is less than or equal to 100. C2 is executed when k satisfies 0 ≦ k ≦ 100, and D2 is executed when k does not satisfy 0 ≦ k ≦ 100.
C2, the first user server allows the first request message to pass through. A2 is executed.
D2, judging whether k is more than 100 and less than or equal to 150. E2 is executed when k satisfies 100 < k.ltoreq.150, and F2 is executed when k does not satisfy 100 < k.ltoreq.150.
E2, when the first request message has high priority or medium priority, the first user server allows the first request message to pass through, and when the first request message has low priority, the first user server discards the first request message. A2 is executed.
F2, judging whether k is more than 150 and less than or equal to 200. G2 is executed when k satisfies 150 < k.ltoreq.200, and H2 is executed when k does not satisfy 150 < k.ltoreq.200, i.e., k > 200.
G2, when the first request message has high priority, the first user server allows the first request message to pass through, and when the first request message has medium priority or low priority, the first user server discards the first request message. A2 is executed.
H2, the first user server discards the first request message. A2 is executed.
Steps A2 to H2 are repeated until the third processing cycle is completed.
It should be noted that the above examples of message discarding are only illustrative and are not intended to limit the embodiments of the present invention. In practical applications, the message dropping policy may be more complex, and the embodiment of the present invention does not limit the policy.
And step 404, when the total number p of the first request messages is not larger than the threshold value of the number of the passing messages, the first user server takes the p first request messages as second request messages, and step 405 is executed.
When the total number of the first request messages is not greater than the threshold value of the number of the passing messages, which indicates that all the first request messages are the passing messages, the first request messages can be used as the second request messages without discarding any messages.
For example, assuming that the total number of the first request messages is 40 and the threshold of the number of passable messages is 50, since the total number of first request messages (40) is less than the threshold of the number of passable messages (50), all 40 messages can be used as the second request messages without discarding any message.
Step 405, the first user server sends the second request message to the load balancing server.
Illustratively, under the same assumption as in the above embodiment, that is, assuming that the request message processing system includes 5 user servers and the threshold of the number of passable messages in the third processing period of each first user server is 50, the total number of the second request messages that need to be sent to the at least one load balancing server is 5 × 50 = 250.
And step 406, the load balancing server distributes the second request messages carrying the same chat room ID in the m second request messages to the same chat room server according to the chat room ID carried in the second request message.
The m second request messages are messages that the at least one user server allows to pass based on the threshold of the number of passable messages in the third processing period.
After receiving the second request message, the load balancing server may distribute, according to the chat room ID carried in the second request message, the second request message that carries the same chat room ID among the m second request messages to the same chat room server, which may be implemented as follows:
step A3: and the load balancing server takes the chat room ID carried in each second request message as a key of a target load balancing algorithm to obtain m values.
In this step, the chat room ID carried in each second request message is used as the key of the target load balancing algorithm, so that messages sent to the same chat room can be distributed to the same chat room server, which makes it convenient to process the messages of one chat room together, for example, to process related data on a per-chat-room basis, such as counting the number of uplink messages per second in order to execute the message discarding policy.
Step B3: and according to the m values, the load balancing server distributes the m second request messages carrying the same chat room ID to the same chat room server.
Similarly to step B1, the load balancing server may employ a load balancing algorithm to distribute the m second request messages to corresponding chat room servers in the at least one chat room server, which is not described herein again.
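Keying the load balancing algorithm on the chat room ID can be sketched with a simple hash-modulo scheme (an assumed algorithm for illustration only; the patent does not mandate a specific load balancing algorithm, and the function name is hypothetical):

```python
import hashlib

def pick_chat_room_server(chat_room_id, server_count):
    """Map a chat room ID to one of `server_count` chat room servers.

    Because the hash value depends only on the chat room ID, all second
    request messages carrying the same chat room ID are always distributed
    to the same chat room server.
    """
    digest = hashlib.md5(str(chat_room_id).encode('utf-8')).hexdigest()
    return int(digest, 16) % server_count
```

The same scheme applies to step B1, with the user ID as the key instead of the chat room ID.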
Illustratively, assuming that the request message processing system includes 3 chat room servers, and the total number of the second request messages is 250 messages in a certain processing cycle, where the total number of messages sent to chat room server 1 is 120 messages, the total number of messages sent to chat room server 2 is 80 messages, and the total number of messages sent to chat room server 3 is 50 messages, the load balancing server may distribute the corresponding 120 messages to chat room server 1, the corresponding 80 messages to chat room server 2, and the corresponding 50 messages to chat room server 3.
Step 407, when the total number r of the second request messages is greater than the threshold value of the number of the passing messages in the fourth processing period, the first chat room server discards s second request messages, where s is a difference value between r and the threshold value of the number of the passing messages, and takes the remaining request messages as third request messages, and executes step 409.
After the load balancing server distributes the second request messages carrying the same chat room ID among the m second request messages to the same chat room server, the corresponding chat room server (namely, a certain first chat room server) counts the total number r of the received second request messages in real time, judges whether the total number r of the second request messages is greater than the threshold of the number of passable messages in the fourth processing period, and determines whether to discard second request messages according to the judgment result. When the total number r exceeds the threshold of the number of passable messages in the fourth processing period, the request messages exceeding the threshold are discarded, namely s second request messages are discarded, where s is the difference between r and the threshold of the number of passable messages, and the remaining request messages are taken as third request messages. The implementation process of this step may refer to step 403 accordingly, and is not described here again.
Optionally, the threshold of the number of passable messages of the first chat room server is smaller than the threshold of the number of bearable messages of that chat room server, so that the first request messages sent by the plurality of clients can be further screened at the first chat room server within the range that the chat room server can process.
Illustratively, assume that the total number of the second request messages sent by the load balancing server to a certain first chat room server is 200, the threshold of the number of messages that can be passed by the first chat room server is 100, and the threshold of the number of messages that can be carried by the first chat room server is 300. Since 200 < 300, the total number of the second request messages sent by the load balancing server is within the processable range of the chat room server, and the chat room server can work normally. Since 100 < 200, the chat room server discards 100 of the 200 second request messages sent by the load balancing server and takes the remaining 100 request messages as third request messages, thereby realizing further screening of the first request messages sent by the plurality of clients.
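The second-stage screening at the chat room server can be sketched as a simple overflow cut (a sketch only; a real policy could again be priority-aware as in steps A2 to H2, and the function name is hypothetical):

```python
def screen_second_requests(second_requests, pass_threshold):
    """Keep at most `pass_threshold` messages as third request messages.

    Discards s = r - pass_threshold overflow messages when the total
    number r of second request messages exceeds the threshold.
    """
    r = len(second_requests)
    s = max(0, r - pass_threshold)        # number of messages to discard
    third_requests = second_requests[:pass_threshold]
    return third_requests, s
```

With the numbers from the example above, 200 second request messages against a pass threshold of 100 yield 100 third request messages and s = 100 discards.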
Step 408, when the total number r of the second request messages is not larger than the threshold value of the number of the passing messages, the first chat room server takes the r second request messages as third request messages, and step 409 is executed.
When the total number r of the second request messages is not greater than the threshold of the number of passable messages, it indicates that all the second request messages are passable messages, and the second request messages can be used as the third request messages without discarding any message. The implementation process of this step may refer to step 404, which is not described here again.
Step 409, displaying the third request message through the client.
Finally, the request messages sent by the clients need to be displayed through all the clients participating in the live broadcast interaction of the corresponding chat room. The request messages are filtered mainly to ensure that the number of messages finally displayed by the clients is kept within a range acceptable to the user. Therefore, the third request messages processed by the request message processing system are what the clients finally display. Specifically, according to the chat room ID carried in a third request message, the third request messages belonging to the same chat room are displayed through all clients participating in the live broadcast interaction of that chat room.
It should be noted that the first processing cycle, the second processing cycle, the third processing cycle, and the fourth processing cycle may all refer to a time period having a certain fixed duration. The first processing period is a period for the load balancing server to process a request message sent by the client; the second processing period is a period for the load balancing server to process the request message sent by the user server; the third processing period is a period for the user server to process the request message sent by the load balancing server; the fourth processing period is a period for the chat room server to process the request message sent by the load balancing server.
Optionally, the durations of the processing periods may be all equal or may not be equal. For example, the fixed duration may be 1 second or 20 milliseconds. In practical applications, the fixed duration may be adjusted according to practical situations, and the embodiment of the present invention is not particularly limited thereto. Alternatively, the timing of each processing cycle may be implemented by a timer, when the timer starts to count, the cycle starts, and when the counted time length of the timer is equal to the time length of one cycle, the timer is reset, and the timing of the next cycle starts.
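The timer-driven processing cycle described above can be sketched as a wall-clock window counter (a sketch only; the class name is hypothetical, and the 1-second duration is just the example value from the text):

```python
import time

class CycleCounter:
    """Counts messages per fixed-duration processing cycle.

    When the elapsed time reaches one cycle duration, the timer is
    reset and counting restarts for the next cycle, mirroring the
    timer behaviour described above.
    """

    def __init__(self, cycle_seconds=1.0):
        self.cycle_seconds = cycle_seconds
        self.cycle_start = time.monotonic()
        self.count = 0

    def record(self):
        """Register one received message; return the count in the current cycle."""
        now = time.monotonic()
        if now - self.cycle_start >= self.cycle_seconds:
            self.cycle_start = now       # reset the timer: the next cycle begins
            self.count = 0
        self.count += 1
        return self.count
```

A server would compare the returned count against its threshold of the number of passable messages to decide whether to apply the discarding policy.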
Fig. 4B is a schematic diagram of a request message processing procedure according to an embodiment of the present invention. As shown in fig. 4B, a request message processing system includes: 1 load balancing server, 4 clients, 1 user server and 3 chat room servers. In a processing cycle, the client 1 sends 3 first request messages, namely a message 1, a message 2 and a message 3, to a first chat room (corresponding to the chat room server 1); client 2 sends 4 first request messages, message 4, message 5, message 6 and message 7, to the second chat room (corresponding to chat room server 2); client 3 has sent 5 first request messages, message 8, message 9, message 10, message 11, and message 12, to the third chat room (corresponding to chat room server 3); client 4 sent 8 first request messages, message 13, message 14, message 15, message 16, message 17, message 18, message 19, and message 20, respectively, to the first chat room; according to the sending time, the messages are as follows in sequence: message 1, message 4, message 8, message 13, message 2, message 5, message 9, message 14, message 10, message 15, message 6, message 11, message 16, message 3, message 7, message 12, message 17, message 18, message 19, and message 20. Messages sent by client 1 have a low priority, messages sent by client 2 have a high priority, messages sent by client 3 have a medium priority, messages 13, 14, 15, 16 and 17 among messages sent by client 4 have a low priority, and messages 18, 19 and 20 have a high priority. 
The threshold of the number of passable messages in the processing cycle of the user server is 11, and the preset message discarding policy is as follows: when the total number k of messages passed by the user server satisfies 0 ≤ k ≤ 5, the user server allows all messages to pass; when k satisfies 5 < k ≤ 8, the user server allows messages with high priority and medium priority to pass and discards messages with low priority; when k satisfies 8 < k ≤ 11, the user server allows only messages with high priority to pass and discards messages with low priority and medium priority; when k > 11, all messages are prohibited from passing, and all corresponding messages are discarded. After the load balancing server receives the first request messages sent by each client, it can distribute the first request messages according to the user IDs carried by them; because the system has only one user server, the messages sent by each client are all sent to that user server by the load balancing server. After the pre-screening of the user server, a total of 11 second request messages are sent to the load balancing server, which are respectively: message 1, message 4, message 8, message 13, message 2, message 5, message 9, message 10, message 6, message 7, and message 18. Then, the load balancing server distributes the received second request messages to the chat room servers according to the chat room IDs carried by them: the 2 messages sent by client 1 and the 2 messages sent by client 4 among the second request messages are distributed to chat room server 1; the 4 messages sent by client 2 among the second request messages are distributed to chat room server 2; the 3 messages sent by client 3 among the second request messages are distributed to chat room server 3.
That is, the number of messages distributed by the load balancing server to chat room server 1, chat room server 2 and chat room server 3 is 4, 4 and 3, respectively. Then, each chat room server screens the second request message again according to the corresponding message discarding strategy, and displays the screened third request message of each chat room server on the corresponding client (not shown in the figure). It should be noted that, in practical applications, the messages of one processing cycle may be hundreds or thousands, and the assumed number of messages in fig. 4B is only a schematic illustration for clarity.
As can be seen from the request message processing process shown in fig. 4B, the 20 first request messages sent by the clients are distributed to the at least one user server by the load balancing server, and after the distributed first request messages are pre-screened by the message number threshold in the processing cycle of the user server, the total number of second request messages finally sent to the chat room servers is 11, where the 3 chat room servers are distributed 4, 4 and 3 second request messages, respectively. In the related art, since there is no process in which the user server pre-screens the messages sent by the clients, the total number of messages sent to the chat room servers would be 20, and in an extreme case, in one processing cycle, one chat room server in fig. 4B could receive up to 20 messages. Compared with the related art, the load of the chat room server is therefore relatively reduced, and the influence on the performance of the corresponding chat room server can be reduced. This effect is especially pronounced when the number of request messages is excessive, and even a crash of the chat room server can be avoided.
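The fig. 4B walkthrough can be reproduced with a short simulation (a sketch only; the message order, priorities and phase boundaries 5/8/11 follow the example above, and the names are hypothetical):

```python
# Messages in sending order as (message number, priority), per fig. 4B.
SEND_ORDER = [
    (1, 'low'), (4, 'high'), (8, 'medium'), (13, 'low'), (2, 'low'),
    (5, 'high'), (9, 'medium'), (14, 'low'), (10, 'medium'), (15, 'low'),
    (6, 'high'), (11, 'medium'), (16, 'low'), (3, 'low'), (7, 'high'),
    (12, 'medium'), (17, 'low'), (18, 'high'), (19, 'high'), (20, 'high'),
]

def user_server_prescreen(messages):
    """Apply the 0-5 / 5-8 / 8-11 phase policy; return passed message numbers."""
    passed = []
    for number, priority in messages:
        k = len(passed) + 1              # k = passed count + 1
        if k <= 5:                       # 0 <= k <= 5: all pass
            passed.append(number)
        elif k <= 8 and priority in ('high', 'medium'):
            passed.append(number)        # 5 < k <= 8: low is discarded
        elif k <= 11 and priority == 'high':
            passed.append(number)        # 8 < k <= 11: only high passes
        # k > 11: everything is discarded
    return passed
```

Running the simulation yields exactly the 11 second request messages listed in the walkthrough.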
In practical application, the load balancing server for distributing the message to the chat room server and the load balancing server for distributing the message to the user server may be the same or different, and this is not limited in the embodiment of the present invention. In the embodiment of the present invention, the request message processing method is described by taking the same two as an example. When the load balancing server that distributes messages to the chat room servers is not the same as the load balancing server that distributes messages to the user servers, one way of accomplishing this is: at least two load balancing servers are arranged in the request message processing system, wherein one of the load balancing servers is used for receiving a first request message sent by a client and distributing the first request message to a user server, and the other load balancing server is used for receiving a second request message sent by the user server and distributing the second request message to a chat room server.
In the embodiment of the invention, each server can also monitor its own working condition in real time and send out prompt information when its performance is abnormal, so as to prompt corresponding maintenance processing. Optionally, the performance of a server being abnormal may mean that a performance parameter of the server is lower than a preset parameter threshold, where the preset parameter threshold may be set according to the actual situation and is not specifically limited by the embodiment of the present invention. In the embodiment of the present invention, there are various ways to maintain the servers, and the embodiment of the present invention takes the following two aspects as examples:
in a first aspect, each chat room server may detect, in real time, a performance parameter, such as a performance parameter of a Central Processing Unit (CPU) or a size of a memory, where the performance parameter may reflect performance of the chat room server. When the performance parameter of any chat room server is lower than the preset parameter threshold value, an alarm message is sent out to prompt the maintenance of the chat room server so as to ensure the performance of the chat room server. The maintenance mode can be realized in various ways, and the embodiment of the invention is described by taking the following three examples:
the first implementation mode is to expand the capacity of the server of the chat room.
When the performance parameter of any chat room server is lower than the preset parameter threshold, a method of expanding the capacity of the chat room server can be adopted to ensure that the performance of the chat room server meets the system processing requirement, namely, the number of the chat room servers in the request message processing system is increased.
Illustratively, assume that the request message processing system originally contains one chat room server, and the total number of messages that the chat room server needs to process is 50. After one more chat room server is added to the request message processing system, the load balancing server distributes the 50 messages according to the chat room IDs, and finally one chat room server is distributed 30 messages and the other chat room server is distributed 20 messages. The number of request messages processed by each chat room server is correspondingly reduced compared with that before capacity expansion, thereby reducing the load on each chat room server.
In a second implementation manner, the threshold of the number of the passing messages of the user server is adjusted.
Optionally, the threshold of the number of passable messages of the user server may be adjusted, and the pre-screening of messages may be implemented with the adjusted threshold. In practical application, the threshold of the number of passable messages of at least one user server can be reduced, so that the user server filters more request messages and the screening capability is enhanced; correspondingly, the request messages received by the chat room servers are relatively reduced, thereby reducing the load of each chat room server.
For example, assuming that there are two user servers and one chat room server in the request message processing system, before the threshold of the number of passing messages of each user server is adjusted, the threshold of the number of passing messages of each user server is 50, the total number of passing messages is 100, that is, the total number of messages that can be finally sent to the chat room server is 100; after the threshold of the number of passable messages of each user server is adjusted, and the threshold of the number of passable messages of each user server is 40, the total number of passable messages is 80, that is, the total number of messages which can be finally sent to the chat room server is 80. Therefore, after the threshold value of the number of the messages which can pass through the user server is adjusted, the number of the messages filtered by the user server is increased by 20 messages on the original basis, and correspondingly, the number of the request messages received by the chat room server is reduced by 20 messages on the original basis, so that the load of the chat room server is reduced.
Optionally, one implementation manner of adjusting the threshold of the number of passable messages of the user server is as follows: set the IP address of the server node of the user server to be modified, the corresponding Java Management Extensions (JMX) port number and the corresponding threshold of the number of passable messages into the JMX MBean client tool, and then operate the JMX MBean client tool while the request message processing system runs, thereby completing the modification.
In practical applications, there may be multiple ways to adjust the threshold of the number of the messages that can pass through the user server, and the embodiment of the present invention does not limit the method.
In a second aspect, each user server may monitor its performance parameters in real time, such as: the performance parameters of the CPU or the size of the memory, etc., which may reflect the performance of the user server. And when the performance parameter of any user server is lower than a preset parameter threshold value, sending alarm information to prompt maintenance of the user server so as to ensure the performance of the user server. The maintenance mode can be realized in various modes, and the embodiment of the invention is described by taking one of the following as an example: and expanding the capacity of the user server.
When the performance parameter of any user server is lower than the preset parameter threshold, a method of expanding the capacity of the user server can be adopted to ensure that the performance of the user server meets the system processing requirement, namely, the number of the user servers in the request message processing system is increased.
The method for detecting the performance parameters of the server can be that a program is preset in the corresponding server, and the performance parameters of the server are detected in real time through a running program. In practical application, the implementation manners of detecting the server performance parameter and adjusting the threshold of the number of the messages that can pass through the first user server may be various, and the embodiment of the present invention does not limit the implementation manners.
It should be noted that the sum of the thresholds of the number of messages that can be passed by all the user servers in the request message processing system is smaller than the threshold of the number of messages that can be carried by any chat room server. The threshold of the number of bearable messages of a chat room server refers to the maximum total number of messages that the chat room server can process; this maximum is a critical value, and once the total number of messages sent to the chat room server exceeds it, the performance of the chat room server may be greatly reduced or the server may even crash.
For example, assuming that there are two user servers and one chat room server in the request message processing system, and the threshold of the number of bearable messages of the chat room server is 100, the sum of the thresholds of the number of passable messages of the two user servers in the system needs to be less than 100. The chat room server may crash as soon as the total number of messages sent to it exceeds 100, for example, when the total number of messages sent is 101.
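The constraint that the summed user-server pass thresholds stay below every chat room server's bearable threshold can be checked with a one-line validation (a sketch only; the function name is hypothetical):

```python
def thresholds_are_safe(user_pass_thresholds, chat_bearable_thresholds):
    """True when the sum of all user-server pass thresholds is strictly
    smaller than the bearable threshold of every chat room server."""
    total = sum(user_pass_thresholds)
    return all(total < bearable for bearable in chat_bearable_thresholds)
```

With the example above, two user servers whose pass thresholds sum to 100 fail the check against a chat room server whose bearable threshold is 100, since the sum must be strictly smaller.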
It should be noted that, the order of the steps of the request message processing method provided in the embodiment of the present invention may be appropriately adjusted, and the steps may also be increased or decreased according to the situation, for example, step 403 and step 404 are parallel steps, and step 404 may not be executed when step 403 is executed, and any method that can be easily considered by those skilled in the art within the technical scope disclosed in the present invention should be included in the protection scope of the present invention, and therefore, no further description is given.
In summary, in the request message processing method provided in the embodiment of the present invention, the load balancing server distributes the first request message to the at least one user server, and the first request message distributed is pre-screened by the threshold value of the number of messages that can pass through the first user server, so that the number of the second request messages that are finally distributed to the at least one chat room server is reduced to a certain extent, and when the number of the request messages is too large, the impact on the performance of the chat room server is reduced, and the crash of the chat room server is avoided.
Fig. 5 is a schematic structural diagram of a request message processing apparatus according to an embodiment of the present invention, where the request message processing apparatus is applied to a load balancing server of the request message processing system shown in fig. 1, and the request message processing system includes: the system comprises a client, a load balancing server, at least one user server and at least one chat room server. As shown in fig. 5, the request message processing device 50 may include:
a first receiving module 501, configured to receive n first request messages sent by multiple clients in a first processing cycle, where n is an integer greater than or equal to 2.
A first distribution module 502 for distributing the n first request messages to at least one user server.
A second receiving module 503, configured to receive m second request messages sent by the at least one user server in a second processing cycle, where the m second request messages are messages that the at least one user server allows to pass based on the threshold of the number of passable messages in the third processing cycle.
A second distributing module 504, configured to distribute, according to the chat room identity ID carried in the second request message, the second request messages carrying the same chat room ID in the m second request messages to the same chat room server.
In summary, the request message processing apparatus provided in the embodiment of the present invention distributes the first request message to the at least one user server through the first distribution module, and pre-filters the distributed first request message through the threshold of the number of messages that can pass through the at least one user server, so as to reduce the number of second request messages that are finally distributed to the at least one chat room server to a certain extent, and when the number of request messages is too large, reduce the influence on the performance of the chat room server, and avoid the crash of the chat room server.
The first distribution module 502 is specifically configured to:
and taking the user ID carried in each first request message as a key of a target load balancing algorithm to obtain n values.
And distributing the n first request messages to the corresponding user server in the at least one user server according to the n values.
The second distribution module 504 is specifically configured to:
and taking the chat room ID carried in each second request message as a key of a target load balancing algorithm to obtain m values.
And distributing the m second request messages carrying the same chat room ID to the same chat room server according to the m values.
Optionally, the sum of the threshold number of messages that can be passed by all the user servers in the request message processing system is less than the threshold number of messages that can be carried by any chat room server.
In summary, the request message processing apparatus provided in the embodiment of the present invention distributes the first request message to the at least one user server through the first distribution module, and pre-filters the distributed first request message through the threshold of the number of messages that can pass through the at least one user server, so as to reduce the number of second request messages that are finally distributed to the at least one chat room server to a certain extent, and when the number of request messages is too large, reduce the influence on the performance of the chat room server, and avoid the crash of the chat room server.
Fig. 6 is a schematic structural diagram of another request message processing apparatus according to an embodiment of the present invention, where the request message processing apparatus is applied to a first user server of the request message processing system shown in fig. 1, and the request message processing system includes: the system comprises a client, a load balancing server, at least one user server and at least one chat room server, wherein the first user server is any one of the at least one user server. As shown in fig. 6, the request message processing device 60 may include:
a receiving module 601, configured to receive p first request messages sent by the load balancing server in a third processing cycle, where the p first request messages are determined by n first request messages sent by multiple clients and received by the load balancing server in the first processing cycle, p is smaller than or equal to n, and n is an integer greater than or equal to 2.
A processing module 602, configured to discard q first request messages according to a preset message discarding policy when the total number p of the first request messages is greater than a threshold of the number of the passing messages in the third processing period, where q is a difference between p and the threshold of the number of the passing messages, and use the remaining request messages as second request messages.
The processing module 602 is further configured to use p first request messages as second request messages when the total number p of the first request messages is not greater than the threshold value of the number of the passing messages.
A sending module 603, configured to send the second request message to the load balancing server.
In summary, the request message processing apparatus provided in the embodiment of the present invention performs pre-screening on the distributed first request message in the first user server through the processing module, so as to reduce the number of second request messages that are finally distributed to at least one chat room server to a certain extent, reduce the influence on the performance of the chat room server when the number of request messages is too large, and avoid the crash of the chat room server.
As shown in fig. 7, the request message processing device 60 may further include:
the adjusting module 604 is configured to adjust the threshold of the number of messages that can pass through the third processing period when the performance parameter of any chat room server is lower than the preset parameter threshold.
As shown in fig. 8, the request message processing device 60 may further include:
A monitoring module 605, configured to monitor a performance parameter of the first user server.
A warning module 606, configured to send warning information prompting capacity expansion when the performance parameter of the first user server is lower than a preset parameter threshold.
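The behavior of monitoring module 605 and warning module 606 amounts to a threshold comparison followed by an alert. A hedged sketch, in which the source of the performance parameter and the alert sink (`send_alert`) are assumptions for illustration:

```python
def check_and_alert(performance_parameter, parameter_threshold, send_alert):
    """Compare the first user server's performance parameter against a
    preset parameter threshold; when it falls below the threshold, emit
    warning information prompting capacity expansion."""
    if performance_parameter < parameter_threshold:
        send_alert("user server performance low: capacity expansion advised")
        return True
    return False

alerts = []
check_and_alert(0.42, 0.60, alerts.append)  # below threshold -> warning sent
check_and_alert(0.85, 0.60, alerts.append)  # healthy -> no warning
print(len(alerts))  # 1
```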
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A request message processing method is applied to a load balancing server of a request message processing system, and the request message processing system comprises: the system comprises a client, the load balancing server, at least one user server and at least one chat room server; the method comprises the following steps:
receiving n first request messages sent by a plurality of clients in a first processing period, wherein n is an integer greater than or equal to 2;
distributing the n first request messages to the at least one user server;
receiving m second request messages sent by the at least one user server in a second processing period, where the m second request messages are the messages determined by the at least one user server, based on a message number threshold in a third processing period, to be allowed to pass, and the third processing period is a period in which the user server processes the request messages sent by the load balancing server;
and distributing, according to the chat room identity (ID) carried in the second request messages, the second request messages carrying the same chat room ID among the m second request messages to the same chat room server.
2. The method of claim 1, wherein said distributing said n first request messages to said at least one subscriber server comprises:
taking the user ID carried in each first request message as a key of a target load balancing algorithm to obtain n values;
and distributing the n first request messages to corresponding user servers in the at least one user server according to the n values.
3. The method of claim 1,
the distributing, according to the chat room identity (ID) carried in the second request messages, the second request messages carrying the same chat room ID among the m second request messages to the same chat room server includes:
taking the chat room ID carried in each second request message as a key of a target load balancing algorithm to obtain m values;
and distributing the m second request messages carrying the same chat room ID to the same chat room server according to the m values.
4. The method according to any one of claims 1 to 3,
the sum of the pass-through message number thresholds of all the user servers in the request message processing system is less than the threshold on the number of messages that any one chat room server can carry.
5. A request message processing method, applied to a first user server of a request message processing system, the request message processing system comprising: the system comprises a client, a load balancing server, at least one user server and at least one chat room server, wherein the first user server is any one of the at least one user server; the method comprises the following steps:
receiving p first request messages sent by the load balancing server in a third processing period, where the p first request messages are determined from n first request messages sent by multiple clients and received by the load balancing server in a first processing period, p is less than or equal to n, n is an integer greater than or equal to 2, and the third processing period is a period in which the user server processes the request messages sent by the load balancing server;
when the total number p of the first request messages is greater than the threshold on the number of messages allowed to pass in the third processing period, discarding q first request messages according to a preset message discarding policy, where q is the difference between p and that threshold, and taking the remaining request messages as second request messages;
when the total number p of the first request messages is not greater than that threshold, taking the p first request messages as second request messages;
and sending the second request message to the load balancing server.
6. The method of claim 5, further comprising:
when the performance parameter of any chat room server is lower than a preset parameter threshold, adjusting the threshold on the number of messages allowed to pass in the third processing period.
7. The method of claim 5, further comprising:
monitoring a performance parameter of the first user server;
and sending alarm information for prompting capacity expansion when the performance parameter of the first user server is lower than a preset parameter threshold value.
8. A request message processing apparatus, applied to a load balancing server of a request message processing system, the request message processing system comprising: the system comprises a client, the load balancing server, at least one user server and at least one chat room server; the device comprises:
a first receiving module, configured to receive n first request messages sent by multiple clients in a first processing cycle, where n is an integer greater than or equal to 2;
a first distribution module, configured to distribute the n first request messages to the at least one user server;
a second receiving module, configured to receive m second request messages sent by the at least one user server in a second processing period, where the m second request messages are the messages determined by the at least one user server, based on a message number threshold in a third processing period, to be allowed to pass, and the third processing period is a period in which the user server processes the request messages sent by the load balancing server;
and a second distribution module, configured to distribute, according to the chat room identity (ID) carried in the second request messages, the second request messages carrying the same chat room ID among the m second request messages to the same chat room server.
9. The apparatus of claim 8, wherein the first distribution module is specifically configured to:
taking the user ID carried in each first request message as a key of a target load balancing algorithm to obtain n values;
and distributing the n first request messages to corresponding user servers in the at least one user server according to the n values.
10. A request message processing apparatus, applied to a first user server of a request message processing system, the request message processing system comprising: the system comprises a client, a load balancing server, at least one user server and at least one chat room server, wherein the first user server is any one of the at least one user server; the device comprises:
a receiving module, configured to receive p first request messages sent by the load balancing server in a third processing period, where the p first request messages are determined from n first request messages sent by multiple clients and received by the load balancing server in a first processing period, p is less than or equal to n, n is an integer greater than or equal to 2, and the third processing period is a period in which the user server processes the request messages sent by the load balancing server;
a processing module, configured to, when the total number p of the first request messages is greater than the threshold on the number of messages allowed to pass in the third processing period, discard q first request messages according to a preset message discarding policy, where q is the difference between p and that threshold, and take the remaining request messages as second request messages;
the processing module is further configured to take the p first request messages as second request messages when the total number p of the first request messages is not greater than that threshold;
and the sending module is used for sending the second request message to the load balancing server.
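The key-based distribution recited in claims 2, 3, and 9 (feeding the user ID or chat room ID into a target load balancing algorithm to select a server) can be sketched with a simple modular hash. The hash function and names below are illustrative assumptions, not the claimed target load balancing algorithm; any stable key-to-value mapping with the same property would do:

```python
import hashlib

def pick_server(key, servers):
    """Map a key (a user ID or a chat room ID) to one of the servers.

    The same key always hashes to the same value, so all second
    request messages carrying the same chat room ID are distributed
    to the same chat room server, as claim 3 requires.
    """
    # Stable digest of the key (Python's built-in hash() is salted per process).
    value = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return servers[value % len(servers)]

chat_servers = ["chat-0", "chat-1", "chat-2"]
# Two messages for the same chat room land on the same server:
assert pick_server("room-42", chat_servers) == pick_server("room-42", chat_servers)
```

The same routine applied with user IDs as keys yields the first distribution module's behavior of claims 2 and 9.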
CN201611118173.9A 2016-12-07 2016-12-07 Request message processing method and device Active CN108173897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611118173.9A CN108173897B (en) 2016-12-07 2016-12-07 Request message processing method and device


Publications (2)

Publication Number Publication Date
CN108173897A CN108173897A (en) 2018-06-15
CN108173897B true CN108173897B (en) 2020-09-08

Family

ID=62526322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611118173.9A Active CN108173897B (en) 2016-12-07 2016-12-07 Request message processing method and device

Country Status (1)

Country Link
CN (1) CN108173897B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101534261A (en) * 2009-04-10 2009-09-16 阿里巴巴集团控股有限公司 A method, device and system of recognizing spam information
CN101938508A (en) * 2009-07-01 2011-01-05 中国电信股份有限公司 Method and system for shortening time delay in peer-to-peer network streaming media live broadcast system
CN102333040A (en) * 2011-10-28 2012-01-25 中国科学院计算技术研究所 Method and system for controlling instant congestion of server
CN102752669A (en) * 2011-04-19 2012-10-24 中国电信股份有限公司 Transfer processing method and system for multi-channel real-time streaming media file and receiving device
CN104980472A (en) * 2014-04-10 2015-10-14 腾讯科技(深圳)有限公司 Network traffic control method and device
CN105868247A (en) * 2015-12-15 2016-08-17 乐视网信息技术(北京)股份有限公司 Information display method and apparatus
CN105916057A (en) * 2016-04-18 2016-08-31 乐视控股(北京)有限公司 Video barrage display method and device
CN105959392A (en) * 2016-06-14 2016-09-21 乐视控股(北京)有限公司 Page view control method and device
CN106161219A (en) * 2016-09-29 2016-11-23 广州华多网络科技有限公司 Message treatment method and device



Similar Documents

Publication Publication Date Title
US10171363B2 (en) Traffic control method and apparatus
US20190373052A1 (en) Aggregation of scalable network flow events
CN107347198B (en) Speed limiting method, speed limiting control node and speed limiting equipment
US11671402B2 (en) Service resource scheduling method and apparatus
CN108173812B (en) Method, device, storage medium and equipment for preventing network attack
CN107592284B (en) Device and method for preventing DoS/DDoS attack
RU2666289C1 (en) System and method for access request limits
CN108696364B (en) Request message processing method, chat room message server and chat room system
US10505976B2 (en) Real-time policy filtering of denial of service (DoS) internet protocol (IP) attacks and malicious traffic
CN107426241B (en) Network security protection method and device
WO2023050901A1 (en) Load balancing method and apparatus, device, computer storage medium and program
CN109672711B (en) Reverse proxy server Nginx-based http request processing method and system
US10476746B2 (en) Network management method, device, and system
CN105162823A (en) Virtual machine management method and device
CN110247893B (en) Data transmission method and SDN controller
CN112671813B (en) Server determination method, device, equipment and storage medium
CN111787362A (en) Message processing method and device
CN106790610B (en) Cloud system message distribution method, device and system
CN108173897B (en) Request message processing method and device
CN105634932B (en) Message pushing method, device, system and computer readable storage medium
US20190036793A1 (en) Network service implementation method, service controller, and communications system
US9967163B2 (en) Message system for avoiding processing-performance decline
Garcia et al. An intrusion-tolerant firewall design for protecting SIEM systems
CN111836020B (en) Code stream transmission method and device in monitoring system and storage medium
CN109462639B (en) Port expansion equipment management method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant