WO2024032094A1 - Instant messaging system and method - Google Patents

Instant messaging system and method

Info

Publication number
WO2024032094A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
client
servers
long link
data
Prior art date
Application number
PCT/CN2023/096600
Other languages
English (en)
French (fr)
Inventor
章维
李伦
蒋永鑫
Original Assignee
深圳市星卡软件技术开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市星卡软件技术开发有限公司
Publication of WO2024032094A1 publication Critical patent/WO2024032094A1/zh


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/14: Session management
    • H04L 67/141: Setup of application sessions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]

Definitions

  • The present disclosure relates to the field of communication technology, and in particular to an instant messaging system and method.
  • In the related art, under high-concurrency conditions, load balancing is generally performed through an nginx proxy server to improve performance: data requests first reach the nginx proxy server, which then forwards them to the corresponding back-end server, so that each server processes data according to its weight and shares the load of the others. This approach requires forwarding through the nginx proxy server, the configuration cost of business optimization is high, and the effect on improving server performance is very limited.
  • The purpose of the present disclosure is to provide an instant messaging system and method, so as to reduce the configuration cost of business optimization and improve server performance under high-concurrency conditions.
  • An instant messaging system includes multiple first servers and multiple second servers. Each first server establishes a long link with each second server, and each first server also establishes a long link with at least one client; different first servers are connected to different clients. For each first server, the current first server is configured to, upon listening to a data transmission request from a connected first client, send the data transmission request to a target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed, and the target second server is determined by polling. The target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, based on the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
  • A long link between each first server and at least one client is established as follows: for each client, the current client is configured to send a long-link registration request to a designated first server, and the designated first server forwards the long-link registration request to a designated second server, where the long-link registration request carries the identification information of the current client and the designated second server is determined by polling. The designated second server is configured to bind the identification information of the current client to the current client and return a binding-success message to the current client through the designated first server, thereby establishing a long link between the current client and the designated first server.
  • Each first server is provided with multiple client connection points. The designated first server is determined by counting, for each first server, the number of idle connection points not yet occupied by clients and selecting the first server with the largest number of idle connection points as the designated first server.
  • The system further includes a third server. The third server is configured to receive a first long-link registration request from each first server, register each first server, and save the communication address of the first server carried in each first long-link registration request; the third server is also configured to receive a second long-link registration request from each second server, register each second server, and send the communication address of every first server to each second server, so as to establish a long link between each first server and each second server.
  • The third server is further configured to, upon receiving a new long-link registration request from a new first server, register the new first server, save the communication address of the new first server carried in the new long-link registration request, and send that communication address to each second server, so as to establish a long link between the new first server and each second server.
  • The third server is further configured to check, at intervals of a first preset time, how many times each first server has sent preset data within that first preset time, where each first server sends the preset data to the third server at intervals of a second preset time and the second preset time is shorter than the first preset time. If the number of transmissions corresponding to a target first server is zero, the target first server is determined to be offline.
  • The third server is further configured to, after determining that the target first server is offline, deregister the target first server and send a message to each second server instructing it to cancel its long link with the target first server.
  • The third server is also configured to check the current running environment after its process is started.
  • Each first server runs multiple sub-processes, and each client establishes its long link with a first server through a corresponding sub-process.
  • An instant messaging method includes: for each first server, when the current first server listens to a data transmission request from a connected first client, it sends the data transmission request to a target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling; the target second server performs logical processing on the data to be transmitted to obtain return data and, based on the identification information of the client to be accessed, sends the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
  • An instant messaging system and method provided by embodiments of the present disclosure include multiple first servers and multiple second servers. Each first server establishes a long link with each second server, and each first server also establishes a long link with at least one client; different first servers are connected to different clients. For each first server, the current first server is configured to, upon listening to a data transmission request from a connected first client, send the data transmission request to a target second server, where the request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling. The target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, based on the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
  • In this system, each first server can connect directly to at least one client; under high concurrency, servers only need to be added as required and no forwarding through an nginx proxy server is needed, which reduces the configuration cost of business optimization. Moreover, the first servers and the clients, as well as the first servers and the second servers, are all connected through long links, which can effectively improve server performance.
  • Figure 1 is a schematic structural diagram of an instant messaging system provided by an embodiment of the present disclosure.
  • Figure 2 is a schematic structural diagram of another instant messaging system provided by an embodiment of the present disclosure.
  • Figure 3 is a flow chart of an instant messaging method provided by an embodiment of the present disclosure.
  • Figure 4 is a flow chart of another instant messaging method provided by an embodiment of the present disclosure.
  • IM refers to Instant Messaging; an IM service is a service that provides instant messaging.
  • Common application scenarios include text chat, voice message sending, file transfer, etc.
  • High performance and scalability have always been important criteria for evaluating a server. As the number of customers and the volume of business grow, the memory pressure on the server providing instant messaging keeps increasing and its CPU load reaches its limit, resulting in slow and inefficient communication. Under high concurrency, that is, in scenarios where many requests are received at the same point in time, ordinary business optimization and additional server capacity are extremely costly and yield very limited improvement.
  • On this basis, embodiments of the present disclosure provide an instant messaging system and method. The technology can be applied in instant messaging scenarios, and especially in instant messaging scenarios under high-concurrency conditions.
  • To facilitate understanding, the instant messaging system disclosed in an embodiment of the present disclosure is first introduced in detail. As shown in Figure 1, it includes multiple first servers 10 and multiple second servers 11. Each first server 10 establishes a long link with each second server 11, each first server 10 also establishes a long link with at least one client 12, and different first servers 10 are connected to different clients 12.
  • The above long link can also be called a long connection, meaning that multiple data packets can be sent continuously over one connection; while the connection is maintained, if no data packets are being sent, both parties of the long link usually send link detection packets. In actual implementation, each client 12 usually connects to only one first server 10, while each first server 10 can be connected to one or more clients 12; long links are established between the clients 12 and the first servers 10, and between each first server 10 and each second server 11.
  • For each first server 10, the current first server is configured to, upon listening to a data transmission request from a connected first client, send the data transmission request to a target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling. The target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, based on the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
  • The above current first server can be any one of the multiple first servers 10, and each first server 10 can receive data transmission requests from its connected clients 12; the above first client can be any one of the at least one client connected to the current first server. Specifically, the current first server can listen on the port connected to the clients and, when it hears a data transmission request sent by the first client, forward that request to the target second server, which is usually determined from the multiple second servers by polling. For example, with three second servers numbered No. 1, No. 2 and No. 3, if the last second server visited was No. 2, then according to the polling scheme the target second server visited this time is, in sequence, No. 3.
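  • For illustration only, the following minimal Python sketch shows one way the polling (round-robin) selection of the target second server could be implemented; the server names and the RoundRobin helper are assumptions made for the example and are not part of the disclosure.

```python
from itertools import count

class RoundRobin:
    """Cycles through a fixed list of second servers."""
    def __init__(self, servers):
        self.servers = list(servers)
        self._counter = count()

    def next_server(self):
        # Each call returns the next server in sequence, wrapping around.
        return self.servers[next(self._counter) % len(self.servers)]

second_servers = ["second-server-1", "second-server-2", "second-server-3"]
rr = RoundRobin(second_servers)
print(rr.next_server())  # second-server-1
print(rr.next_server())  # second-server-2
print(rr.next_server())  # second-server-3
print(rr.next_server())  # second-server-1 (wraps around)
```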
  • The target second server parses the identification information of the client to be accessed out of the received data transmission request, performs logical processing on the data to be transmitted carried in the request to obtain the return data, and sends the return data to the first server 10 connected to the client to be accessed corresponding to that identification information; that first server 10 then sends the return data to the corresponding client to be accessed, completing the data transmission between the two clients. For example, suppose the first client is client No. 7 and the identification information of the client to be accessed carried in the data transmission request it sends to its connected current first server corresponds to client No. 1. The current first server forwards the data transmission request to the target second server, the target second server performs logical processing on the data to be transmitted in the request and finds that the carried identification information is the identifier of client No. 1, and the first server connected to client No. 1 then forwards the logically processed return data to client No. 1.
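  • As a hypothetical sketch of this forwarding step, the snippet below shows a second server looking up, by the carried identification information, which first server holds the long link of the client to be accessed and handing it the processed result; the mapping table, the function names and the trivial "logical processing" are all assumptions, not the disclosed implementation.

```python
# client id -> first server currently holding that client's long link
client_to_first_server = {"client-1": "first-server-A", "client-7": "first-server-B"}

def process(payload):
    # Placeholder for the "logical processing" of the data to be transmitted.
    return payload.upper()

def send_to_first_server(server, client_id, data):
    # Stands in for pushing the return data over the long link to the first
    # server, which then delivers it to the connected client.
    print(f"{server} -> {client_id}: {data}")

def handle_transmission(request):
    target_client = request["to_client"]          # identification information
    return_data = process(request["payload"])
    first_server = client_to_first_server[target_client]
    send_to_first_server(first_server, target_client, return_data)

# Client No. 7 sends data addressed to client No. 1.
handle_transmission({"to_client": "client-1", "payload": "hello from client 7"})
```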
  • The above instant messaging system includes multiple first servers and multiple second servers; each first server establishes a long link with each second server, each first server also establishes a long link with at least one client, and different first servers are connected to different clients. For each first server, the current first server sends the data transmission request to the target second server when it listens to such a request from a connected first client. In this system, each first server can connect directly to at least one client; under high concurrency, servers only need to be added as required and no forwarding through an nginx proxy server is needed, which reduces the configuration cost of business optimization. Moreover, the first servers and the clients, as well as the first servers and the second servers, are all connected through long links, which can effectively improve server performance.
  • A long link between each first server and at least one client is established as follows: for each client, the current client is configured to send a long-link registration request to a designated first server, and the designated first server forwards the long-link registration request to a designated second server, where the long-link registration request carries the identification information of the current client and the designated second server is determined by polling. The designated second server is configured to bind the identification information of the current client to the current client and return a binding-success message to the current client through the designated first server, thereby establishing a long link between the current client and the designated first server.
  • In actual implementation, each client can send a long-link registration request to one of the multiple first servers; that first server can be selected at random or according to a preset rule. For the current client, the client sends a long-link registration request carrying its unique identification information to the designated first server, which forwards the received request to a designated second server, usually also determined from the multiple second servers by polling. The designated second server performs logical processing on the received long-link registration request, binds the identification information to the current client, stores the binding in memory, and then returns the binding-success result to the current client through the designated first server, so that a long link is established between the current client and the designated first server.
  • Each first server is provided with multiple client connection points. The designated first server is determined by counting, for each first server, the number of idle connection points not yet occupied by clients and selecting the first server with the largest number of idle connection points as the designated first server.
  • As a preferred approach, considering that each first server may be connected to more or fewer clients, in order to fully utilize the performance of each first server and use them in a balanced way, the number of idle connection points of each first server (those not occupied by connected clients) can be calculated from its number of occupied connection points, and the current client then automatically sends its long-link registration request to the first server with the largest number of idle connection points.
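  • The selection of the designated first server by idle connection points could look like the sketch below; the capacities and occupancy figures are invented for the example and are not taken from the disclosure.

```python
def pick_first_server(capacity, connected):
    """capacity: server -> total connection points; connected: server -> points in use."""
    idle = {srv: capacity[srv] - connected.get(srv, 0) for srv in capacity}
    # Register with the first server that has the most idle connection points.
    return max(idle, key=idle.get)

capacity = {"first-server-A": 10000, "first-server-B": 10000, "first-server-C": 10000}
connected = {"first-server-A": 9500, "first-server-B": 2000, "first-server-C": 7300}
print(pick_first_server(capacity, connected))  # first-server-B
```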
  • The system further includes a third server. The third server is configured to receive a first long-link registration request from each first server, register each first server, and save the communication address of the first server carried in each first long-link registration request; the third server is also configured to receive a second long-link registration request from each second server, register each second server, and send the communication address of every first server to each second server, so as to establish a long link between each first server and each second server.
  • The third server is mainly responsible for registering each first server and each second server and for arranging the long links between all first servers and all second servers. The instant messaging system can be compared to an express delivery company, which needs three parts: an administrative center, a dispatch center and post stations. A post station is mainly responsible for accepting and delivering parcels; the dispatch center is mainly responsible for receiving parcels sent from a post station, performing a series of processing steps, and then sending them on to a post station, which delivers them to the recipient. The administrative center is only responsible for management: there are multiple post stations and dispatch centers, and once arrangements are made they communicate with each other directly. If a new post station registers, the administrative center notifies all dispatch centers; likewise, if a post station ceases to operate, the administrative center notifies all dispatch centers so that no parcels are delivered to that post station. From this example, only one administrative center is needed, while there can be many dispatch centers and post stations, and each additional one multiplies the business that can be supported. Based on this analogy, the overall architecture of this solution is divided into three main parts: the third server (corresponding to the administrative center), the first servers (corresponding to the post stations) and the second servers (corresponding to the dispatch centers).
  • Specifically, referring to the schematic structural diagram of another instant messaging system shown in Figure 2, service A (not shown) corresponds to the above third server, service B corresponds to the above first server, and service C corresponds to the above second server, where a service refers to a series of programs deployed on a server to implement a certain set of functions. Service A, service B and service C are each deployed on servers. Since service A only handles communication at service start-up, it does not require a cluster and a single instance is enough; the number of servers running service B and service C can be increased or decreased according to the business volume, generally judged by the number of user connections. Assuming a server configuration of 8 cores and 16 GB of memory, one fixed instance of service A, four instances of service B and four instances of service C can be configured; at 10,000 connections per service, the supported number of user connections is roughly 40,000. Service B and service C are then distributed evenly based on the number of CPU cores. It can be seen that each client has many routes to reach another client; for example, client 7 and client 1 can complete data communication via the path shown by the thick solid line.
  • When the long links are set up, the service A, service B and service C processes are started first. After the service B and service C processes start, they initiate long-link requests to the service A process to register themselves. After service A receives the registration of each service B, it stores the communication addresses of all service B instances in memory; after service A receives the registration of each service C, it sends the communication addresses of all service B instances in memory to each service C. After each service C receives all the service B communication addresses, it connects to each service B, so that every service B and every service C are joined by a long link.
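  • A minimal sketch of this start-up registration is given below, assuming an in-memory registry object for service A and plain callbacks standing in for the long links; none of these names come from the disclosure.

```python
class ServiceA:
    """In-memory registry: stores service B addresses and pushes them to service C."""
    def __init__(self):
        self.service_b_addresses = []     # kept in memory by service A
        self.service_c_notifiers = []     # one notifier per registered service C

    def register_b(self, address):
        self.service_b_addresses.append(address)
        # A newly registered B is announced to every already-registered C.
        for notify in self.service_c_notifiers:
            notify([address])

    def register_c(self, notify):
        self.service_c_notifiers.append(notify)
        # A newly registered C receives the full list of known B addresses.
        notify(list(self.service_b_addresses))

def make_service_c(name):
    def on_b_addresses(addresses):
        for addr in addresses:
            # Stands in for opening a long link from this service C to service B.
            print(f"{name} opens long link to {addr}")
    return on_b_addresses

a = ServiceA()
a.register_b("service-B-1:9000")
a.register_c(make_service_c("service-C-1"))
a.register_b("service-B-2:9000")   # a new service B is propagated to every service C
```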
  • The third server is further configured to, upon receiving a new long-link registration request from a new first server, register the new first server, save the communication address of the new first server carried in the new long-link registration request, and send that communication address to each second server, so as to establish a long link between the new first server and each second server.
  • In actual implementation, if service A receives a new long-link registration request sent by a new service B, it can register the new service B and save its communication address. At the same time, service A notifies all service C instances and sends them the new communication address; after each service C receives the new address, it establishes a long link with the new service B.
  • The third server is further configured to check, at intervals of a first preset time, how many times each first server has sent preset data within that first preset time, where each first server sends the preset data to the third server at intervals of a second preset time and the second preset time is shorter than the first preset time. If the number of transmissions corresponding to a target first server is zero, the target first server is determined to be offline.
  • Both the first preset time and the second preset time can be set according to actual needs, and the first preset time is usually longer than the second preset time. In actual implementation, scheduled tasks can be set up separately on the first servers and the third server: each first server executes its task once every second preset time, for example every 15 seconds, sending a piece of preset data with fixed content ("ping") to the third server; the third server executes its task once every first preset time, for example every 50 seconds, checking how many ping requests each first server has sent within those 50 seconds. If the request count of some target first server is 0, that target first server can be considered offline; the target first server can be any one of the multiple first servers, and under normal circumstances the ping count of each first server will be greater than 0, leaving a certain fault-tolerance margin.
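  • The heartbeat bookkeeping could be sketched as follows; the 15-second and 50-second intervals come from the example above, while the counter structure and function names are assumptions for illustration.

```python
from collections import Counter

PING_INTERVAL = 15    # seconds: the second preset time, used by each first server
CHECK_INTERVAL = 50   # seconds: the first preset time, used by the third server

ping_counts = Counter()

def receive_ping(first_server_id):
    # Called whenever a first server reports its fixed "ping" payload.
    ping_counts[first_server_id] += 1

def check_offline(known_first_servers):
    # Run by the third server once per CHECK_INTERVAL window.
    offline = [s for s in known_first_servers if ping_counts[s] == 0]
    ping_counts.clear()   # start a fresh counting window
    return offline

# Simulated window: server 1 pings, server 2 stays silent and is flagged offline.
receive_ping("first-server-1")
receive_ping("first-server-1")
print(check_offline(["first-server-1", "first-server-2"]))  # ['first-server-2']
```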
  • The third server is further configured to, after determining that the target first server is offline, deregister the target first server and send a message to each second server instructing it to cancel its long link with the target first server.
  • If the target first server goes offline, the third server deregisters it and notifies all second servers; after receiving the notification that the target first server has been deregistered, each second server no longer connects to the offline target first server.
  • The third server is also configured to check the current running environment after its process is started.
  • For example, referring to the flow chart of an instant messaging method shown in Figure 3, service A corresponds to the third server. After the service A process starts, it checks the running environment and creates signal listeners; after starting the service B and service C processes, it becomes a daemon process and monitors the status of service B and service C.
  • Service A is mainly responsible for registering service B and service C and arranging long links between all service B and all service C instances; service B is mainly responsible for listening on its port, collecting client long-link registrations and data requests, and also for receiving the results returned by service C and returning the data to the client.
  • Service C is mainly responsible for listening on its port, accepting the request data sent by service B, performing logical processing, and sending the resulting return data to service B.
  • The client is mainly used to request a long link from service B and check the registration result: if registration succeeds, it sends request data to service B's listening port and listens for the returned result; if registration fails, it repeats the step of requesting a long link from service B.
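  • The client-side behaviour just described (retry registration until it succeeds, then send data and wait for the result) might be sketched as below; the stub functions simulate the network round trips and are not part of the disclosure.

```python
_attempts = {"count": 0}

def register_with_service_b(client_id):
    # Stand-in for the registration round trip through service B and service C;
    # here it deliberately fails twice and succeeds on the third try.
    _attempts["count"] += 1
    return _attempts["count"] >= 3

def send_request(client_id, payload):
    # Stands in for writing to service B's listening port over the long link.
    print(f"{client_id} sends: {payload}")

def wait_for_result(client_id):
    # Stands in for listening on the long link for the returned result.
    return f"result delivered to {client_id}"

def client_loop(client_id, payload):
    while not register_with_service_b(client_id):
        pass                     # registration failed: request the long link again
    send_request(client_id, payload)
    return wait_for_result(client_id)

print(client_loop("client-7", {"to_client": "client-1", "data": "hi"}))
```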
  • Each first server runs multiple sub-processes, and each client establishes its long link with a first server through a corresponding sub-process.
  • When service B is initialized, it usually creates multiple service B sub-processes; as long as there is enough memory, more sub-processes can be created, and each sub-process is responsible for one client connection. One service B instance can create tens of thousands of sub-processes, which means each service B can support tens of thousands of client connections. Since service B is only responsible for the client long links and data transmission and performs no logical processing, its performance is very efficient.
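  • As a rough illustration of the one-sub-process-per-connection idea, the sketch below uses Python's multiprocessing module as a stand-in; the patent does not prescribe any particular language or process API.

```python
from multiprocessing import Process

def handle_client(client_id):
    # Each sub-process only relays data over its client's long link;
    # business logic is left to service C.
    print(f"sub-process serving {client_id}")

if __name__ == "__main__":
    workers = [Process(target=handle_client, args=(f"client-{i}",)) for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```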
  • In the related art, performance improvement is mainly achieved through nginx load balancing: a request comes in, reaches the nginx server, and is forwarded by nginx to the corresponding back-end server, so that each server processes data according to its weight and shares the load of the others. The present solution instead improves server performance by increasing the number of connections: each time a server is added, a corresponding number of connections is created on it, and the servers are joined by long links, thereby improving server performance. This solution dispenses with nginx forwarding, so communication is faster, and because every service is joined by long links, the amount of data that can be processed increases many times over.
  • The instant messaging system provided in this embodiment adopts a distributed IM service architecture, which can support high concurrency, improve server performance severalfold and improve service stability. Compared with other IM servers it is more scalable and can easily be deployed in a distributed manner: several inexpensive servers can be combined into a server cluster, such as the multiple first servers and multiple second servers, which greatly improves service performance while reducing cost. If one of the servers goes down, the other servers continue to work, effectively reducing the impact of the outage; stability is improved while high performance is ensured.
  • An embodiment of the present disclosure provides another instant messaging method, as shown in Figure 4. The method includes:
  • Step S402: for each first server, when the current first server listens to a data transmission request from a connected first client, it sends the data transmission request to the target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed, and the target second server is determined by polling;
  • Step S404: the target second server performs logical processing on the data to be transmitted to obtain return data and, based on the identification information of the client to be accessed, sends the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
  • In the above instant messaging method, for each first server, the current first server sends the data transmission request to the target second server when it listens to such a request from a connected first client, where the request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling; the target second server performs logical processing on the data to be transmitted to obtain return data and, based on the identification information of the client to be accessed, sends the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed. In this way, each first server can connect directly to at least one client; under high concurrency, servers only need to be added as required and no forwarding through an nginx proxy server is needed, which reduces the configuration cost of business optimization, and the long links between the first servers and the clients and between the first servers and the second servers effectively improve server performance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present disclosure provides an instant messaging system and method. A current first server is configured to send a data transmission request, listened to from a first client, to a target second server; the target second server is configured to perform logical processing on the data to be transmitted carried in the request to obtain return data and, according to the identification information of the client to be accessed carried in the request, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed. In this system, each first server can connect directly to at least one client; under high concurrency, servers only need to be added as required and no forwarding through an nginx proxy server is needed, which reduces the configuration cost of business optimization; moreover, the first servers and the clients, as well as the first servers and the second servers, are all connected through long links, which effectively improves server performance.

Description

Instant Messaging System and Method
Cross-Reference to Related Applications
The present disclosure claims priority to Chinese patent application No. 202210964562.2, entitled "Instant Messaging System and Method" and filed with the China National Intellectual Property Administration on August 12, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of communication technology, and in particular to an instant messaging system and method.
Background
Common application scenarios of instant messaging include text chat, voice messaging, file transfer and the like. As the number of customers and the volume of business increase, the number of requests and responses handled by the server also grows, and the pressure on servers providing instant messaging becomes greater and greater. In the related art, under high-concurrency conditions, load balancing is generally performed through an nginx proxy server to improve performance: data requests first reach the nginx proxy server, which then forwards them to the corresponding back-end server, so that each server processes data according to its weight and shares the load of the others. This approach requires forwarding through the nginx proxy server, the configuration cost of business optimization is high, and the effect on improving server performance is very limited.
Summary
The purpose of the present disclosure is to provide an instant messaging system and method, so as to reduce the configuration cost of business optimization and improve server performance under high-concurrency conditions.
An instant messaging system provided by an embodiment of the present disclosure includes multiple first servers and multiple second servers. Each first server establishes a long link with each second server, and each first server also establishes a long link with at least one client; different first servers are connected to different clients. For each first server, the current first server is configured to, upon listening to a data transmission request from a connected first client, send the data transmission request to a target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed, and the target second server is determined by polling. The target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
Further, a long link between each first server and at least one client is established as follows: for each client, the current client is configured to send a long-link registration request to a designated first server, and the designated first server forwards the long-link registration request to a designated second server, where the long-link registration request carries the identification information of the current client and the designated second server is determined by polling; the designated second server is configured to bind the identification information of the current client to the current client and return a binding-success message to the current client through the designated first server, thereby establishing a long link between the current client and the designated first server.
Further, each first server is provided with multiple client connection points, and the designated first server is determined by determining, for each first server, the number of idle connection points not occupied by connected clients and selecting the first server with the largest number of idle connection points as the designated first server.
Further, the system also includes a third server. The third server is configured to receive a first long-link registration request from each first server, register each first server, and save the communication address of the first server carried in each first long-link registration request; the third server is also configured to receive a second long-link registration request from each second server, register each second server, and send the communication address of every first server to each second server, so as to establish a long link between each first server and each second server.
Further, the third server is also configured to, upon receiving a new long-link registration request from a new first server, register the new first server, save the communication address of the new first server carried in the new long-link registration request, and send that communication address to each second server, so as to establish a long link between the new first server and each second server.
Further, the third server is also configured to check, at intervals of a first preset time, how many times each first server has sent preset data within that first preset time, where each first server sends the preset data to the third server at intervals of a second preset time and the second preset time is shorter than the first preset time; if the number of transmissions corresponding to a target first server is zero, the target first server is determined to be offline.
Further, the third server is also configured to, after determining that the target first server is offline, deregister the target first server and send a message to each second server instructing it to cancel its long link with the target first server.
Further, the third server is also configured to check the current running environment after its process is started.
Further, each first server runs multiple sub-processes, and each client establishes its long link with a first server through a corresponding sub-process.
An instant messaging method provided by an embodiment of the present disclosure includes: for each first server, when the current first server listens to a data transmission request from a connected first client, it sends the data transmission request to a target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling; the target second server performs logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, sends the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
An instant messaging system and method provided by embodiments of the present disclosure include multiple first servers and multiple second servers. Each first server establishes a long link with each second server, and each first server also establishes a long link with at least one client; different first servers are connected to different clients. For each first server, the current first server is configured to, upon listening to a data transmission request from a connected first client, send the data transmission request to a target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling; the target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed. In this system, each first server can connect directly to at least one client; under high concurrency, servers only need to be added as required and no forwarding through an nginx proxy server is needed, which reduces the configuration cost of business optimization; moreover, the first servers and the clients, as well as the first servers and the second servers, are all connected through long links, which effectively improves server performance.
Brief Description of the Drawings
To explain the specific embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings required for describing the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a schematic structural diagram of an instant messaging system provided by an embodiment of the present disclosure;
Figure 2 is a schematic structural diagram of another instant messaging system provided by an embodiment of the present disclosure;
Figure 3 is a flow chart of an instant messaging method provided by an embodiment of the present disclosure;
Figure 4 is a flow chart of another instant messaging method provided by an embodiment of the present disclosure.
Detailed Description of the Embodiments
The technical solutions of the present disclosure will be described clearly and completely below with reference to the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present disclosure.
IM refers to Instant Messaging, and an IM service is a service that provides instant messaging. Common application scenarios include text chat, voice messaging, file transfer and the like. High performance and scalability have always been important criteria for evaluating a server. As the number of customers and the volume of business grow, the memory pressure on the server providing instant messaging keeps increasing and its CPU load reaches its limit, resulting in slow and inefficient communication; under high concurrency, that is, in scenarios where many requests are received at the same point in time, ordinary business optimization and additional server capacity are extremely costly and yield very limited improvement. On this basis, embodiments of the present disclosure provide an instant messaging system and method; the technology can be applied in instant messaging scenarios, and especially in instant messaging scenarios under high-concurrency conditions.
To facilitate understanding of this embodiment, the instant messaging system disclosed in an embodiment of the present disclosure is first introduced in detail. As shown in Figure 1, it includes multiple first servers 10 and multiple second servers 11. Each first server 10 establishes a long link with each second server 11, each first server 10 also establishes a long link with at least one client 12, and different first servers 10 are connected to different clients 12.
The above long link can also be called a long connection, meaning that multiple data packets can be sent continuously over one connection; while the connection is maintained, if no data packets are being sent, both parties of the long link usually send link detection packets. In actual implementation, each client 12 usually connects to only one first server 10, while each first server 10 can be connected to one or more clients 12; long links are established between the clients 12 and the first servers 10, and between each first server 10 and each second server 11.
For each first server 10, the current first server is configured to, upon listening to a data transmission request from a connected first client, send the data transmission request to a target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling; the target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
The above current first server can be any one of the multiple first servers 10, and each first server 10 can receive data transmission requests from its connected clients 12; the above first client can be any one of the at least one client connected to the current first server. Specifically, the current first server can listen on the port connected to the clients and, when it hears a data transmission request sent by the first client, forward that request to the target second server, which is usually determined from the multiple second servers by polling. For example, with three second servers numbered No. 1, No. 2 and No. 3, if the last second server visited was No. 2, then according to the polling scheme the target second server visited this time is, in sequence, No. 3. The target second server parses the identification information of the client to be accessed out of the received data transmission request, performs logical processing on the data to be transmitted carried in the request to obtain the return data, and sends the return data to the first server 10 connected to the client to be accessed corresponding to that identification information; that first server 10 then sends the return data to the corresponding client to be accessed, completing the data transmission between the two clients. For example, suppose the first client is client No. 7 and the identification information of the client to be accessed carried in the data transmission request it sends to its connected current first server corresponds to client No. 1; the current first server forwards the data transmission request to the target second server, the target second server performs logical processing on the data to be transmitted in the request and finds that the carried identification information is the identifier of client No. 1, and the first server connected to client No. 1 then forwards the logically processed return data to client No. 1.
The above instant messaging system includes multiple first servers and multiple second servers; each first server establishes a long link with each second server, each first server also establishes a long link with at least one client, and different first servers are connected to different clients. For each first server, the current first server is configured to, upon listening to a data transmission request from a connected first client, send the data transmission request to a target second server, where the request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling; the target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed. In this system, each first server can connect directly to at least one client; under high concurrency, servers only need to be added as required and no forwarding through an nginx proxy server is needed, which reduces the configuration cost of business optimization; moreover, the first servers and the clients, as well as the first servers and the second servers, are all connected through long links, which effectively improves server performance.
Further, a long link between each first server and at least one client is established as follows: for each client, the current client is configured to send a long-link registration request to a designated first server, and the designated first server forwards the long-link registration request to a designated second server, where the long-link registration request carries the identification information of the current client and the designated second server is determined by polling; the designated second server is configured to bind the identification information of the current client to the current client and return a binding-success message to the current client through the designated first server, so as to establish a long link between the current client and the designated first server.
In actual implementation, each client can send a long-link registration request to one of the multiple first servers; that first server can be selected at random or according to a preset rule. For the current client, the client can send a long-link registration request carrying its unique identification information to the designated first server, which forwards the received request to a designated second server, usually also determined from the multiple second servers by polling; the designated second server performs logical processing on the received long-link registration request, binds the identification information to the current client, stores the binding in memory, and then returns the binding-success result to the current client through the designated first server, so that a long link is established between the current client and the designated first server.
Further, each first server is provided with multiple client connection points, and the designated first server is determined by determining, for each first server, the number of idle connection points not occupied by connected clients and selecting the first server with the largest number of idle connection points as the designated first server.
As a preferred approach, considering that each first server is usually provided with multiple client connection points and may be connected to more or fewer clients, in order to fully utilize the performance of each first server and use them in a balanced way, the number of idle connection points of each first server can be calculated from its number of occupied connection points, and the current client then automatically sends its long-link registration request to the first server with the largest number of idle connection points.
Further, the system also includes a third server. The third server is configured to receive a first long-link registration request from each first server, register each first server, and save the communication address of the first server carried in each first long-link registration request; the third server is also configured to receive a second long-link registration request from each second server, register each second server, and send the communication address of every first server to each second server, so as to establish a long link between each first server and each second server.
The third server is mainly responsible for registering each first server and each second server and for arranging the long links between all first servers and all second servers. For example, the instant messaging system can be compared to an express delivery company, which needs three parts: an administrative center, a dispatch center and post stations. A post station is mainly responsible for accepting and delivering parcels; the dispatch center is mainly responsible for receiving parcels sent from a post station, performing a series of processing steps, and then sending them on to a post station, which delivers them to the recipient. The administrative center is only responsible for management: there are multiple post stations and dispatch centers, and once arrangements are made they communicate with each other directly. If a new post station registers, the administrative center notifies all dispatch centers; likewise, if a post station ceases to operate, the administrative center notifies all dispatch centers so that no parcels are delivered to that post station.
From the above example it can be seen that only one administrative center is needed, while there can be multiple dispatch centers and post stations, and each additional one multiplies the business that can be supported. Based on this example, the overall architecture of this solution is divided into three main parts: the third server (corresponding to the administrative center), the first servers (corresponding to the post stations) and the second servers (corresponding to the dispatch centers).
Specifically, referring to the schematic structural diagram of another instant messaging system shown in Figure 2, service A (not shown) corresponds to the above third server, service B corresponds to the above first server, and service C corresponds to the above second server, where a service refers to a series of programs deployed on a server to implement a certain set of functions. Service A, service B and service C are each deployed on servers. Since service A only handles communication at service start-up, it does not require a cluster and a single instance is enough; the number of servers running service B and service C can be increased or decreased according to the business volume, generally judged by the number of user connections. Assuming a server configuration of 8 cores and 16 GB of memory, one fixed instance of service A, four instances of service B and four instances of service C can be configured; at 10,000 connections per service, the supported number of user connections is roughly 40,000. Service B and service C are then distributed evenly based on the number of CPU cores. It can be seen that each client has many routes to reach another client; client 7 and client 1 can complete data communication via the path corresponding to the thick solid line.
When the long links are set up, the service A, service B and service C processes are started first. After the service B and service C processes start, they initiate long-link requests to the service A process to register themselves. After service A receives the registration of each service B, it stores the communication addresses of all service B instances in memory; after service A receives the registration of each service C, it sends the communication addresses of all service B instances in memory to each service C. After each service C receives all the service B communication addresses, it connects to each service B, so that every service B and every service C are joined by a long link.
Further, the third server is also configured to, upon receiving a new long-link registration request from a new first server, register the new first server, save the communication address of the new first server carried in the new long-link registration request, and send that communication address to each second server, so as to establish a long link between the new first server and each second server.
In actual implementation, if service A receives a new long-link registration request sent by a new service B, it can register the new service B and save its communication address; at the same time, service A notifies all service C instances and sends them the new communication address, and after each service C receives the new address it establishes a long link with the new service B.
Further, the third server is also configured to check, at intervals of a first preset time, how many times each first server has sent preset data within that first preset time, where each first server sends the preset data to the third server at intervals of a second preset time and the second preset time is shorter than the first preset time; if the number of transmissions corresponding to a target first server is zero, the target first server is determined to be offline.
Both the first preset time and the second preset time can be set according to actual needs, and the first preset time is usually longer than the second preset time. In actual implementation, scheduled tasks can be set up separately for the first servers and the third server: each first server executes its task once every second preset time, for example every 15 seconds, sending a piece of preset data with fixed content ("ping") to the third server; the third server executes its task once every first preset time, for example every 50 seconds, checking how many ping requests each first server has sent within those 50 seconds. If the request count of some target first server is 0, that target first server can be considered offline; the target first server can be any one of the multiple first servers, and under normal circumstances the ping count of each first server will be greater than 0, leaving a certain fault-tolerance margin.
Further, the third server is also configured to, after determining that the target first server is offline, deregister the target first server and send a message to each second server instructing it to cancel its long link with the target first server.
If the target first server goes offline, the third server deregisters it and notifies all second servers; after receiving the notification that the target first server has been deregistered, each second server no longer connects to the offline target first server.
Further, the third server is also configured to check the current running environment after its process is started.
For example, referring to the flow chart of an instant messaging method shown in Figure 3, service A corresponds to the third server. After the service A process starts, it checks the running environment and creates signal listeners; after starting the service B and service C processes, it becomes a daemon process and monitors the status of service B and service C. Service A is mainly responsible for registering service B and service C and arranging the long links between all service B and all service C instances; service B is mainly responsible for listening on its port, collecting client long-link registrations and data requests, and also for receiving the results returned by service C and returning the data to the client. Service C is mainly responsible for listening on its port, accepting the request data sent by service B, performing logical processing, and sending the resulting return data to service B. The client is mainly used to request a long link from service B and check the registration result: if registration succeeds, it sends request data to service B's listening port and listens for the returned result; if registration fails, it repeats the step of requesting a long link from service B.
Further, each first server runs multiple sub-processes, and each client establishes its long link with a first server through a corresponding sub-process.
When service B is initialized, it usually creates multiple service B sub-processes; as long as there is enough memory, more sub-processes can be created, and each sub-process is responsible for one client connection. One service B instance can create tens of thousands of sub-processes, which means each service B can support tens of thousands of client connections. Since service B is only responsible for the client long links and data transmission and performs no logical processing, its performance is very efficient.
In the related art, performance improvement is mainly achieved through nginx load balancing: a request comes in, reaches the nginx server, and is forwarded by nginx to the corresponding back-end server, so that each server processes data according to its weight and shares the load of the others. The present solution instead improves server performance by increasing the number of connections: each time a server is added, a corresponding number of connections is created on it, and the servers are joined by long links, thereby improving server performance. This solution dispenses with nginx forwarding, so communication is faster, and because every service is joined by long links, the amount of data that can be processed increases many times over.
The instant messaging system provided in this embodiment adopts a distributed IM service architecture, which can support high concurrency, improve server performance severalfold and improve service stability. Compared with other IM servers it is more scalable and can easily be deployed in a distributed manner: several inexpensive servers can be combined into a server cluster, such as the multiple first servers and multiple second servers, which greatly improves service performance while reducing cost. If one of the servers goes down, the other servers continue to work, effectively reducing the impact of the outage; stability is improved while high performance is ensured.
An embodiment of the present disclosure provides another instant messaging method, as shown in Figure 4. The method includes:
Step S402: for each first server, when the current first server listens to a data transmission request from a connected first client, it sends the data transmission request to the target second server, where the data transmission request carries the data to be transmitted and the identification information of the client to be accessed, and the target second server is determined by polling;
Step S404: the target second server performs logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, sends the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed.
In the above instant messaging method, for each first server, the current first server is configured to send the data transmission request to the target second server when it listens to such a request from a connected first client, where the request carries the data to be transmitted and the identification information of the client to be accessed and the target second server is determined by polling; the target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, which then sends the return data to the client to be accessed. In this method, each first server can connect directly to at least one client; under high concurrency, servers only need to be added as required and no forwarding through an nginx proxy server is needed, which reduces the configuration cost of business optimization, and the first servers and the clients, as well as the first servers and the second servers, are all connected through long links, which effectively improves server performance.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure and do not limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some or all of the technical features therein can be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (10)

  1. An instant messaging system, characterized in that it comprises multiple first servers and multiple second servers, wherein each of the first servers establishes a long link with each of the second servers; each of the first servers also establishes a long link with at least one client; and each of the first servers is connected to different clients;
    for each of the first servers, the current first server is configured to, when listening to a data transmission request from a connected first client, send the data transmission request to a target second server, wherein the data transmission request carries data to be transmitted and identification information of a client to be accessed, and the target second server is determined by polling;
    the target second server is configured to perform logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, send the return data to the first server connected to the client to be accessed, and the return data is sent to the client to be accessed through the first server connected to the client to be accessed.
  2. The system according to claim 1, characterized in that the long link between each of the first servers and at least one client is established as follows:
    for each of the clients, the current client is configured to send a long-link registration request to a designated first server, and the designated first server sends the long-link registration request to a designated second server, wherein the long-link registration request carries identification information of the current client, and the designated second server is determined by polling;
    the designated second server is configured to bind the identification information of the current client to the current client and return a binding-success message to the current client through the designated first server, so as to establish a long link between the current client and the designated first server.
  3. The system according to claim 2, characterized in that each of the first servers is provided with multiple client connection points, and the designated first server is determined by:
    determining, for each of the first servers, the number of idle connection points not occupied by connected clients;
    determining the first server with the largest number of idle connection points as the designated first server.
  4. The system according to claim 1, characterized in that the system further comprises a third server; the third server is configured to receive a first long-link registration request from each of the first servers, so as to register each of the first servers, and to save the communication address of the first server carried in each first long-link registration request;
    the third server is configured to receive a second long-link registration request from each of the second servers, so as to register each of the second servers, and to send the communication address of each of the first servers to each of the second servers, so as to establish a long link between each of the first servers and each of the second servers.
  5. The system according to claim 4, characterized in that
    the third server is further configured to, upon receiving a new long-link registration request from a new first server, register the new first server, save the communication address of the new first server carried in the new long-link registration request, and send the communication address of the new first server to each of the second servers, so as to establish a long link between the new first server and each of the second servers.
  6. The system according to claim 4, characterized in that the third server is further configured to:
    check, at intervals of a first preset time, the number of times each of the first servers has sent preset data within the first preset time, wherein each of the first servers sends the preset data to the third server at intervals of a second preset time, and the second preset time is shorter than the first preset time;
    if the number of transmissions corresponding to a target first server is zero, determine that the target first server is offline.
  7. The system according to claim 6, characterized in that
    the third server is further configured to, after determining that the target first server is offline, deregister the target first server and send a message to each of the second servers to instruct each of the second servers to cancel the long link with the target first server.
  8. The system according to claim 4, characterized in that the third server is further configured to check the current running environment after its process is started.
  9. The system according to claim 4, characterized in that each of the first servers runs multiple sub-processes;
    each client establishes a long link with the first server through its corresponding sub-process.
  10. An instant messaging method, characterized in that the method comprises:
    for each first server, when the current first server listens to a data transmission request from a connected first client, sending the data transmission request to a target second server, wherein the data transmission request carries data to be transmitted and identification information of a client to be accessed, and the target second server is determined by polling;
    performing, by the target second server, logical processing on the data to be transmitted to obtain return data and, according to the identification information of the client to be accessed, sending the return data to the first server connected to the client to be accessed, and sending the return data to the client to be accessed through the first server connected to the client to be accessed.
PCT/CN2023/096600 2022-08-12 2023-05-26 Instant messaging system and method WO2024032094A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210964562.2 2022-08-12
CN202210964562.2A CN115037785B (zh) 2022-08-12 2022-08-12 Instant messaging system and method

Publications (1)

Publication Number Publication Date
WO2024032094A1 true WO2024032094A1 (zh) 2024-02-15

Family

ID=83130698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/096600 WO2024032094A1 (zh) 2022-08-12 2023-05-26 即时通讯***及方法

Country Status (2)

Country Link
CN (1) CN115037785B (zh)
WO (1) WO2024032094A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115037785B (zh) * 2022-08-12 2022-11-01 深圳市星卡软件技术开发有限公司 即时通讯***及方法


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107317830B (zh) * 2016-04-26 2021-05-18 中兴通讯股份有限公司 一种服务发现的处理方法及装置
CN109587275A (zh) * 2019-01-08 2019-04-05 网宿科技股份有限公司 一种通信连接的建立方法及代理服务器
CN110224871B (zh) * 2019-06-21 2022-11-08 深圳前海微众银行股份有限公司 一种Redis集群的高可用方法及装置
CN111447185B (zh) * 2020-03-10 2023-07-28 平安科技(深圳)有限公司 一种推送信息的处理方法及相关设备
CN112291298B (zh) * 2020-09-18 2024-03-01 云镝智慧科技有限公司 异构***的数据传输方法、装置、计算机设备和存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067359A (zh) * 2012-12-14 2013-04-24 北京思特奇信息技术股份有限公司 一种基于连接复用的提高服务器并发处理能力的***及方法
CN109600302A (zh) * 2018-11-27 2019-04-09 金瓜子科技发展(北京)有限公司 一种有序通讯的方法、装置、存储介质及电子设备
CN113364818A (zh) * 2020-03-03 2021-09-07 北京搜狗科技发展有限公司 一种数据处理方法、装置和电子设备
US20220210133A1 (en) * 2020-12-29 2022-06-30 Microsoft Technology Licensing, Llc Interim connections for providing secure communication of content between devices
CN114095465A (zh) * 2021-11-17 2022-02-25 北京同城必应科技有限公司 一种在分布式环境下高效的im消息时序性保障机制实现方法
CN115037785A (zh) * 2022-08-12 2022-09-09 深圳市星卡软件技术开发有限公司 即时通讯***及方法

Also Published As

Publication number Publication date
CN115037785A (zh) 2022-09-09
CN115037785B (zh) 2022-11-01

Similar Documents

Publication Publication Date Title
US11522734B2 (en) Method for controlling a remote service access path and relevant device
CN100446495C (zh) 一种动态共享连接的方法和***
US10439833B1 (en) Methods and apparatus for using multicast messaging in a system for implementing transactions
CN101094237B (zh) 一种ip多媒体子***中网元间的负荷分担方法
US10834033B2 (en) Method and system for transferring messages between messaging systems
WO2024032094A1 (zh) 即时通讯***及方法
CN101447989A (zh) 用于改进的高可用性组件实现的***和方法
US20100281169A1 (en) Presence-awareness for wireless devices
CN102571947A (zh) 一种代理处理数据的方法、装置和***
CN114553799B (zh) 基于可编程数据平面的组播转发方法、装置、设备及介质
CN111555965B (zh) 一种适用于iOS客户端的消息推送方法及***
WO2013159492A1 (zh) 信息上报与下载的方法及***
US10938993B2 (en) Workload balancing technique for a telephone communication system
CN108076111B (zh) 一种在大数据平台中分发数据的***及方法
TWI477113B (zh) Information processing methods and systems
CN114095465A (zh) 一种在分布式环境下高效的im消息时序性保障机制实现方法
CN115604160A (zh) 网络检测处理方法及装置、电子设备、存储介质
CN105933131B (zh) 多媒体任务处理方法及装置
CN114465955B (zh) 组播报文处理方法及装置
CN111835576B (zh) 基于dpvs的后端服务器健康检测方法和服务器
CN114826887B (zh) 私网连接通信方法和***
CN102497437A (zh) 一种实现负载均衡的方法、设备及***
CN111416861B (zh) 一种通信管理***和方法
Selim et al. Ensuring reliability and availability of soft system bus
CN117176782A (zh) 数据交互方法、装置及***

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23851322

Country of ref document: EP

Kind code of ref document: A1