CN113301086A - DNS data management system and management method

Info

Publication number
CN113301086A
Authority
CN (China)
Prior art keywords
data
dns
service
node
nodes
Prior art date
Legal status
Pending
Application number
CN202010658908.7A
Other languages
Chinese (zh)
Inventors
郭川
刘志辉
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010658908.7A
Publication of CN113301086A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols for accessing one among a plurality of replicated servers
    • H04L 67/1034: Reaction to server failures by a load balancer
    • H04L 67/1036: Load balancing of requests to servers for services different from user content provisioning, e.g. load balancing across domain name servers
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H04L 61/00: Network arrangements, protocols or services for addressing or naming
    • H04L 61/45: Network directories; name-to-address mapping
    • H04L 61/4505: Name-to-address mapping using standardised directories; using standardised directory access protocols
    • H04L 61/4511: Name-to-address mapping using the domain name system [DNS]

Landscapes

  • Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Data Exchanges In Wide-Area Networks

Abstract

A DNS data management system and a DNS data management method are disclosed. The system comprises a plurality of service nodes and a plurality of data nodes. The plurality of service nodes are used for receiving DNS management requests and generating DNS data maintenance requests according to the DNS management requests; the plurality of data nodes are used for storing DNS data and maintaining the DNS data according to the DNS data maintenance requests and a data consistency protocol. Embodiments of the disclosure ensure the consistency of the DNS data on each data node through the consistency protocol.

Description

DNS data management system and management method
Technical Field
The present disclosure relates to the field of the Internet, and in particular, to a DNS data management system and a DNS data management method.
Background
DNS is an abbreviation of Domain Name System. The domain name system is one of the basic network services of the Internet: it establishes a hierarchical, tree-shaped service system in the network and maintains DNS data reflecting the logical mapping between IP addresses and domain names, and on each user access the domain name is converted into the corresponding IP address through the DNS data.
Existing DNS data management services ensure their high availability through a main/standby deployment with manual switching. As shown in fig. 1, the DNS data management service is deployed on both a main server 11 and a standby server 13, the domain name system is deployed on a plurality of DNS servers 12, an administrator accesses the DNS data management service through a management terminal 1, and a network user obtains the domain name service through a request terminal 4. As shown in the figure, the DNS data is likewise stored on both the main server 11 and the standby server 13, and is periodically synchronized between them. Normally, the DNS data management service deployed on the main server 11 is responsible for managing the DNS data and the domain name service accesses the DNS data on the main server 11; when the main server 11 fails, the DNS data management service deployed on the standby server 13 is enabled manually and the domain name service accesses the DNS data on the standby server 13.
The pain points of this architecture are: 1) during switchover between the main server and the standby server, the domain name system is unavailable; 2) the DNS data on the main server and on the standby server is synchronized only at intervals, so if the main server fails there is a risk of data loss.
Disclosure of Invention
In view of the above, an object of the present disclosure is to provide a DNS data management system and a DNS data management method, so as to solve the problems in the prior art.
According to a first aspect of the present disclosure, there is provided a DNS data management system including: a plurality of service nodes and a plurality of data nodes,
the plurality of service nodes are used for receiving a DNS management request and generating a DNS data maintenance request according to the DNS management request;
and the data nodes are used for storing DNS data and maintaining the DNS data according to the DNS data maintenance request and a data consistency protocol.
Optionally, the plurality of service nodes form a main/standby mode, the main service node and the standby service node have the same virtual IP address, and automatic switching between the main service node and the standby service node is realized through the virtual IP address.
Optionally, heartbeat information is established between the main service node and the standby service node, and whether the main service node has failed is determined through the heartbeat information so as to decide whether to switch from the main service node to the standby service node.
Optionally, the plurality of service nodes form a cluster mode, the plurality of service nodes include a master service node, the master service node forwards the DNS management request to one of the remaining service nodes according to a service list and a load balancing policy, heartbeat information is established between the master service node and the remaining service nodes, and when a failed service node is detected according to the heartbeat information, the failed service node is deleted from the service list and is added back to the service list after it recovers from the failure.
Optionally, when the master service node fails, a new master service node is generated through a cluster master-election algorithm.
Optionally, the plurality of data nodes form a cluster mode; a master data node is responsible for receiving the DNS data maintenance request, recording the DNS data maintenance request as a log, and copying the log to the remaining data nodes, after which the master data node and the remaining data nodes commit the log to complete consistency maintenance of the DNS data.
Optionally, the master data node is generated by a cluster master-election algorithm.
In a second aspect, an embodiment of the present disclosure provides a DNS data management method, including:
receiving a DNS management request by adopting a plurality of service nodes and generating a DNS data maintenance request according to the DNS management request;
and storing DNS data by adopting a plurality of data nodes, and maintaining the DNS data according to the DNS data maintenance request and a data consistency protocol.
Optionally, the plurality of service nodes form a main/standby mode and have the same virtual IP address, and automatic switching between the main service node and the standby service node is realized through the virtual IP address.
Optionally, the plurality of data nodes form a cluster mode; a master data node is responsible for receiving the DNS data maintenance request, recording the DNS data maintenance request as a log, and copying the log to the remaining data nodes, after which the master data node and the remaining data nodes commit the log to complete consistency maintenance of the DNS data.
In a third aspect, an embodiment of the present disclosure provides a DNS resolution method, including:
receiving a domain name to be resolved;
sending the domain name to be resolved to a domain name server, wherein the domain name server obtains an IP address corresponding to the domain name to be resolved from a first data node, the first data node is any data node in a plurality of data nodes which can be accessed by the domain name server, and the data nodes manage DNS data according to a data consistency protocol; and
and mapping the domain name to be resolved to the IP address.
According to the embodiments of the disclosure, the consistency of the DNS data on each data node is ensured through a consistency protocol. Further, the DNS data management service is deployed in the main/standby mode, and when the main service node fails, the standby service node is enabled to provide the DNS data management service, thereby achieving high availability of the DNS data management service.
Drawings
The foregoing and other objects, features, and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which refers to the accompanying drawings in which:
fig. 1 shows a network configuration diagram of a conventional DNS data management system;
fig. 2 shows a network structure diagram of a DNS data management system provided in a first embodiment of the present disclosure;
fig. 3 shows a network structure diagram of a DNS data management system according to a second embodiment of the present disclosure;
fig. 4 shows a network structure diagram of a DNS data management system provided in a third embodiment of the present disclosure;
fig. 5 shows a flowchart of a DNS data management method according to a first embodiment of the present disclosure.
Detailed Description
The present disclosure is described below based on embodiments, but it is not limited to these embodiments. In the following detailed description, some specific details are set forth; it will be apparent to those skilled in the art that the present disclosure may be practiced without them. Well-known methods, procedures, and processes have not been described in detail so as not to obscure the present disclosure. The figures are not necessarily drawn to scale.
Fig. 2 shows a network structure diagram of a DNS data management system according to a first embodiment of the present disclosure.
As shown in the figure, the system comprises a main service node 21 and a standby service node 24 on which the DNS data management service is deployed, a cluster 2 composed of a plurality of data nodes 22, and a DNS server 23 on which the domain name service is deployed. The administrator accesses the DNS data management service through the management terminal 1, and a network user obtains the domain name service through the request terminal 4. For example, a network user can obtain DNS service from a web application on the request terminal 4. Specifically, the user first enters a domain name to be resolved in a browser, and the web application provides the domain name to be resolved to the DNS server 23. The DNS server 23 obtains the IP address mapped to the domain name from one data node 22, which may be any one of the plurality of data nodes accessible to the DNS server 23, and returns the IP address to the web application. The web application then maps the domain name to be resolved to the IP address, thereby obtaining the service provided at that IP address.
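The resolution flow above can be summarized with a short sketch. This is a minimal illustration, not the patent's implementation: the lookup tables, the resolve function, and the addresses are hypothetical, and a real DNS server 23 would query the data nodes over the DNS protocol rather than through Python calls.

    import random

    # DNS data replicated identically on every data node of cluster 2
    # (kept identical by the data consistency protocol).
    DATA_NODES = [
        {"www.example.com": "203.0.113.10"},  # data node 22, replica 1
        {"www.example.com": "203.0.113.10"},  # data node 22, replica 2
        {"www.example.com": "203.0.113.10"},  # data node 22, replica 3
    ]

    def resolve(domain: str) -> str:
        """DNS server 23: fetch the mapping from any one reachable data node."""
        node = random.choice(DATA_NODES)  # any node works, since replicas agree
        return node[domain]

    # The web application maps the domain name to the returned IP address.
    print(resolve("www.example.com"))  # -> 203.0.113.10
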
As shown in the figure, the main service node 21 and the standby service node 24 operate in main/standby mode. The standby service node 24 serves as a backup for the main service node 21 and normally does not receive or process DNS management requests; when the main service node 21 fails, service is switched to the standby service node 24. In one embodiment, the two service nodes are addressed by a virtual IP address, one acting as the main service node and the other as the standby service node. Heartbeat information is exchanged between the main service node 21 and the standby service node 24, and whether the main service node 21 has failed is determined through the heartbeat information. When the main service node 21 fails, the service node pointed to by the virtual IP address is switched to the standby service node 24.
In this embodiment, unlike the prior art, the main/standby mode is implemented by an application based on the Virtual Router Redundancy Protocol (VRRP). As known to those skilled in the art, VRRP is a network-layer protocol that assigns multiple service nodes providing the same service to the same virtual IP address; if one of the service nodes fails, another can quickly take over its work, ensuring the reliability and continuity of communication. The VRRP-based application is implemented in the application layer, i.e., the application program uses the network-layer VRRP to realize automatic switching between the main and standby service nodes: when the main service node fails, service is automatically switched to the standby service node. Keepalived is an application with such a function. When the Keepalived service works normally, the main service node continuously sends heartbeat information to the standby service node to announce that it is still alive. When the main service node fails, the heartbeat information stops; the standby service node detects its absence and invokes its own takeover procedure to take over the IP resources and services of the main service node. When the main service node recovers, the standby service node releases the IP resources and services it took over during the failure and resumes its original standby role. When Keepalived is adopted in this embodiment, Keepalived is deployed on the service nodes 21 and 24: the Keepalived configuration of the service node 21 designates it as the main service node, the configuration of the service node 24 designates it as the standby service node, and both configurations specify the same virtual IP address and the same heartbeat interval. A DNS management request sent via the network is then forwarded to the main service node 21 and processed by it, and heartbeat information is sent between the main service node 21 and the standby service node 24 via Keepalived. When the standby service node 24 does not receive heartbeat information from the main service node within the timeout period, it begins to take over the service.
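For illustration, a minimal Keepalived configuration in the spirit of this deployment is sketched below; the interface name, router ID, priorities, and addresses are example values, not part of the disclosure. The standby node 24 would use the same block with state BACKUP and a lower priority.

    # keepalived.conf on service node 21 (the main service node)
    vrrp_instance DNS_MGMT {
        state MASTER            # on service node 24: BACKUP
        interface eth0
        virtual_router_id 51
        priority 100            # on service node 24: e.g. 90
        advert_int 1            # heartbeat (VRRP advertisement) interval, seconds
        virtual_ipaddress {
            192.0.2.100         # the shared virtual IP address
        }
    }
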
In this embodiment, automatic switching between the main service node and the standby service node is realized through the virtual IP address. It should be understood that, although two main/standby nodes provide the DNS management service in fig. 2, the present disclosure is not limited thereto, and the main/standby mode may also be implemented with more service nodes.
With continued reference to the figure, the DNS data is stored in a cluster 2 comprising a plurality of data nodes 22. Each data node 22 stores the same DNS data, and consistency of the DNS data between nodes is achieved through a data consistency protocol. When one data node fails, another data node can provide the DNS data, so the domain name service still has normal access to it. It should be noted that resolving host names to IP addresses typically uses hierarchical domain name resolution services, each level using the DNS data of that level, and each data node of cluster 2 stores the DNS data of all levels.
The data consistency protocol solves the problem of data consistency among multiple copies. In one embodiment, the plurality of data nodes comprises a master data node and at least one slave data node. The master data node is responsible for receiving and processing the DNS data maintenance request, and the DNS data maintenance request may also be sent to the at least one slave data node at the same time. The master data node and the slave data nodes each maintain their own DNS data according to the DNS data maintenance request, where maintenance includes additions, deletions, and modifications of the DNS data. In this manner, consistency of the DNS data on the master and slave data nodes is achieved. It should be noted that data consistency comes in strong and weak forms. Strong consistency means that the data on every data node is identical at any moment; weak consistency only guarantees that the data on the data nodes eventually becomes consistent, without bounding the time needed to reach consistency, and is therefore also called eventual consistency. The stronger the consistency, the higher the data safety, but execution efficiency may be correspondingly lower, and vice versa. In this embodiment, the DNS data is maintained with weak consistency in view of its usage scenario.
Furthermore, if the plurality of data nodes are distinguished into a master data node and slave data nodes, cluster master election is involved, i.e., how to elect a master data node from the plurality of data nodes. Master election is usually triggered when the master node of a cluster goes down or when the cluster has just started, because at that moment the cluster has no master node. There are two common approaches to cluster master election: voting and competition.
ZooKeeper, for example, elects by voting. The voting process is as follows: after the master node is lost, every node broadcasts its own election value to all nodes in a first round of broadcasting; each node compares its own election value with the received election values of all other nodes and selects the largest; if the largest election value is not its own, the node broadcasts that largest value in a second round; when nodes receive the second-round broadcasts, votes are counted, and once an election value has been endorsed by more than half of the nodes, the corresponding node becomes the new master node.
Competitive election needs to be implemented with the help of an external storage service; for example, each node decides who the master is by accessing an agreed key-value (KV) datum. Suppose the KV datum is the master identification "master node: UUID" (a unique identifier generated before writing). The preemption logic is as follows: try to read the key-value datum and judge whether it exists. If it does not exist, write the master identification "master node: UUID" into the storage service, set its TTL (if the storage service does not support TTL, the TTL can be written together as part of the value), and store the UUID locally; the current process is then the master node. If the master identification exists, judge through the TTL whether it has expired; if expired, proceed as if it did not exist; otherwise, compare the UUID in the master identification with the locally stored UUID. If they are consistent, refresh the datum; the current process is the master node. If they are inconsistent, do nothing; the current node is not the master node.
Voting and competition each have their own characteristics, but at present the voting approach is more commonly used in clusters. The present disclosure does not limit the cluster master-election method.
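As an illustration of the competitive (lease-based) election just described, the sketch below mimics the agreed key-value datum with an in-memory dict; the store layout, key name, and TTL value are assumptions for illustration, and in practice the datum would live in an external storage service.

    import time
    import uuid

    store = {}  # stand-in for the external storage service: key -> (value, expiry)
    TTL = 5.0   # lease duration in seconds

    local_uuid = str(uuid.uuid4())  # unique identifier generated before writing

    def try_become_master(key: str = "master") -> bool:
        now = time.time()
        entry = store.get(key)
        if entry is None or entry[1] <= now:
            # No master identification, or the previous lease expired: claim it.
            store[key] = (local_uuid, now + TTL)
            return True
        value, _ = entry
        if value == local_uuid:
            # Our own identification is still current: refresh the lease.
            store[key] = (local_uuid, now + TTL)
            return True
        return False  # another node holds the lease; do nothing

    print("master?", try_become_master())  # -> True on first call
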
In this embodiment, automatic switching between the main service node and the standby service node is realized through the virtual IP address, achieving high availability of the DNS data management service, and the consistency of the DNS data on each data node is ensured through the consistency protocol, achieving high availability of the DNS data.
Fig. 3 shows a network structure diagram of a DNS data management system according to a second embodiment of the present disclosure. As shown in the figure, the system includes a main service node 31 and a plurality of standby service nodes 32 in main/standby mode. The DNS data management service and the DNS data are deployed on both the main service node 31 and the standby service nodes 32. The administrator accesses the DNS data management service through the management terminal 1, and a network user obtains the domain name service through the request terminal 4. Normally, the main service node 31 provides the DNS data management service to the management terminal 1; when the main service node 31 fails, a standby service node 32 provides it instead. As can be seen from the figure, each service node acts as both a service node and a data node, which is one difference from fig. 2; the other difference is that there are two standby service nodes 32. In addition, a consistency protocol is likewise adopted among the data nodes to ensure the consistency of the DNS data.
In fig. 3, the DNS data management service is provided in main/standby mode. In this mode, both the DNS data management service and the DNS data are deployed in multiple copies; each copy of the DNS data management service can provide DNS data management on an equal footing, so even if one service node fails, neither the DNS management service nor the DNS data becomes unavailable. The advantage is high availability of both the service and the data.
Meanwhile, the service nodes 31 and 32 can be addressed by a virtual IP address, with the service node 31 designated as the main service node and the other two service nodes 32 as standby service nodes. Heartbeat information is exchanged between the main service node 31 and the standby service nodes 32, and whether the main service node 31 has failed is determined through the heartbeat information; when the main service node 31 fails, the service node pointed to by the virtual IP address is switched to a standby service node 32. The main/standby mode in this embodiment may also be implemented by the Keepalived application: Keepalived is deployed on each service node, the Keepalived configuration of the main service node 31 designates it as the main service node, the configuration of each standby service node 32 designates it as a standby service node, and all configurations specify the same virtual IP address and heartbeat interval. A DNS management request sent via the network is forwarded to the main service node 31 and processed by it. Heartbeat information is sent between the main service node 31 and the standby service nodes 32; when a standby service node 32 receives no heartbeat information from the main service node 31 within the timeout period, it begins to take over the service.
Fig. 4 shows a network structure diagram of a DNS data management system according to a third embodiment of the present disclosure. As shown in the figure, the system includes a master service node 41 and a plurality of slave service nodes 42 in cluster mode. The DNS data management service and the DNS data are deployed on both the master service node 41 and the slave service nodes 42. The administrator accesses the DNS data management service through the management terminal 1, and a network user obtains the domain name service through the request terminal 4. A consistency protocol is likewise adopted among the data nodes to ensure the consistency of the DNS data.
The difference from fig. 3 is that a DNS management request sent by the management terminal 1 is always sent to the master service node 41 of the cluster, and the master service node 41 distributes the DNS management request to one of the slave service nodes according to a service list and a load balancing policy. The master service node and the slave service nodes are managed through cluster management software, which maintains an active service list, detects from heartbeat information whether each slave node has failed, deletes a failed slave node from the service list, and adds it back after it recovers. When the master service node fails, or when the cluster has just started, a master node is chosen through cluster master election; the specific method may refer to the master-election methods described above.
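A compact sketch of this dispatch logic follows; the node names, timeout, and round-robin policy are illustrative assumptions, since the patent leaves the concrete load balancing policy open.

    import itertools
    import time

    HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before removal
    last_heartbeat = {"slave-1": time.time(), "slave-2": time.time()}

    def active_service_list():
        # A failed node drops out here and re-enters automatically once
        # its heartbeats resume (i.e. last_heartbeat is updated again).
        now = time.time()
        return [n for n, t in last_heartbeat.items() if now - t < HEARTBEAT_TIMEOUT]

    _rr = itertools.count()

    def dispatch(request: str) -> None:
        nodes = active_service_list()
        if not nodes:
            raise RuntimeError("no active slave service nodes")
        target = nodes[next(_rr) % len(nodes)]  # round-robin load balancing
        print(f"master service node 41 forwards {request!r} to {target}")

    dispatch("add A record: www.example.com -> 203.0.113.10")
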
In this embodiment, load balancing is achieved through the cluster mode: since each peer service node holds complete data and services, user access traffic can be distributed across different service nodes.
Corresponding to the network structure diagram, the present disclosure provides a DNS data management method. As shown in fig. 5, the method includes steps S501 and S502.
In step S501, a plurality of service nodes are used to receive a DNS management request and generate a DNS data maintenance request according to the DNS management request.
In step S502, a plurality of data nodes are used to store DNS data, and the DNS data is maintained according to the DNS data maintenance request and a data consistency protocol.
The operation mode of the plurality of service nodes may be the main/standby mode. In main/standby mode, under normal conditions the main service node works and the standby service node is idle; when the main service node fails, the standby service node starts working. Automatic switching between the main and standby service nodes can be realized through an application based on the Virtual Router Redundancy Protocol; the application can be an existing VRRP-based application or an adapted one. As known to those skilled in the art, VRRP is a network-layer protocol that assigns multiple service nodes providing the same service to the same virtual IP address and then directs requests addressed to the virtual IP address to one of them. Therefore, as long as the service nodes carrying the DNS data management service are set to the same virtual IP address in the application, automatic switching of the DNS data service can be realized. Further, heartbeat information is exchanged between the main service node and the standby service node, and whether the main service node has failed is determined through the heartbeat information so as to decide whether to switch between them.
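The failover decision reduces to a heartbeat timeout, as in the hypothetical sketch below; take_over_virtual_ip() stands in for the takeover that a VRRP-based application such as Keepalived performs, and the interval and timeout values are assumptions.

    import time

    HEARTBEAT_INTERVAL = 1.0           # the main node sends a heartbeat this often
    TIMEOUT = 3 * HEARTBEAT_INTERVAL   # tolerate two missed heartbeats

    last_seen = time.time()            # when the last heartbeat arrived

    def on_heartbeat() -> None:
        """Called on the standby whenever a heartbeat arrives from the main node."""
        global last_seen
        last_seen = time.time()

    def take_over_virtual_ip() -> None:
        # Stand-in for the VRRP takeover: claim the virtual IP address and
        # begin serving DNS management requests.
        print("main service node timed out: standby takes over the virtual IP")

    def check_failover() -> None:
        if time.time() - last_seen > TIMEOUT:
            take_over_virtual_ip()

    check_failover()  # no-op here, since a heartbeat was just recorded
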
The operation mode of the plurality of service nodes may also be the cluster mode. Since a new master service node is elected when the master service node fails, high availability of the service can be ensured as long as DNS management requests are always received by the master service node and distributed to the plurality of slave service nodes. Load balancing can also be achieved through the cluster mode.
The plurality of data nodes are implemented in cluster mode. Each data node stores the same DNS data, and consistency of the DNS data across nodes is realized through a data consistency protocol. When one data node fails, the other data nodes can provide the DNS data, and the domain name service can still access the DNS data normally. In one embodiment, the plurality of data nodes form a cluster mode in which a master data node is responsible for receiving a DNS data maintenance request, recording it as a log, and copying the log to the slave data nodes; the master data node and the slave data nodes then commit the log at an appropriate time to complete the maintenance of the DNS data.
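The record / replicate / commit cycle can be sketched as below in the spirit of majority-commit (Raft-style) replication; the DataNode class, its methods, and the majority rule are illustrative assumptions rather than the protocol the patent specifies.

    class DataNode:
        def __init__(self):
            self.log = []        # replicated log of DNS data maintenance requests
            self.dns_data = {}   # domain name -> IP address

        def append(self, entry) -> bool:
            self.log.append(entry)
            return True          # acknowledge the entry to the master

        def commit(self, entry) -> None:
            op, domain, ip = entry
            if op == "put":
                self.dns_data[domain] = ip
            elif op == "delete":
                self.dns_data.pop(domain, None)

    master, slaves = DataNode(), [DataNode(), DataNode()]

    def maintain(entry) -> None:
        master.append(entry)                               # record as a log entry
        acks = 1 + sum(s.append(entry) for s in slaves)    # copy to slave nodes
        if acks > (1 + len(slaves)) // 2:                  # majority acknowledged
            for node in [master, *slaves]:
                node.commit(entry)                         # commit the log

    maintain(("put", "www.example.com", "203.0.113.10"))
    print(master.dns_data == slaves[0].dns_data)  # -> True: replicas agree
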
In the embodiments of the present disclosure, the consistency of the DNS data on each data node is ensured by a consistency protocol. Further, the DNS data management service is deployed in the main/standby mode, and when the main service node fails, the standby service node is enabled to provide the DNS data management service so as to ensure its high availability.
As will be appreciated by one skilled in the art, the present disclosure may be embodied as systems, methods and computer program products. Accordingly, the present disclosure may be embodied in the form of entirely hardware, entirely software (including firmware, resident software, micro-code), or in the form of a combination of software and hardware. Furthermore, in some embodiments, the present disclosure may also be embodied in the form of a computer program product in one or more computer-readable media having computer-readable program code embodied therein.
Any combination of one or more computer-readable media may be employed. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this context, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., and any suitable combination of the foregoing.
Computer program code for carrying out embodiments of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented languages such as Java and C++, and may also include conventional procedural languages such as C. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The above description covers only preferred embodiments of the present disclosure and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included in its protection scope.

Claims (11)

1. A DNS data management system, comprising: a plurality of service nodes and a plurality of data nodes,
the plurality of service nodes are used for receiving the DNS management request and generating a DNS data maintenance request according to the DNS management request;
and the data nodes are used for storing DNS data and maintaining the DNS data according to the DNS data maintenance request and a data consistency protocol.
2. The DNS data management system according to claim 1, wherein the plurality of service nodes form a main/standby mode, the main service node and the standby service node have the same virtual IP address, and automatic switching between the main service node and the standby service node is realized through the virtual IP address.
3. The DNS data management system according to claim 2, wherein heartbeat information is established between the main service node and the standby service node, and whether the main service node has failed is determined through the heartbeat information so as to decide whether to switch from the main service node to the standby service node.
4. The DNS data management system according to claim 1, wherein the plurality of service nodes form a cluster mode, the plurality of service nodes include a master service node, the master service node forwards the DNS management request to one of the remaining service nodes according to a service list and a load balancing policy, heartbeat information is established between the master service node and the remaining service nodes, and when a failed service node is detected according to the heartbeat information, the failed service node is deleted from the service list and is added back to the service list after it recovers from the failure.
5. The DNS data management system according to claim 4, wherein when the master service node fails, a new master service node is generated through a cluster master-election algorithm.
6. The DNS data management system according to claim 1, wherein the plurality of data nodes form a cluster mode, a master data node is responsible for receiving the DNS data maintenance request, recording the DNS data maintenance request as a log, and copying the log to the remaining data nodes, after which the master data node and the remaining data nodes commit the log to complete consistency maintenance of the DNS data.
7. The DNS data management system according to claim 6, wherein the master data node is generated by a cluster master-election algorithm.
8. A DNS data management method, comprising:
receiving a DNS management request by adopting a plurality of service nodes and generating a DNS data maintenance request according to the DNS management request;
and storing DNS data by adopting a plurality of data nodes, and maintaining the DNS data according to the DNS data maintenance request and a data consistency protocol.
9. The DNS data management method according to claim 8, wherein the plurality of service nodes form a main/standby mode and have the same virtual IP address, and automatic switching between the main service node and the standby service node is realized through the virtual IP address.
10. The DNS data management method according to claim 8, wherein the plurality of data nodes form a cluster mode, a master data node is responsible for receiving the DNS data maintenance request, recording the DNS data maintenance request as a log, and copying the log to the remaining data nodes, after which the master data node and the remaining data nodes commit the log to complete consistency maintenance of the DNS data.
11. A DNS resolution method, comprising:
receiving a domain name to be resolved;
sending the domain name to be resolved to a domain name server, wherein the domain name server obtains an IP address corresponding to the domain name to be resolved from a first data node, the first data node is any data node in a plurality of data nodes which can be accessed by the domain name server, and the data nodes manage DNS data according to a data consistency protocol; and
and mapping the domain name to be resolved to the IP address.
Priority Applications (1)

CN202010658908.7A (priority date 2020-07-09, filing date 2020-07-09): DNS data management system and management method

Publications (1)

CN113301086A, published 2021-08-24

Family

ID=77318342

Country Status (1)

CN: CN113301086A (en)

Cited By (1)

* Cited by examiner, † Cited by third party

CN114124673A * (杭州安恒信息技术股份有限公司; priority date 2021-11-25, published 2022-03-01): Method for comparing and testing syslog and high availability of main and standby systems



Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40059154)