CN117193855A - Code management method, device, storage medium and electronic equipment

Info

Publication number: CN117193855A
Application number: CN202210609239.3A
Authority: CN (China)
Prior art keywords: log, metadata, node, data
Priority/filing date: 2022-05-31
Publication date: 2023-12-08
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Assignee: Zhejiang Supcon Technology Co Ltd
Inventors: 叶立飞, 朱玉银, 钱勇, 傅昆, 李一鸣
Other languages: Chinese (zh)

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a code management method and device, a storage medium, and an electronic device. The method comprises the following steps: generating a visual programming object in response to an operation instruction on a target object; converting the visual programming object into metadata in a predetermined format; and storing the metadata, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent. The application solves the technical problems of the related art in which managing code through a relational database leads to complex operation and maintenance, high cluster cost, and an inability to scale elastically and quickly.

Description

Code management method, device, storage medium and electronic equipment
Technical Field
The present application relates to the field of code development, and in particular to a code management method and device, a storage medium, and an electronic device.
Background
In the field of software design and development, low-code development platforms support application development through visualization, so that developers of different experience levels can create business objects and implement business logic with drag-and-drop components and model-driven logic in a graphical user interface. Applications can be generated quickly with little or no hand-written code.
When elements generated through visual configuration, such as applications, system modules, business objects, task objects, and system code, are saved, the objects are converted into semi-structured metadata for persistence. When the application is finally released, the platform converts the metadata into executable code files according to rules. Metadata is thus the data in the intermediate state between a visual programming object and an executable code file.
In the related art, metadata is stored in a relational database. Large software development enterprises have numerous product lines and a large number of subsystems and modules to manage; against the background of systems migrating to microservice architectures, business applications also face a large number of service splits. As a result, when a low-code development platform is used in large-scale enterprise software development, the volume of metadata grows rapidly and capacity expansion becomes routine, and the conventional scheme of managing metadata with a relational database exposes disadvantages such as complex deployment, operation, and maintenance, complex management, costly clustering schemes, and an inability to scale elastically and quickly.
In view of the above problems, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present application provide a code management method and device, a storage medium, and an electronic device, to at least solve the technical problems of the related art in which managing code through a relational database leads to complex operation and maintenance, high cluster cost, and an inability to scale elastically and quickly.
According to one aspect of the embodiments of the present application, a code management method is provided, comprising: generating a visual programming object in response to an operation instruction on a target object; converting the visual programming object into metadata in a predetermined format; and storing the metadata, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent.
Optionally, only one master node may exist among the plurality of distributed nodes in the cluster service manager at any time; operation requests from clients are executed by the master node, and the master node replicates the log to the slave nodes. The master node further initiates heartbeat connections to the slave nodes and changes the log state once responses have been received from more than a preset number of slave nodes.
Optionally, storing the metadata in the cluster service manager in log form comprises: receiving a data request from a client, the data request being used at least for writing the metadata into the cluster service manager; converting the metadata into ordered log files, replicating the log files to the plurality of slave nodes through the master node, updating the current commit index value to obtain a target commit index value once a preset number of slave nodes have responded, and sending the target commit index value to the slave nodes so that the slave nodes apply their log files to a state machine; controlling the state machine to parse the entity-object metadata information in the log files and obtain a globally unique object code and the object attribute fields from the parse result; and constructing a key-value pair with the object code as the identifier and the object attribute fields as the attribute value, and caching the key-value pair.
Optionally, the message types communicated between the plurality of distributed nodes of the cluster service manager include at least election. When the message communicated between the plurality of distributed nodes is an election message, each of the plurality of distributed nodes divides its life cycle into a plurality of consecutive terms, and only one master node exists in each term, wherein each term comprises an election period and a running period.
Optionally, the election process comprises: detecting whether the master node has failed; when the master node is determined to have failed, and after any slave node determines that the duration for which the heartbeat has been lost exceeds a preset duration, determining a target slave node to initiate an election request; and determining the target slave node as a candidate node.
Optionally, the message types communicated between the plurality of distributed nodes of the cluster service manager further include at least log replication, which is achieved as follows: the master node receives a data request from a client and determines the data corresponding to the data request as the latest log data; the log data is appended to the end of the log set, and log replication requests are broadcast to the plurality of slave nodes; when responses are received from more than a preset number of slave nodes, the commit index is updated to the target index of the currently confirmed log data and the target index is sent to the slave nodes; and the slave nodes are controlled to load all unexecuted log entries up to the target index into the state machine for execution.
Optionally, the log file of each of the plurality of distributed nodes is persisted to a local file in segments, wherein each log segment maintains the basic information of the current segment through metadata, the basic information including: the current term, the starting log number, and the last committed log number.
According to another aspect of the embodiments of the present application, a code management apparatus is also provided, comprising: a response module configured to generate a visual programming object in response to an operation instruction on a target object; a conversion module configured to convert the visual programming object into metadata in a predetermined format; and a storage module configured to store the metadata, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent.
According to another aspect of the embodiments of the present application, a non-volatile storage medium is also provided, the storage medium comprising a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the code management method described above.
According to another aspect of the embodiments of the present application, an electronic device is also provided, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the code management method described above.
In the embodiments of the present application, log-based storage is adopted: a visual programming object is generated in response to an operation instruction on a target object; the visual programming object is converted into metadata in a predetermined format; and the metadata is stored, in log form, in a cluster service manager comprising a plurality of distributed nodes whose data states are kept consistent. Storing metadata in log form optimizes the code management schemes of the related art, enables rapid deployment and elastic scaling of code, clusters the metadata, and achieves high availability, thereby solving the technical problems of the related art in which managing code through a relational database leads to complex operation and maintenance, high cluster cost, and an inability to scale elastically and quickly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow diagram of an alternative code management method according to an embodiment of the application;
FIG. 2 is a schematic diagram of a design framework for an alternative metadata service according to an embodiment of the present application;
FIG. 3 is a schematic frame diagram of an alternative overall platform design in accordance with an embodiment of the present application;
FIG. 4 is a schematic illustration of an alternative term mechanism according to an embodiment of the application;
FIG. 5 is a schematic illustration of an alternative election process according to an embodiment of the application;
FIG. 6 is a schematic illustration of an alternative snapshot workflow in accordance with an embodiment of the present application;
FIG. 7 is a schematic diagram of an alternative cluster whose service is unavailable due to network partitioning, in accordance with an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an alternative code management apparatus according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to facilitate a better understanding of the embodiments of the present application, technical terms that may be involved in them are explained below:
1. Low-code development platform: a development platform that can quickly generate applications with little or no code. Its strength is that it lets end users develop their own applications with easy-to-understand visualization tools instead of writing code in the traditional way. The functions required by business processes, logic, data models, and the like are built visually, and code can be added for these functions where necessary. Once the business logic and functions are built, the application can be delivered, updated, and deployed with one click.
2. Distributed system: a distributed system is a software system built on top of a network, with a high degree of cohesion and transparency. A distributed system has the following characteristics: (1) Distribution: the system consists of multiple computers distributed across regions, and the functions of the whole system are spread over the nodes, so data processing is inherently distributed. (2) Autonomy: each node in the system contains a processor and memory, and each node has an independent data-processing capability. (3) Parallelism: a large task can be divided into several sub-tasks, each executing on a different host. (4) Globality: a single, global inter-process communication mechanism must exist, so that any process can communicate with any other process without distinguishing local from remote communication.
3. Raft protocol: a distributed consistency protocol.
4. CAP principle: refers to the Consistency, Availability, and Partition tolerance problems that need to be addressed in a distributed system. In general, the three cannot all be satisfied at the same time.
5. Consistency: consistency refers to the problem of data consistency across multiple copies (replications). Strong consistency has the following characteristics: 1) any read returns the most recently written value of the data; 2) all processes in the system see the same operation order, consistent with the order under a global clock.
6. Write-ahead log: a write-ahead log (WAL) is a family of techniques used in relational database systems to provide atomicity and durability (two of the ACID properties). In a system using WAL, all modifications are written to a log file before they are committed; the log typically contains redo and undo information. After a data change has been described by the log records (redo and undo), the data is written to a cache, and when the cache area is full, the modified data is flushed to the persistence layer.
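As a minimal illustrative sketch of the WAL idea (not part of this application; the class and method names are assumptions), a store appends and flushes a log record before touching the cached data:

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

// Minimal WAL sketch: describe every change in the log and flush it to durable
// storage before modifying the in-memory data, so redo information survives a crash.
final class WalStore {
    private final DataOutputStream log;
    private final Map<String, String> data = new HashMap<>();

    WalStore(OutputStream logOut) {
        this.log = new DataOutputStream(logOut);
    }

    void put(String key, String value) throws IOException {
        log.writeUTF("PUT " + key + " " + value); // 1. write the redo record first
        log.flush();                              // 2. make the record durable
        data.put(key, value);                     // 3. only then modify the data
    }
}
```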
According to an embodiment of the present application, an embodiment of a code management method is provided. It should be noted that the steps shown in the flowcharts of the drawings may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
Fig. 1 is a flow chart of a code management method according to an embodiment of the present application. As shown in Fig. 1, the method includes the following steps:
Step S102: a visual programming object is generated in response to an operation instruction on a target object.
In the technical solution provided in step S102, a visual programming object may be generated in response to an operation instruction on a target object. It should be noted that visual programming objects relate to visual programming languages (VPLs): a VPL is any programming language that lets users manipulate program elements graphically rather than specifying programs textually. A VPL allows programming with visual expressions and spatial arrangements of text and graphic symbols, used as syntax elements or auxiliary notation. For example, many VPLs (known as dataflow or diagrammatic programming) are based on the concept of "boxes and arrows", where boxes or other screen objects are treated as entities connected by arrows, line segments, or arcs that represent relationships.
Step S104: the visual programming object is converted into metadata in a predetermined format.
In the technical solution provided in step S104, the visual programming object may be converted into metadata in a predetermined format. It is worth noting that the metadata is the data in the intermediate state between the visual programming object and the executable code file. The predetermined format includes, but is not limited to, the JSON format.
Step S106: the metadata is stored, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent.
In the technical solution provided in step S106, the metadata can be stored in the cluster service manager in log form. It should be noted that the distributed nodes of the metadata management service can reach agreement through the Raft protocol, completing data replication and consistency confirmation.
Through the technical solutions of steps S102 to S106, log-based storage can be adopted: a visual programming object is generated in response to an operation instruction on a target object; the visual programming object is converted into metadata in a predetermined format; and the metadata is stored, in log form, in a cluster service manager comprising a plurality of distributed nodes whose data states are kept consistent. Storing metadata in log form optimizes the code management schemes of the related art, enables rapid deployment and elastic scaling of code, clusters the metadata, and achieves high availability, thereby solving the technical problems of the related art in which managing code through a relational database leads to complex operation and maintenance, high cluster cost, and an inability to scale elastically and quickly.
As an alternative implementation, only one master node may exist among the plurality of distributed nodes in the cluster service manager at any time; operation requests from clients are executed by the master node, and the master node replicates the log to the slave nodes. The master node further initiates heartbeat connections to the slave nodes and changes the log state once responses have been received from more than a preset number of slave nodes.
In some optional embodiments of the present application, storing the metadata in the cluster service manager in log form may be implemented as follows: receiving a data request from a client, the data request being used at least for writing the metadata into the cluster service manager; converting the metadata into ordered log files, replicating the log files to the plurality of slave nodes through the master node, updating the current commit index value to obtain a target commit index value once a preset number of slave nodes have responded, and sending the target commit index value to the slave nodes so that the slave nodes apply their log files to a state machine; controlling the state machine to parse the entity-object metadata information in the log files and obtain a globally unique object code and the object attribute fields from the parse result; and constructing a key-value pair with the object code as the identifier and the object attribute fields as the attribute value, and caching the key-value pair.
Optionally, the message types communicated between the plurality of distributed nodes of the cluster service manager include at least election; the message types further include heartbeat.
In some optional embodiments of the present application, when the message communicated between the plurality of distributed nodes is an election message, each of the plurality of distributed nodes divides its life cycle into a plurality of consecutive terms, and only one master node exists in each term. It should be noted that each term comprises an election period and a running period.
Optionally, the election process comprises: detecting whether the master node has failed; when the master node is determined to have failed, and after any slave node determines that the duration for which the heartbeat has been lost exceeds a preset duration, determining a target slave node to initiate an election request; and determining the target slave node as a candidate node.
It should be noted that the message types communicated between the plurality of distributed nodes of the cluster service manager further include at least log replication.
In some alternative embodiments of the present application, log replication may be achieved as follows: the master node receives a data request from a client and determines the data corresponding to the data request as the latest log data; the log data is appended to the end of the log set, and log replication requests are broadcast to the plurality of slave nodes; when responses are received from more than a preset number of slave nodes, the commit index is updated to the target index of the currently confirmed log data and the target index is sent to the slave nodes; and the slave nodes are controlled to load all unexecuted log entries up to the target index into the state machine for execution.
It should be noted that the log file of each of the plurality of distributed nodes is persisted to a local file in segments, where each log segment maintains the basic information of the current segment through metadata, the basic information including: the current term, the starting log number, and the last committed log number.
The above technical solution of this embodiment is further exemplified below.
The design of the metadata service is shown in Fig. 2. Through the low-code development platform, a user creates entity models such as systems, modules, and objects by visual configuration; the low-code development platform abstracts the entity models, converts them into metadata in JSON format, and submits the metadata to the metadata management service cluster in log form. The distributed nodes of the metadata management service reach consensus through the Raft protocol, completing data replication and consistency confirmation.
Fig. 3 shows the overall platform design: the metadata service exposes access services for entity objects to the low-code development platform and presents strong data consistency to clients; data consistency is guaranteed by the Raft protocol.
The metadata service nodes use the Raft protocol to achieve distributed consistency, in which leader election, log replication, and the state machine are the core logic.
Nodes inside the metadata service cluster have three roles and four states. The three roles are leader, follower, and candidate; the four states are leader, follower, pre_candidate, and candidate. The nodes strictly follow a strong-leader mechanism: only one leader exists in the cluster at any time, all client write requests are executed only by the leader, and the leader replicates the request log to the followers. The leader is also responsible for maintaining communication across the whole cluster, initiating heartbeats to the followers, receiving their responses, judging whether a majority (more than half) has responded, and committing log state changes.
After receiving a client's data read/write request, the service node first converts the business data into an ordered log (write-ahead log); the leader replicates the log across the whole network and updates the commit index after receiving confirmation responses from a majority of nodes. After obtaining the leader's latest commit value, the follower nodes apply the newest globally acknowledged log entries to the state machine. The local state machine parses the entity-object metadata (the data field) in the log, takes the globally unique object code as the key and the JSON data whose main content is the object attribute fields as the value, and persists them to a KV cache. It should be noted that the state-machine persistence medium in this application is the open-source software RocksDB, a KV cache based on file IO, which has the advantages of embedded deployment, light weight, and a small memory footprint.
Communication messages between nodes fall into four types: election, heartbeat, log data (segment_log), and snapshot. Clients' data read/write requests are converted into log format, and a continuous, monotonically increasing index is maintained uniformly by the leader node; every log entry is executed strictly in index order, and this sequential execution mechanism is one of the guarantees of linearly consistent reads across the nodes.
Implementation schemes for election, log replication, the state machine, and snapshots are described in detail below.
In the election mechanism, each metadata node divides its life cycle into consecutive terms; the term mechanism is shown in Fig. 4, and the life cycle of a service node is divided into two phases, election and operation. Terms in which no leader is elected are skipped directly and provide no external service. The term is a monotonically increasing value starting from 0 on the timeline. Each term is divided into two parts, an election period and a running period, and within each term the whole cluster can have only one leader.
Elections are triggered by a timeout mechanism; the election flow is shown in Fig. 5. Because elections are driven by timeouts, each node adds a random offset to the default timeout to avoid collisions. During normal operation of the metadata service cluster, the leader maintains heartbeats to each follower node, and each node locally maintains unified configuration information, which includes server information such as the number, addresses, and ports of the cluster nodes, as well as the heartbeat interval (heartbeat) and the election timeout (election timeout), where the heartbeat interval is far shorter than the election timeout. After the leader node fails or a network failure occurs, each node locally determines that the heartbeat has been lost for longer than the timeout and then initiates an election request to the whole network.
When initiating an election request, the initiating node changes its role to candidate and increments its term by 1. To reduce the chance that several nodes initiate an election at the same time and collide, the system adds a random offset on top of the election timeout, as sketched below.
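The following is a sketch of this randomization; the constants and names are assumptions for illustration, not values from the application:

```java
import java.util.concurrent.ThreadLocalRandom;

// Randomized election timeout sketch: every node waits the base timeout plus a
// random offset, so two followers rarely time out and run for leader at exactly
// the same instant. The concrete values are illustrative assumptions.
final class ElectionTimer {
    static final long ELECTION_TIMEOUT_MS = 1000; // far larger than the heartbeat interval
    static final long MAX_JITTER_MS = 500;

    static long nextTimeoutMs() {
        return ELECTION_TIMEOUT_MS + ThreadLocalRandom.current().nextLong(MAX_JITTER_MS);
    }
}
```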
When a candidate sends an election request, it carries the local node's commit index and term in the log; after receiving the candidate's request, the other nodes compare their local commit index and term with the candidate's values, and send a rejection response if the local node is more up to date. This mechanism ensures that the log content of the elected leader is the newest.
If no leader is elected within a term, the cluster may have no leader for that term; such a term has no running period, and after incrementing the term by 1, a new round of election continues.
After the election ends, the node that becomes leader changes its role to leader and begins to execute the tasks of the running period; the other candidates change their roles to follower and continue to receive the leader's requests during the running period.
The segmented log file stores the log details (log entries); each log entry is guaranteed unique by term + index. A log entry comprises a term, an index, a type, and data, and the data field stores the complete content of the entity-object metadata.
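A minimal sketch of these structures follows. The Java record shapes and the byte[] payload are assumptions; term, index, type, and data are the fields named above, and the segment metadata carries the three items listed in the summary:

```java
// A log entry is identified uniquely by term + index; the data field carries the
// complete entity-object metadata, e.g. JSON bytes. The type values are not
// enumerated by the application, so a plain string is used here.
record LogEntry(long term, long index, String type, byte[] data) { }

// Per-segment metadata for the segmented, locally persisted log file.
record SegmentMeta(
        long currentTerm,         // term in force when the segment was written
        long startLogIndex,       // starting log number of the segment
        long lastCommittedIndex   // last log number already committed
) { }
```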
After receiving a request sent by the client, the leader node appends the request data to the end of the log set as the newest log entry and broadcasts a log replication request (appendLogEntries) to the follower nodes. After receiving acknowledgements of receipt from a majority of nodes, the leader updates commit_index to the currently acknowledged log index and notifies all nodes of the execution result. After receiving the commit_index update message, every node loads all unexecuted log entries up to commit_index into the state machine for execution (apply); once execution finishes, the state machine updates the executed index (apply_index) to the latest value. For an unresponsive follower node, the leader keeps retrying the log replication request until the follower executes it and responds.
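The following sketch condenses this leader-side flow, reusing the LogEntry record sketched above; Peer and all member names are illustrative assumptions rather than the application's actual interfaces:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative leader-side replication flow: append, broadcast, count majority
// acknowledgements, advance commit_index, then let everyone apply.
interface Peer {
    boolean appendEntries(long term, LogEntry entry); // true = follower acknowledged
    void notifyCommit(long commitIndex);              // push the new commit_index
}

final class LeaderLog {
    private final List<Peer> followers;
    private final List<LogEntry> entries = new ArrayList<>();
    private final long currentTerm;
    private long commitIndex;

    LeaderLog(List<Peer> followers, long term) {
        this.followers = followers;
        this.currentTerm = term;
    }

    long replicate(byte[] requestData) {
        LogEntry entry = new LogEntry(currentTerm, entries.size() + 1L, "segment_log", requestData);
        entries.add(entry);                           // append as the newest log entry
        int acks = 1;                                 // the leader's own copy counts
        for (Peer f : followers) {
            if (f.appendEntries(currentTerm, entry)) acks++;
        }
        int clusterSize = followers.size() + 1;
        if (acks > clusterSize / 2) {                 // majority (N/2 + 1) acknowledged
            commitIndex = entry.index();              // advance commit_index
            for (Peer f : followers) f.notifyCommit(commitIndex);
            // ...apply all unexecuted entries up to commitIndex to the state machine
        }
        return commitIndex;
    }
}
```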
The work of log replication is performed only by the unique leader, which ensures that if two different log entries have the same index and term number, the data they contain must be identical. When the leader node sends a log replication request, it carries the term and index (pre_index) of the preceding log entry; the follower node checks whether these match the index and term number in its local log and rejects the request if they do not. It should be understood that the leader node's log only grows and is never overwritten or deleted; by forcing the followers to replicate its log in order, the leader can guarantee strong consistency of the cluster nodes' data.
The leader node maintains a next_index for each follower, indicating the index of the next log entry to be sent to that follower. After receiving the leader's log replication request, the follower first performs a consistency check and returns a failure result if the index of its newest local log entry is inconsistent with the leader's next_index. After receiving the failure response, the leader decrements next_index and retries; eventually next_index is decremented until it matches the follower's index, the leader resends the log replication requests from the matching position, and the follower's data finally becomes consistent.
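A sketch of this check and back-off, reusing the Peer and LogEntry shapes above; all names are assumptions, and logs are treated as 1-indexed to match the description:

```java
import java.util.List;
import java.util.Map;

// Consistency check and next_index back-off sketch for the mechanism above.
final class LogMatcher {
    // Follower side: accept only if the entry preceding the new batch matches
    // the local log on both index and term (same index + same term => same data).
    static boolean followerAccepts(List<LogEntry> localLog, long prevIndex, long prevTerm) {
        if (prevIndex == 0) return true;                 // replicating from the start
        if (prevIndex > localLog.size()) return false;   // local log is shorter: reject
        return localLog.get((int) prevIndex - 1).term() == prevTerm;
    }

    // Leader side: on a failure response, step next_index back by one and retry
    // from there until the matching position is found.
    static void onRejected(Map<Peer, Long> nextIndex, Peer follower) {
        nextIndex.computeIfPresent(follower, (p, idx) -> idx - 1);
    }
}
```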
The state machine applies the log data in order; the core method is statemachine.apply. In this scheme the log data is persisted using a KV cache: the state machine in this embodiment uses the open-source embedded KV cache software RocksDB as the data persistence tool. Each node of the metadata service maintains a local state machine, is responsible for applying log data to the state machine, and finally persists the data.
Optionally, after the leader node generates a log entry or a follower node completes log replication, the log data is not immediately applied to the state machine; instead, the commit-index mechanism ensures the log data is used correctly. It should be noted that the commit-index mechanism is a majority mechanism: when the leader node sends a log replication request to the followers, it first collects the log indexes that the followers have already replicated (match_index) and computes the maximum index value replicated by a majority (N/2 + 1). After the calculation, it compares this latest index with the local commit_index; if new_index is greater than commit_index, commit_index is updated, completing the advance of commit_index, and the latest commit_index is sent to each follower node through the log replication requests. After receiving the latest commit_index, a follower can apply all not-yet-applied log entries before commit_index (tracked by apply_index) to the state machine.
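A sketch of this majority calculation (names assumed; the match-index array is taken to include the leader's own last index):

```java
import java.util.Arrays;

// Majority commit_index sketch: after sorting the replicated indexes ascending,
// the element at position (N - 1) / 2 is the largest index already stored on at
// least N/2 + 1 nodes; commit_index may only move forward.
final class CommitIndexCalculator {
    static long advance(long[] matchIndexes, long currentCommitIndex) {
        long[] sorted = matchIndexes.clone();
        Arrays.sort(sorted);
        long majorityIndex = sorted[(sorted.length - 1) / 2];
        return Math.max(currentCommitIndex, majorityIndex);
    }
}
```

For example, with five nodes whose match indexes are {5, 5, 4, 3, 2}, the result is 4: the newest index already held by a majority of three nodes.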
The apply operation of the state machine is the operation that actually persists the entity-object metadata. The state machine in this embodiment parses the data field in the log entry, takes the code element as the key and the data body as the value, and persists them to the KV cache through the rocksdb.put method.
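A minimal sketch of this apply step using RocksDB's Java binding and the LogEntry record above; the class name, path handling, and the crude JSON parsing stub are assumptions, while RocksDB.open and db.put are the real RocksDB API:

```java
import java.nio.charset.StandardCharsets;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// State-machine apply sketch: the globally unique object code becomes the key and
// the full metadata body becomes the value, persisted through rocksdb.put.
final class MetadataStateMachine implements AutoCloseable {
    private final RocksDB db;

    MetadataStateMachine(String path) throws RocksDBException {
        RocksDB.loadLibrary();
        this.db = RocksDB.open(new Options().setCreateIfMissing(true), path);
    }

    void apply(LogEntry entry) throws RocksDBException {
        String objectCode = extractObjectCode(entry.data()); // parse the "code" element
        db.put(objectCode.getBytes(StandardCharsets.UTF_8), entry.data());
    }

    private String extractObjectCode(byte[] json) {
        // A real implementation would use a JSON parser; this crude extraction is a stub.
        String s = new String(json, StandardCharsets.UTF_8);
        return s.replaceAll("(?s).*\"code\"\\s*:\\s*\"([^\"]+)\".*", "$1");
    }

    @Override
    public void close() {
        db.close();
    }
}
```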
Fig. 6 is a schematic workflow diagram of a snapshot. As shown in Fig. 6, there are 7 logs in total and the snapshot position is at index 5. After a node is newly added, the leader sends a snapshot file to the follower; the file content is compressed so that only the final values of the data are retained and redundant log data is discarded. After installation completes, the follower continues to apply the log data of indexes 6-7 and finally catches up with the log. It is easy to see that the purpose of a snapshot is to compress log capacity; a node rejoining the cluster, or a newly added node, is a typical use case for a snapshot. In entity-object metadata management the number of logs is often much larger than the actual amount of metadata, since a low-code developer generates multiple logs when repeatedly modifying the same entity object. When a node rejoins or is newly added to the cluster, replaying and executing all logs so that the node's data can catch up with the leader and synchronize the log would incur a great deal of network and IO overhead and consume considerable time.
Creation of a snapshot: in this embodiment, the snapshot mechanism compresses the log capacity and reduces server resource overhead during node initialization, achieving fast loading of newly added nodes. Each node locally maintains a snapshot execution interval (snapshot_period) and starts a task thread that performs the snapshot action locally at that interval. The snapshot content is divided into two parts: the configuration-information snapshot (metadata_snapshot) and the data-file snapshot (data_file_snapshot). The configuration-information snapshot stores the current node's lastIncludedIndex and lastIncludedTerm, the current cluster's node information (servers), and so on; the data-file snapshot stores the data files currently applied to the state machine, and backup of the data files is implemented through RocksDB.
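The configuration part of a snapshot might be modelled as follows; the record shape is an assumption, while the three fields are the ones named above:

```java
import java.util.List;

// Configuration-information snapshot sketch (metadata_snapshot): the last log
// entry folded into the snapshot and the current cluster membership. The data
// files themselves are backed up separately through RocksDB.
record SnapshotMeta(long lastIncludedIndex,
                    long lastIncludedTerm,
                    List<String> servers) { }
```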
Installation of a snapshot: after a new node applies to join the cluster, the leader issues a snapshot-load instruction installSnapshot(peer) to the new node and sends the latest snapshot file. After receiving the snapshot file, the new node first loads the configuration metadata and then loads the data file, and the state machine loads the data file through RocksDB.
After the snapshot data is loaded, the new node copies the remaining log data in batches, from the maximum log index of the snapshot data up to the leader's latest commit index, and applies it to the state machine to finish catching up with the data.
A distributed system faces the CAP problem: consistency, availability, and partition tolerance cannot all be satisfied at the same time. In a distributed scenario, multi-node deployment inevitably leads to partitions. As an application scenario of distributed storage, this embodiment chooses to sacrifice availability in order to meet the requirement of strong data consistency; that is, the cluster can accept a short period of unavailability.
In this embodiment, based on the strong-leader mechanism, leader election requires the agreement of a majority (N/2 + 1), so the cluster can keep the whole service available as long as no more than N/2 - 1 nodes fail.
Fig. 7 illustrates a cluster whose service becomes unavailable due to network partitioning. As shown in Fig. 7, when a partition occurs, a follower keeps trying to become leader and continually initiates votes, so its term grows without bound; after the partition is repaired, the excessive term gap causes continual election events that force the leader to step down, leaving the cluster service unavailable. That is, while the network is partitioned, node 1 keeps initiating elections, causing its local term to increase continuously; after the network recovers, node 1's term is far ahead although its log content lags behind, which leaves the cluster in an election state for a long time. This embodiment avoids such problems through the pre_vote mechanism.
The election is divided into two phases. Before the formal vote, a candidate changes its identity to pre_candidate, initiates a pre_vote request, and waits for the other nodes' responses. If its local term is smaller than the term of a responding node, the local node steps down to follower; only after more than half of the nodes grant the pre-vote does it convert to candidate and initiate the real vote request. The pre_vote mechanism ensures that node 1 cannot initiate a formal vote request while it cannot obtain a majority of votes, which guarantees the normal operation of the leader node after partition recovery.
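A sketch of the two-phase start (all names assumed; the key point is that the probing term increment is not persisted until a majority grants the pre-vote):

```java
import java.util.List;

// Pre-vote sketch: a would-be candidate probes the cluster first. A partitioned
// node that never wins the probe never bumps its real term, so after the
// partition heals it cannot force the healthy leader to step down.
interface VotingPeer {
    boolean grantPreVote(long probeTerm, long candidateCommitIndex);
}

final class PreVoteCandidate {
    private long currentTerm;

    PreVoteCandidate(long term) { this.currentTerm = term; }

    boolean runPreVote(List<VotingPeer> peers, long localCommitIndex) {
        long probeTerm = currentTerm + 1;       // probed, but not yet persisted
        int granted = 1;                        // our own vote
        for (VotingPeer p : peers) {
            if (p.grantPreVote(probeTerm, localCommitIndex)) granted++;
        }
        boolean majority = granted > (peers.size() + 1) / 2;
        if (majority) currentTerm = probeTerm;  // become a real candidate and vote formally
        return majority;
    }
}
```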
It should be noted that there are multiple implementations of distributed consistency protocols, such as Paxos, Raft, ZAB, and Gossip, which address different requirements such as strong consistency and weak consistency. In addition, public chains, consortium chains, and private chains use a variety of consensus mechanisms, which handle issues such as consensus, consistency, identity forgery, and consensus cost in different application scenarios. The present application mainly addresses the data consistency problem in distributed systems and does not specifically limit the choice of consistency protocol or the concrete technique adopted by the consensus mechanism.
Fig. 8 is a schematic structural diagram of a code management apparatus according to an embodiment of the present application. As shown in Fig. 8, the apparatus comprises:
a response module 40, configured to generate a visual programming object in response to an operation instruction on a target object;
a conversion module 42, configured to convert the visual programming object into metadata in a predetermined format; and
a storage module 44, configured to store the metadata, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent.
In this code management apparatus, the response module 40 generates a visual programming object in response to an operation instruction on a target object; the conversion module 42 converts the visual programming object into metadata in a predetermined format; and the storage module 44 stores the metadata, in log form, in a cluster service manager comprising a plurality of distributed nodes whose data states are kept consistent. This achieves the purpose of storing metadata in log form, thereby optimizing the code management schemes of the related art, enabling rapid deployment and elastic scaling of code, clustering the metadata, and achieving high availability, and thus solving the technical problems of the related art in which managing code through a relational database leads to complex operation and maintenance, high cluster cost, and an inability to scale elastically and quickly.
According to another aspect of the embodiments of the present application, a non-volatile storage medium is also provided, the storage medium comprising a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the code management method described above.
According to another aspect of the embodiments of the present application, an electronic device is also provided, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the code management method described above.
Specifically, the storage medium is configured to store program instructions for implementing the following functions:
generating a visual programming object in response to an operation instruction on a target object; converting the visual programming object into metadata in a predetermined format; and storing the metadata, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent.
In the related embodiments of the present application, log-based storage is adopted: a visual programming object is generated in response to an operation instruction on a target object; the visual programming object is converted into metadata in a predetermined format; and the metadata is stored, in log form, in a cluster service manager comprising a plurality of distributed nodes whose data states are kept consistent. Storing metadata in log form optimizes the code management schemes of the related art, enables rapid deployment and elastic scaling of code, clusters the metadata, and achieves high availability, thereby solving the technical problems of the related art in which managing code through a relational database leads to complex operation and maintenance, high cluster cost, and an inability to scale elastically and quickly.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, each embodiment has its own emphasis; for portions not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units may be a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the scope of the present application.

Claims (10)

1. A code management method, comprising:
generating a visual programming object in response to an operation instruction on a target object;
converting the visual programming object into metadata in a predetermined format; and
storing the metadata, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent.
2. The method of claim 1, wherein only one master node can exist among the plurality of distributed nodes in the cluster service manager at any time, an operation request of a client is executed by the master node, and a log is copied by the master node to slave nodes; and the master node further initiates heartbeat connections to the slave nodes and changes the log state when responses are received from more than a preset number of slave nodes.
3. The method of claim 2, wherein storing the metadata in log form in the cluster service manager comprises:
receiving a data request from a client, wherein the data request is used at least for writing the metadata into the cluster service manager;
converting the metadata into ordered log files, replicating the log files to the plurality of slave nodes through the master node, updating a current commit index value to obtain a target commit index value when responses have been obtained from a preset number of slave nodes, and sending the target commit index value to the slave nodes so that the slave nodes apply their log files to a state machine;
controlling the state machine to parse entity-object metadata information in the log files and obtain a globally unique object code and object attribute fields from the parse result; and
constructing a key-value pair with the object code as identification information and the object attribute fields as attribute values, and caching the key-value pair.
4. The method of claim 3, wherein the message types communicated between the plurality of distributed nodes of the cluster service manager include at least election;
when the message communicated between the plurality of distributed nodes is an election message, each of the plurality of distributed nodes divides its life cycle into a plurality of consecutive terms, and only one master node exists in each term, wherein each term comprises an election period and a running period.
5. The method of claim 4, wherein the election comprises:
detecting whether the master node has failed;
when the master node is determined to have failed, and after any slave node determines that the duration for which the heartbeat has been lost exceeds a preset duration, determining a target slave node to initiate an election request; and
determining the target slave node as a candidate node.
6. The method of claim 1, wherein the message types communicated between the plurality of distributed nodes of the cluster service manager include at least log replication, the log replication being achieved as follows:
the master node receives a data request from a client and determines data corresponding to the data request as the latest log data;
the log data is appended to the end of a log set, and log replication requests are broadcast to the plurality of slave nodes;
when responses are received from more than a preset number of slave nodes, the commit index is updated to a target index of the currently confirmed log data and the target index is sent to the slave nodes; and the slave nodes are controlled to load all unexecuted log entries up to the target index into a state machine for execution.
7. The method of claim 6, wherein the log file of each of the plurality of distributed nodes is persisted to a local file in segments, wherein each log segment maintains basic information of the current segment through metadata, the basic information comprising: the current term, the starting log number, and the last committed log number.
8. A code management apparatus, comprising:
a response module, configured to generate a visual programming object in response to an operation instruction on a target object;
a conversion module, configured to convert the visual programming object into metadata in a predetermined format; and
a storage module, configured to store the metadata, in log form, in a cluster service manager, wherein the cluster service manager comprises a plurality of distributed nodes whose data states are kept consistent.
9. A non-volatile storage medium, characterized in that the storage medium comprises a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the code management method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the code management method of any of claims 1 to 7.
CN202210609239.3A (filed 2022-05-31) - Code management method, device, storage medium and electronic equipment - Pending - CN117193855A

Priority Applications (1)

Application Number CN202210609239.3A; Priority Date 2022-05-31; Filing Date 2022-05-31; Title: Code management method, device, storage medium and electronic equipment

Publications (1)

Publication Number CN117193855A; Publication Date 2023-12-08

Family

ID=88994795

Country Status (1)

CN: CN117193855A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination