CN115408407A - Service cooperative processing method, system, terminal and computer storage medium

Publication number
CN115408407A
Authority
CN
China
Prior art keywords
data
node
entry
process operation
sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211042981.7A
Other languages
Chinese (zh)
Inventor
莫高勇
刘珏
李霖
刘纯
谢朝辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Yungu Technology Co Ltd
Original Assignee
Zhongke Yungu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Yungu Technology Co Ltd filed Critical Zhongke Yungu Technology Co Ltd
Priority to CN202211042981.7A
Publication of CN115408407A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2308 Concurrency control
    • G06F 16/2315 Optimistic concurrency control
    • G06F 16/2329 Optimistic concurrency control using versioning
    • G06F 16/2365 Ensuring data consistency and integrity
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/103 Workflow collaboration or project management


Abstract

The application relates to a service cooperative processing method, system, terminal and computer storage medium. The service cooperative processing method comprises the following steps: acquiring a first flow definition file and extracting entry coding data from the first flow definition file; and, when the entry coding data is updated, sending the update content of the entry coding data to the child nodes so that the second flow definition files of the child nodes are updated synchronously. The main node serves as a central control node: it designs and publishes processes to all child nodes worldwide in a unified manner, which improves the consistency and synchronization of process definitions across regions and data centers, and it aggregates the process operation data of all child nodes worldwide, so global process business can be handled in a unified manner at the main node without switching to other child nodes, which improves the efficiency and accuracy of cross-region, cross-data-center collaborative process handling.

Description

Service cooperative processing method, system, terminal and computer storage medium
Technical Field
The present application belongs to the technical field of service processing, and in particular relates to a service cooperative processing method, system, terminal, and computer storage medium.
Background
Process engines are widely used in collaborative office work and business approval, and their architectures can generally be divided into centralized and distributed. The centralized architecture is common in small and medium-sized enterprises: the process engine instances are deployed in the same data center and share one database, all business systems access a unified process platform for business approval, the architecture is simple, and no cross-node data synchronization is needed. The distributed architecture is usually used in large multinational enterprises. Owing to differences in geographic location, language and usage habits, the process engine instances are distributed across data centers around the world, the instances of each data center have an independent database and independent configuration, and each regional node is responsible for the business processes of its own region, while the headquarters node needs to handle the business processes of all regional nodes worldwide in a unified manner; this raises the problems of how to synchronize data between the headquarters node and the regional nodes and how to handle processes collaboratively. Without a process engine that supports global business collaboration, the approval efficiency of global business processes is seriously affected, and business confusion may even result.
At present, when global business collaboration is carried out through a process engine, an independent process engine is usually deployed in each data center around the world, as shown in Fig. 1, for example in continent A, continent B and continent C respectively, and users in different regions are then routed to the corresponding data center through a portal system to handle process business. Because the process engine platforms of the different regions are deployed independently, each region must design, publish and handle its processes on its own. The headquarters node often needs to take part in process handling for multiple regions worldwide, which forces users to switch frequently between process engines and lowers handling efficiency. Moreover, once a process changes, the process data has to be synchronized manually by every regional node worldwide, which easily leads to inconsistent processes and thus affects the business.
Disclosure of Invention
In view of the above technical problems, the present application provides a method, a system, a terminal and a computer storage medium for collaborative processing of services, so as to improve consistency and synchronization of flow definitions of nodes across areas and data centers, and efficiency and accuracy of collaborative processing of flow services.
The application provides a service cooperative processing method, which is applied to a main node and comprises the following steps: acquiring a first flow definition file, and extracting entry coding data from the first flow definition file; and when the entry coding data are updated, sending the updated content of the entry coding data to a child node so as to synchronously update the second process definition file of the child node.
In one embodiment, when there is an update in the entry coding data, sending the update content of the entry coding data to a child node includes: when newly added entry coding data exists, sending the newly added entry coding data to an internationalization platform; and translating the newly added entry coding data through the internationalization platform, determining entry translation data of a plurality of language versions, and, in response to a language selection instruction of the child node, sending the entry translation data of the corresponding language version to the child node.
The application also provides a service cooperative processing method, which is applied to the child node and comprises the following steps: receiving the updating content of entry coding data in a first flow definition file of a main node, and updating a second flow definition file according to the updating content of the entry coding data; acquiring process operation data, wherein the process operation data comprises process instance data; and when new process instance data exist, sending the process operation data corresponding to the new process instance data to the main node so that the main node can process the corresponding process operation data in a unified manner.
In one embodiment, when there is new process instance data, sending process operation data corresponding to the new process instance data to the master node, includes: when the newly added process instance data exist, sending process operation data corresponding to the newly added process instance data to a message queue of a synchronous server; and responding to the subscription process operation data message of the main node through the synchronous server, extracting one or more target process operation data from the message queue, and sending the target process operation data to the main node.
The application also provides a business cooperative processing system, which comprises a main node and at least one sub-node; the main node is used for acquiring a first flow definition file, extracting entry coding data from the first flow definition file, and sending the updated content of the entry coding data to the child node when the entry coding data is updated; and the child node is used for receiving the updated content of the entry encoding data and updating the second flow definition file according to the updated content of the entry encoding data.
In one embodiment, the processing system further comprises an internationalization platform; the main node is also used for sending newly added entry coding data to the internationalization platform when newly added entry coding data exists; the internationalization platform is used for translating the newly added entry coding data, determining entry translation data of a plurality of language versions, and, in response to a language selection instruction of a node, sending the entry translation data of the corresponding language version to the node; wherein the node comprises the master node and the child nodes.
In one embodiment, the child node is further configured to obtain process operation data, where the process operation data includes process instance data, and, when new process instance data exists, to send the process operation data corresponding to the new process instance data to the main node; and the main node is also used for uniformly processing the corresponding process operation data.
In one embodiment, the processing system further comprises a synchronization server; the child node is also used for sending the process running data corresponding to the newly added process instance data to a message queue of a synchronous server when the newly added process instance data exists; the synchronous server is used for responding to the subscription process operation data message of the main node, extracting one or more target process operation data from the message queue, and sending the target process operation data to the main node.
The present application further provides a terminal, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the processing method are implemented.
The present application also provides a computer storage medium, which stores a computer program that, when executed by a processor, implements the steps of the above-described processing method.
The service cooperative processing method, system, terminal and computer storage medium provided by the application take the main node as the central control node: the main node designs and publishes processes to every child node worldwide in a unified manner and aggregates the process operation data of every child node worldwide to itself. This has the following advantages:
1) The efficiency and the accuracy of the flow business cooperative processing of all areas in the world are improved;
2) The consistency and the synchronism of the flow definition of each global area are improved;
3) The unified handling and supervision of all global regional processes are realized;
4) Internationalization of entry coding data defined by the flow is realized.
Drawings
FIG. 1 is a schematic diagram of an existing global flow engine deployment scenario;
fig. 2 is a schematic flowchart of a service cooperative processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a specific entry translation according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a service cooperative processing method provided in the second embodiment of the present application;
fig. 5 is a schematic specific flowchart of data synchronization provided in the second embodiment of the present application;
fig. 6 is a schematic structural diagram of a service cooperative processing system provided in the third embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal according to a fourth embodiment of the present application.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 2 is a schematic flow chart of a service cooperative processing method provided in an embodiment of the present application, where the service cooperative processing method provided in the embodiment of the present application is applied to a master node. As shown in fig. 2, the service cooperative processing method of the present application may include the following steps:
step S101: acquiring a first flow definition file, and extracting entry coding data from the first flow definition file;
optionally, the process definition file includes data such as a process definition, a process node, and a name and a description of a process related module, where the first process definition file is a process definition file locally stored by the master node, and the second process definition file is a process definition file locally stored by the child node.
Optionally, the entry encoding data includes an entry identifier (key value) and entry content.
Step S102: and when the entry encoding data is updated, sending the updated content of the entry encoding data to the child node so as to synchronously update the second process definition file of the child node.
Optionally, the update content of the entry coding data includes, for example, changing the entry content corresponding to entry identifier A from one value to another, adding a new entry identifier B together with its entry content, deleting an entry identifier C together with its entry content, and the like.
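As a minimal illustrative sketch of the data involved, the entry coding data and the kinds of update content described above may be modeled as follows (Java is used here; the class and field names are assumptions for illustration, not names defined by the application):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// An item of entry coding data: an entry identifier (key value) plus its entry content.
record EntryCode(String entryId, String content) {}

// The kinds of update content mentioned above: modified, newly added, deleted.
enum EntryUpdateType { MODIFIED, ADDED, DELETED }

// One piece of update content sent from the main node to a child node.
record EntryUpdate(EntryUpdateType type, String entryId, String newContent) {}

// How a child node could apply such updates to its locally stored entries.
class EntryCodeStore {
    private final Map<String, String> entries = new ConcurrentHashMap<>();

    void apply(EntryUpdate update) {
        switch (update.type()) {
            case ADDED, MODIFIED -> entries.put(update.entryId(), update.newContent());
            case DELETED -> entries.remove(update.entryId());
        }
    }
}
```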
In one embodiment, when there is an update in entry coding data, sending the update content of the entry coding data to a child node includes:
when newly added entry coding data exist, the newly added entry coding data are sent to the internationalization platform;
translating the newly added entry coding data through the internationalization platform, determining entry translation data of a plurality of language versions, responding to a language selection instruction of the child node, and sending the entry translation data of the corresponding language version to the child node.
Optionally, the entry translation process is as shown in Fig. 3. The main node obtains the process definition file, extracts the entry coding data, and judges whether newly added entry coding data exists; if so, it sends the newly added entry coding data to the internationalization platform through a Representational State Transfer (RESTful) interface, and if not, it continues to obtain the process definition file. The internationalization platform translates the newly added entry coding data and judges whether the translation version is greater than the local version; if it is greater, the translation is finished and entry translation data in a plurality of language versions is obtained, and if it is less than or equal to the local version, translation of the newly added entry coding data continues. The internationalization platform then responds to the language selection instruction of each node and sends the entry translation data of the corresponding language version to that node: if the language selected at the main node is Chinese, the Chinese-version entry translation data is sent to the main node; if the language selected at child node 1 is Portuguese, the Portuguese-version entry translation data is sent to child node 1; and if the language selected at child node 2 is English, the English-version entry translation data is sent to child node 2. Each node updates its local entry translations according to the entry translation data sent by the internationalization platform.
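A minimal sketch of the main-node side of this translation flow is given below. It assumes the internationalization platform exposes a RESTful endpoint accepting JSON and that translation progress is tracked by a numeric version; the URL, JSON layout and version check are illustrative assumptions rather than the platform's actual interface.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class EntryTranslationSync {
    private final HttpClient http = HttpClient.newHttpClient();
    // Hypothetical RESTful endpoint of the internationalization platform.
    private final URI endpoint = URI.create("https://i18n.example.com/api/entries");

    // Send one item of newly added entry coding data to the internationalization platform.
    void pushNewEntry(String entryId, String content) throws Exception {
        String json = String.format("{\"entryId\":\"%s\",\"content\":\"%s\"}", entryId, content);
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() / 100 != 2) {
            throw new IllegalStateException("internationalization platform rejected entry " + entryId);
        }
    }

    // Translation is treated as finished only when the translation version reported by the
    // platform is greater than the locally recorded version, as described above.
    boolean translationFinished(long translationVersion, long localVersion) {
        return translationVersion > localVersion;
    }
}
```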
Optionally, in order to implement workflows conforming to the Business Process Model and Notation 2.0 (BPMN 2.0) standard, the open-source workflow framework Activiti is introduced. Constrained by Activiti's own shortcomings in internationalization, distribution and multi-business-system support, the application only uses Activiti for process definition parsing, process execution and querying of related process data, and extends it in the following aspects:
1) Supporting multiple tenants and multiple business systems: process definition management is organized in a three-level structure of business system, module definition and process definition, and a tenant identity (ID) is attached to the module definition and the process definition, so that process definitions are isolated per business system and per tenant; this in turn keeps data such as process instances and task instances derived from those process definitions independent across tenants and business systems;
2) Taking over the process running data: by listening to the events published by Activiti during process execution, such as process instance start and end, task instance start and end, and assignment of a handler, the actual process instance and task instance data are stored in the business database together with the execution order of the process nodes, and the related events are also published in the form of messages to a message queue such as Kafka or RabbitMQ (Rabbit Message Queue) to support possible later extensions (a minimal listener sketch is given after this list);
3) Extending the process definition: a BPMN 2.0 process file is defined in Extensible Markup Language (XML) format; on this basis, a custom XML namespace is added, under which extended node attributes and custom elements are defined, and the various node types defined by BPMN 2.0, such as user task nodes, sequence flows, gateways and global process attributes, store their extended data inside these elements in JSON (JavaScript Object Notation) format; the extended information in the XML is parsed and then stored in the business database, completing the path from front-end process definition design to back-end storage of definition information;
4) Pre-testing the process definition: this mainly includes calibrating the initiating node, splitting process nodes, presetting process events, checking the legality of the process definition syntax, and parsing extension information such as process participants and forms, so as to obtain a complete executable process definition; the correctness and executability of the process definition are then verified in advance, according to the execution order and execution logic configured for the process, by initiating it with a standard process form.
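A minimal sketch of aspect 2) above, taking over process running data by listening to Activiti events and republishing them to a message queue, is given below. It assumes the Activiti ActivitiEventListener interface and the Apache Kafka Java client; the topic name and the simplified JSON payload are illustrative, and persistence of the instance data to the business database is omitted.

```java
import java.util.Properties;

import org.activiti.engine.delegate.event.ActivitiEvent;
import org.activiti.engine.delegate.event.ActivitiEventListener;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProcessRunDataListener implements ActivitiEventListener {

    private final KafkaProducer<String, String> producer;

    public ProcessRunDataListener(Properties kafkaProps) {
        this.producer = new KafkaProducer<>(kafkaProps);
    }

    @Override
    public void onEvent(ActivitiEvent event) {
        // Here the process instance / task instance data and the node execution order would be
        // written to the business database; afterwards the event is republished as a message.
        String payload = String.format("{\"type\":\"%s\",\"processInstanceId\":\"%s\"}",
                event.getType(), event.getProcessInstanceId());
        producer.send(new ProducerRecord<>("process-run-data", event.getProcessInstanceId(), payload));
    }

    @Override
    public boolean isFailOnException() {
        // A failure in the listener should not roll back the business process itself.
        return false;
    }
}
```

Such a listener would typically be registered on the process engine configuration (for example through its event-listener configuration) and would filter inside onEvent for the event types of interest, such as process instance start and end, task creation and completion, and task assignment.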
The business cooperative processing method provided by the embodiment of the application takes the main node as a central control node, and the main node designs and issues the flow to all sub-nodes in the world in a unified manner, so that the consistency of flow definitions of all nodes across regions and data centers is improved, and the internationalization of entry coding data of the flow definitions is realized.
Fig. 4 is a schematic flow chart of the service cooperative processing method provided in the second embodiment of the present application, where the service cooperative processing method provided in the second embodiment of the present application is applied to a child node. As shown in fig. 4, the service cooperative processing method of the present application may include the following steps:
step S201: receiving the updating content of the entry encoding data in the first flow definition file of the main node, and updating the second flow definition file according to the updating content of the entry encoding data;
step S202: acquiring process operation data, wherein the process operation data comprises process instance data;
step S203: and when the newly added process instance data exist, sending the process operation data corresponding to the newly added process instance data to the main node so that the main node can uniformly process the corresponding process operation data.
Optionally, the process run data further comprises task instance data.
In one embodiment, when there is new process instance data, sending process operation data corresponding to the new process instance data to a master node, includes:
when the newly added process instance data exist, the process operation data corresponding to the newly added process instance data are sent to a message queue of a synchronous server;
and responding to the subscription process operation data message of the main node through the synchronous server, extracting one or more target process operation data from the message queue, and sending the target process operation data to the main node.
Optionally, the data synchronization process is as shown in Fig. 5. The data synchronization client of the child node initializes a listener and reads the process operation data, judging whether it contains newly added process instance data; if no newly added process instance data exists, it continues to read the process operation data. If newly added process instance data exists, the corresponding process operation data is stored locally and, at the same time, sent to the synchronization server; in addition, if other related systems are concerned with this process operation data, data messages are sent to those systems (for example, after an employee's leave application submitted through the attendance system is approved, a notice that the leave approval is complete is sent to the human resources system so that it can deduct the employee's meal allowance for that period). The synchronization server receives the corresponding process operation data, stores it in a Kafka message queue and backs up the data at the same time. The synchronization server then responds to the main node's subscription message for process operation data, extracts one or more items of target process operation data from the message queue, and sends them to the main node. The main node stores the target process operation data and forwards it to the responsible approver.
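A minimal sketch of the main-node side of this synchronization is given below, assuming that the synchronization server exposes the collected process operation data through a Kafka topic to which the main node subscribes; the broker address, topic name and group id are illustrative assumptions, and storage and routing to the approver are omitted.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MainNodeRunDataSubscriber {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "sync-server.example.com:9092"); // hypothetical sync server address
        props.put("group.id", "main-node");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribe to the queue holding the child nodes' process operation data.
            consumer.subscribe(List.of("process-run-data"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // The target process operation data would be stored locally and routed
                    // to the responsible approver here.
                    System.out.printf("process instance %s: %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```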
In the service cooperative processing method provided by the second embodiment, the main node serves as the central control node and the process operation data of all child nodes worldwide is aggregated to it, so global process business can be handled in a unified manner at the main node without switching to other child nodes; this improves the efficiency and accuracy of cross-region, cross-data-center collaborative process handling and realizes unified handling and supervision of the processes of all regions worldwide.
Fig. 6 is a schematic structural diagram of a service cooperative processing system according to a third embodiment of the present application. The business cooperative processing system of the application comprises: a main node and at least one sub-node;
in one embodiment, the master node is configured to acquire a first flow definition file, extract entry coding data from the first flow definition file, and send update content of the entry coding data to the child node when the entry coding data is updated;
the child node is used for receiving the updating content of the entry encoding data in the first flow definition file of the main node and updating the second flow definition file according to the updating content of the entry encoding data.
In one embodiment, the service cooperative processing system further comprises an internationalization platform;
the main node is also used for sending the newly added entry coding data to the internationalization platform when the newly added entry coding data exists;
the internationalization platform is used for translating the newly added entry coding data, determining entry translation data of a plurality of language versions, responding to a language selection instruction of the node, and sending the entry translation data of the corresponding language version to the node; the nodes comprise main nodes and sub nodes.
In one embodiment, the child node is further configured to obtain process operation data, where the process operation data includes process instance data, and, when new process instance data exists, to send the process operation data corresponding to the new process instance data to the main node;
the main node is also used for uniformly processing corresponding process operation data.
In one embodiment, the service cooperative processing system further includes a synchronization server;
the child nodes are also used for sending the process running data corresponding to the newly added process instance data to the message queue of the synchronous server when the newly added process instance data exist;
the synchronous server is used for responding to the subscription process operation data message of the main node, extracting one or more target process operation data from the message queue, and sending the target process operation data to the main node.
The main node and the child nodes are similar in overall structure: both have an independent process engine system and a process data management module, can independently undertake the initiation, approval, operation, monitoring, management and optimization of processes, and are each responsible for the process business of the service area (covering one or more countries) of that node. They differ structurally as follows:
1) The main node is additionally provided with a process definition design module, used for designing the process definition files of all nodes worldwide, including the main node itself, and for publishing and controlling these files;
2) The main node is additionally provided with a unified to-do management module, used for unified display of the to-do data of all nodes worldwide and for jumping to task approval;
3) The main node is additionally provided with an internationalization synchronization module, used for synchronizing newly added entry coding data to the internationalization platform;
4) Each child node is additionally deployed with a data synchronization client, used for transmitting that node's process operation data to the synchronization server (a minimal client sketch is given after this list).
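A minimal sketch of such a data synchronization client on a child node is given below; it forwards the process operation data of newly created process instances to the synchronization server's message queue. The Kafka transport and topic name are assumptions kept consistent with the sketches above.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChildNodeSyncClient implements AutoCloseable {

    private final KafkaProducer<String, String> producer;

    public ChildNodeSyncClient(String syncServerBootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", syncServerBootstrap);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Called when newly added process instance data is detected locally: the local copy is
    // kept by the node itself, and the corresponding run data is pushed to the sync server.
    public void onNewProcessInstance(String processInstanceId, String runDataJson) {
        producer.send(new ProducerRecord<>("process-run-data", processInstanceId, runDataJson));
    }

    @Override
    public void close() {
        producer.close();
    }
}
```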
For the specific implementation process of this embodiment, reference is made to the first embodiment and the second embodiment, which are not described herein again.
The service cooperative processing system provided by the third embodiment of the application adopts distributed node deployment: each node is responsible for supporting the business of its own region, data is synchronized between the main node and the child nodes, and the main node is responsible for unified to-do processing and for the remote publication and distribution of process definitions, handling and supervising the processes of all regions worldwide in a unified manner. This improves the processing speed and the consistency of implementation of global regional process business, and realizes BPMN 2.0-standard process engine driving and the internationalization of the entry coding data of process definitions.
Fig. 7 is a schematic structural diagram of a terminal according to the fourth embodiment of the present disclosure. The terminal of the application includes: a processor 110, a memory 111, and a computer program 112 stored in the memory 111 and operable on the processor 110. The steps in the embodiments of the business coprocessing method described above are implemented when the processor 110 executes the computer program 112.
The terminal may include, but is not limited to, a processor 110, a memory 111. Those skilled in the art will appreciate that fig. 7 is merely an example of a terminal and is not intended to be limiting and may include more or fewer components than those shown, or some of the components may be combined, or different components, e.g., the terminal may also include input-output devices, network access devices, buses, etc.
The Processor 110 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 111 may be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 111 may also be an external storage device of the terminal, such as a plug-in hard disk provided on the terminal, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 111 may also include both an internal storage unit of the terminal and an external storage device. The memory 111 is used for storing computer programs and other programs and data required by the terminal. The memory 111 may also be used to temporarily store data that has been output or is to be output.
The present application further provides a computer storage medium, where a computer program is stored on the computer storage medium, and when the computer program is executed by a processor, the steps in the embodiment of the service cooperative processing method are implemented.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A service cooperative processing method is applied to a main node and comprises the following steps:
acquiring a first flow definition file, and extracting entry coding data from the first flow definition file;
and when the entry coding data are updated, sending the updated content of the entry coding data to a child node so as to synchronously update the second process definition file of the child node.
2. The processing method of claim 1, wherein when there is an update in the entry-encoding data, sending the updated content of the entry-encoding data to a child node, comprises:
when newly added entry coding data exist, the newly added entry coding data are sent to an internationalization platform;
translating the newly added entry coding data through the internationalization platform, determining entry translation data of a plurality of language versions, responding to a language selection instruction of the child node, and sending the entry translation data of the corresponding language version to the child node.
3. A service cooperative processing method is applied to a child node, and comprises the following steps:
receiving the updating content of entry coding data in a first flow definition file of a main node, and updating a second flow definition file according to the updating content of the entry coding data;
acquiring process operation data, wherein the process operation data comprises process instance data;
and when new process instance data exist, sending the process operation data corresponding to the new process instance data to the main node so that the main node can process the corresponding process operation data in a unified manner.
4. The processing method of claim 3, wherein when new process instance data exists, sending process operation data corresponding to the new process instance data to the master node, comprises:
when the newly added process instance data exist, sending the process operation data corresponding to the newly added process instance data to a message queue of a synchronous server;
and responding to the subscription process operation data message of the main node through the synchronous server, extracting one or more target process operation data from the message queue, and sending the target process operation data to the main node.
5. A business cooperative processing system is characterized in that the processing system comprises a main node and at least one sub-node;
the main node is used for acquiring a first flow definition file, extracting entry coding data from the first flow definition file, and sending the updated content of the entry coding data to the child node when the entry coding data is updated;
and the child node is used for receiving the updated content of the entry encoding data and updating the second flow definition file according to the updated content of the entry encoding data.
6. The processing system of claim 5, wherein the processing system further comprises an internationalized platform;
the main node is also used for sending the newly added entry coding data to the international platform when the newly added entry coding data exists;
the internationalization platform is used for translating the newly added entry coding data, determining entry translation data of a plurality of language versions, responding to a language selection instruction of a node, and sending the entry translation data of the corresponding language version to the node; wherein the node comprises the master node and the child nodes.
7. The processing system of claim 5, wherein the child node is further configured to obtain process operation data, wherein the process operation data includes process instance data, and when new process instance data exists, send process operation data corresponding to the new process instance data to the main node;
and the main node is also used for uniformly processing the corresponding process operation data.
8. The processing system of claim 7, wherein the processing system further comprises a synchronization server;
the child node is further configured to send process running data corresponding to the newly added process instance data to a message queue of a synchronization server when the newly added process instance data exists;
the synchronous server is used for responding to the subscription process operation data message of the main node, extracting one or more target process operation data from the message queue, and sending the target process operation data to the main node.
9. A terminal, characterized in that the terminal comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the processing method according to any one of claims 1 to 4 when executing the computer program.
10. A computer storage medium storing a computer program, wherein the computer program is executed by a processor to implement the steps of the processing method according to any one of claims 1 to 4.
CN202211042981.7A 2022-08-29 2022-08-29 Service cooperative processing method, system, terminal and computer storage medium Pending CN115408407A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211042981.7A CN115408407A (en) 2022-08-29 2022-08-29 Service cooperative processing method, system, terminal and computer storage medium

Publications (1)

Publication Number Publication Date
CN115408407A true CN115408407A (en) 2022-11-29

Family

ID=84160642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211042981.7A Pending CN115408407A (en) 2022-08-29 2022-08-29 Service cooperative processing method, system, terminal and computer storage medium

Country Status (1)

Country Link
CN (1) CN115408407A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488652A (en) * 2020-11-30 2021-03-12 乐刷科技有限公司 Work order auditing method, system, terminal and storage medium
CN112488652B (en) * 2020-11-30 2024-05-10 乐刷科技有限公司 Work order auditing method, system, terminal and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination