CN107463434B - Distributed task processing method and device - Google Patents
- Publication number: CN107463434B (application CN201710686163.3A)
- Authority: CN (China)
- Prior art keywords: container, task, language runtime, call request, processed
- Legal status: Active
Classifications
- G—PHYSICS › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F 9/46—Multiprogramming arrangements › G06F 9/466—Transaction processing
- G—PHYSICS › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F 9/54—Interprogram communication › G06F 9/542—Event management; Broadcasting; Multicasting; Notifications
Abstract
The application provides a distributed task processing method and device. A task to be processed is acquired through a distributed real-time message bus and forwarded to a language runtime container, which executes it. During execution, whenever asynchronous services can be processed in parallel, multiple asynchronous call requests are sent over the distributed real-time message bus to a computation-class service component container and a resource-class service component container; the language runtime container then receives and processes the results those containers return for the asynchronous call requests, until the task is completed. Compared with the prior art, this improves the system's business processing capacity, shortens the processing time of complex business logic in mobile internet scenarios, and achieves low-latency response.
Description
Technical Field
The application relates to the field of computers, in particular to a distributed task processing technology.
Background
Big-data technologies, which originated with the internet, have developed rapidly in recent years; to optimize user experience, businesses in every industry need them to improve their service capabilities. Taking the financial industry as an example, the IT focus of traditional banks has gradually shifted from providing standardized deposit, credit, and payment products to the mainstream trend of providing personalized, scenario-based services to customers, with mobile internet devices as the main channel and big-data and intelligent technologies as the foundation. Compared with traditional online-banking and mobile-banking applications, mobile internet scenario applications have the following new characteristics:
1) From passive response to active prediction. The system not only responds passively to customers' transactions and information queries, but also continuously collects customers' browsing behavior, integrates their latest transactions and other interaction data across channels, predicts their product and service needs, and pushes personalized products and information in step with their click behavior.
2) Real-time processing of mass data. Massive device data, such as a customer's geographic location, is collected in real time from smartphones or other wearable devices; combined with the customer's consumption preferences and segment needs, suitable third-party non-financial services are recommended, and the customer is guided to nearby bank branches and self-service devices.
3) Deeper scenario integration. As scenario applications deepen, banks must not only raise their service level in collecting and processing massive new data; the customer relationship is also no longer limited to financial needs but extends to everyday life, giving banks the opportunity to use their resource-integration advantages to provide both higher-quality, timely financial services and more comprehensive life services.
Against this business background, the data service capacity demanded of a bank's IT systems is hundreds or even thousands of times what it was in the past. It is therefore necessary to apply distributed task processing technology to improve business processing capability.
Disclosure of Invention
An object of the present application is to provide a distributed task processing method and apparatus.
According to an aspect of the present application, a distributed task processing method is provided, wherein the method includes:
acquiring a task to be processed through a distributed real-time message bus, and forwarding the task to be processed to a language runtime container;
executing the task to be processed by using the language runtime container, wherein, during execution, when asynchronous services are to be processed in parallel, a plurality of asynchronous call requests are sent through the distributed real-time message bus to a computation-class service component container and a resource-class service component container, respectively;
and receiving and processing, by using the language runtime container, the processing results of the computation-class service component container and the resource-class service component container for the asynchronous call requests, until execution of the task to be processed is completed.
According to another aspect of the present application, there is provided a distributed task processing apparatus, wherein the apparatus includes:
a first device, configured to acquire a task to be processed through a distributed real-time message bus and forward it to a language runtime container;
a second device, configured to execute the task to be processed by using the language runtime container, wherein, during execution, when asynchronous services are to be processed in parallel, a plurality of asynchronous call requests are sent through the distributed real-time message bus to a computation-class service component container and a resource-class service component container, respectively;
and a third device, configured to receive and process, by using the language runtime container, the processing results of the computation-class service component container and the resource-class service component container for the asynchronous call requests, until execution of the task to be processed is completed.
According to yet another aspect of the application, there is provided an electronic device comprising at least a processor and a memory, the processor being configured to execute the following instructions:
acquiring a task to be processed through a distributed real-time message bus, and forwarding the task to be processed to a language runtime container;
executing the task to be processed by using the language runtime container, wherein, during execution, when asynchronous services are to be processed in parallel, a plurality of asynchronous call requests are sent through the distributed real-time message bus to a computation-class service component container and a resource-class service component container, respectively;
and receiving and processing, by using the language runtime container, the processing results of the computation-class service component container and the resource-class service component container for the asynchronous call requests, until execution of the task to be processed is completed.
Compared with the prior art, the method and device of the present application acquire a task to be processed through a distributed real-time message bus, forward it to a language runtime container, and execute it there. During execution, whenever asynchronous services can be processed in parallel, multiple asynchronous call requests are sent over the distributed real-time message bus to the computation-class service component container and the resource-class service component container; the language runtime container then receives and processes the results those containers return for the asynchronous call requests, until the task is completed. This improves the system's business processing capacity, shortens the processing time of complex business logic in mobile internet scenarios, and achieves low-latency response.
Drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings, in which:
FIG. 1 illustrates a flow diagram of a distributed task processing method according to one embodiment of the present application;
FIG. 2 illustrates a program execution logic diagram for distributed task processing in accordance with a preferred embodiment of the present application;
FIG. 3 shows a flow diagram of a distributed task processing method according to another embodiment of the present application;
FIG. 4 shows a schematic diagram of a distributed task processing device according to an embodiment of the present application;
FIG. 5 shows a schematic diagram of a distributed task processing device according to another embodiment of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 shows a flowchart of a distributed task processing method according to an embodiment of the present application, the method including step S11, step S12, and step S13.
Specifically, in step S11, device 1 acquires a task to be processed through a distributed real-time message bus and forwards it to a language runtime container. In step S12, device 1 executes the task by using the language runtime container; during execution, when asynchronous services are to be processed in parallel, it issues a plurality of asynchronous call requests through the distributed real-time message bus to the computation-class service component container and the resource-class service component container, respectively. In step S13, device 1 uses the language runtime container to receive and process the results returned by the two containers for those asynchronous call requests, until execution of the task is completed.
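Steps S11 to S13 can be sketched as a minimal single-process Python program, with the distributed real-time message bus modeled as an in-memory queue and the two service component containers as plain functions; all names and data below are illustrative assumptions, not the patent's actual implementation:

```python
import queue
import threading

bus = queue.Queue()  # stands in for the distributed real-time message bus

def compute_container(request):
    # computation-class service component: pure CPU work
    return request["x"] * request["x"]

def resource_container(request):
    # resource-class service component: e.g. a database or web-service lookup
    return {"balance": 100 + request["x"]}

def runtime_container(task):
    results = {}
    # Step S12: on reaching parallel asynchronous services, fan both
    # call requests out at once instead of awaiting each in turn.
    workers = [
        threading.Thread(target=lambda: results.update(sq=compute_container(task))),
        threading.Thread(target=lambda: results.update(res=resource_container(task))),
    ]
    for w in workers:
        w.start()
    # Step S13: receive and merge both processing results.
    for w in workers:
        w.join()
    return results

bus.put({"x": 7})                       # Step S11: a task arrives on the bus
outcome = runtime_container(bus.get())  # ...and is forwarded to the runtime
```

In the real system the fan-out in step S12 travels over the distributed message bus to containers on other nodes; the threads here only stand in for that parallelism.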
Here, device 1 includes, but is not limited to, an integration of network devices over a network. A network device is an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like. A network device may be a computer, a network host, a single network server, a set of network servers, or a cloud of servers; here, the cloud consists of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a collection of loosely coupled computers forms one virtual supercomputer. The network includes, but is not limited to, the internet, wide area networks, metropolitan area networks, local area networks, VPNs, and wireless ad hoc networks. Preferably, device 1 may also be a script program running on a network device. Of course, those skilled in the art will appreciate that the above device 1 is merely an example; other existing or future forms of device 1, if applicable to the present application, are also intended to be within its scope and are hereby incorporated by reference.
Preferably, the language runtime container, the computing class service component container, and the resource class service component container are implemented based on an Akka micro-service software framework.
Using the Akka micro-service software framework as the base software platform has the following advantages: (1) The Actor concurrent programming model hides the underlying communication and thread-scheduling details, making programming easier. (2) Performance is high: a single node can handle 50 million messages per second. (3) Resource requirements are low: each GB of memory can hold 2.5 million Actor objects. (4) It is mature: it is the underlying architecture software of the Spark big-data platform and won the JAX Innovation Award in 2015. (5) It is written in the powerful Scala language, which makes implementation efficient, and it interoperates with other JVM-language class libraries. (6) Its gossip cluster communication protocol, derived from Amazon's Dynamo system, has been verified over a long period on Amazon's cloud service platform and is highly reliable; it supports cluster application scenarios such as Cluster Sharding, Cluster Singleton, Distributed Publish Subscribe, and Distributed Data. (7) In the supervisor pattern, a parent Actor can capture exceptions in its child Actors and automatically apply different restart strategies, which greatly improves fault tolerance and service availability and reduces the difficulty of writing highly available code. (8) Various Actor concurrency and load-balancing strategies are implemented through combinations of routers and dispatchers, such as round-robin, random, broadcast, and consistent hashing.
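The load-balancing strategies named in point (8) — round-robin and consistent hashing among them — can be illustrated with a minimal Python sketch. This is not Akka's router API; the modulo-based consistent hashing below also omits the virtual-node hash ring a production router would use:

```python
import hashlib
import itertools

class RoundRobinRouter:
    """Cycles call requests over the routee instances in order."""
    def __init__(self, routees):
        self._cycle = itertools.cycle(routees)

    def route(self, _message):
        return next(self._cycle)

class ConsistentHashRouter:
    """Always maps the same message key to the same routee instance
    (simplified modulo hashing, not a full hash ring)."""
    def __init__(self, routees):
        self._routees = routees

    def route(self, message_key):
        digest = hashlib.md5(message_key.encode()).hexdigest()
        return self._routees[int(digest, 16) % len(self._routees)]

rr = RoundRobinRouter(["actor-a", "actor-b"])
ch = ConsistentHashRouter(["actor-a", "actor-b", "actor-c"])
```

Round-robin spreads stateless computation requests evenly, while consistent hashing keeps all requests for one key (e.g. one customer) on the same Actor, which matters when that Actor holds session state.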
Of course, those skilled in the art will appreciate that the Akka micro-service software framework (see www.akka.io) described above is merely an example. Other existing or future micro-service software frameworks, such as Node.js (a platform built on the Chrome JavaScript runtime for building fast, easily scalable network applications), Finagle (Twitter's fault-tolerant, protocol-agnostic RPC framework built on Netty), RxJava (a reactive-extensions library for the Java virtual machine providing asynchronous calls based on observable sequences and event-based programming), or Vert.x (a lightweight, high-performance JVM application platform), are also intended to be within the scope of the present application, insofar as they are applicable, and are hereby incorporated by reference.
In step S11, the device 1 acquires the task to be processed through the distributed real-time message bus, and forwards the task to be processed to the language runtime container.
Communication is effected via the distributed real-time message bus. The language runtime container may be implemented based on an Actor of the Akka micro-service software framework.
Preferably, in step S11, the device 1 acquires the task to be processed sent by the user equipment through the distributed real-time message bus.
Here, the user equipment includes, but is not limited to: smartphones and tablets with the corresponding client installed, and computers with an online-banking page open. For example, referring to fig. 2, the user equipment may fetch web page resources from the Web server and establish a connection with the distributed real-time message bus; the task to be processed sent by the user equipment is then acquired over that bus.
Preferably, the task to be processed comprises a remote service call request; in step S11, device 1 acquires the remote service call request through the distributed real-time message bus, converts it into a function call request, and forwards the function call request to the language runtime container.
For example, referring to fig. 2, the user equipment initiates the remote service call request over the distributed real-time message bus; upon receiving it, the service agent converts the remote service call request into a function call request and forwards it to the language runtime container.
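The service agent's conversion of a remote service call request into a function call request might look like the following sketch; the request shape and the `FUNCTIONS` registry are invented for illustration and are not part of the patent:

```python
# hypothetical registry of functions exposed by the language runtime container
FUNCTIONS = {
    "account.query": lambda account_id: {"account_id": account_id, "status": "ok"},
}

def service_agent(remote_request):
    """Convert a remote service call request into a function call request
    and forward it to the language runtime container."""
    function_name = remote_request["service"] + "." + remote_request["method"]
    call = {"function": function_name, "args": remote_request.get("args", [])}
    # here the "forwarding" is a direct lookup and invocation
    return FUNCTIONS[call["function"]](*call["args"])

reply = service_agent({"service": "account", "method": "query", "args": ["A-001"]})
```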
In step S12, device 1 executes the task to be processed by using the language runtime container; during execution, when asynchronous services are to be processed in parallel, it issues a plurality of asynchronous call requests through the distributed real-time message bus to the computation-class service component container and the resource-class service component container, respectively.
For example, the language runtime container looks up the function definition and creates a program run instance. The language runtime container is loaded with a distributed parallel processing language interpreter, which interprets the application code and invokes the relevant service components in a distributed, parallel manner to complete the task cooperatively. Preferably, the interpreter discovers opportunities for parallel processing of asynchronous services by recognizing specific tags in the application code.
Preferably, the language runtime container is a Zebra language runtime container, and the distributed parallel processing language interpreter is a Zebra language interpreter.
For example, referring to fig. 2, the Zebra language interpreter executes Zebra script statements in program order; when it reaches asynchronous services that can be processed in parallel, it simultaneously issues asynchronous call requests to computation-class component container A, resource-class component container B, and resource-class component container C, and switches the context of the current program run instance into the set of dormant program instances waiting for micro-service call responses.
Here, the Zebra distributed parallel processing language supports distributed parallel processing and handles asynchronous callbacks well. It follows the design philosophy of MDA (Model Driven Architecture): different languages and runtime containers can be used, so that in principle a programmer's business logic code is written once and then runs in different environments (including outside the JVM); with suitable service abstractions, optimization of the lower-layer software architecture has no impact on the business function code.
Preferably, executing the task to be processed by using the language runtime container includes: parsing the task into a syntax tree by using the language runtime container, and executing the syntax tree.
For example, the Zebra language runtime container looks up function definitions, parses the task to be processed into a Zebra syntax tree structure, and creates a program runtime instance.
Preferably, the language runtime container is loaded with a distributed parallel processing language interpreter; wherein executing the syntax tree comprises: and dynamically constructing, executing and cleaning a statement execution tree by using the distributed parallel processing language interpreter in the process of executing the syntax tree.
Here, following the language-implementation patterns of Terence Parr, the creator of ANTLR, the distributed parallel processing language interpreter of this embodiment can be classed as a tree-based interpreter: on the basis of the complete syntax tree, it dynamically constructs and cleans up a statement execution tree during interpretation, which effectively saves memory. The statement execution tree can be viewed as a running instance of a parse-tree object.
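The idea of a tree-based interpreter — walking a parsed syntax tree, holding only the current statement's execution state in memory, and discarding it when the statement finishes — can be shown with a toy sketch. The Zebra language itself is not public, so the node types and the tiny program below are illustrative assumptions:

```python
class Node:
    """A syntax-tree node: an operation plus its child nodes/operands."""
    def __init__(self, op, *children):
        self.op, self.children = op, children

def execute(node, env):
    """Walk the already-parsed syntax tree; per-statement execution state
    lives only on the call stack and is freed when the statement returns."""
    if node.op == "lit":
        return node.children[0]
    if node.op == "var":
        return env[node.children[0]]
    if node.op == "add":
        return execute(node.children[0], env) + execute(node.children[1], env)
    if node.op == "assign":
        name, expr = node.children
        env[name] = execute(expr, env)
        return env[name]
    if node.op == "block":
        result = None
        for stmt in node.children:  # one statement execution tree at a time
            result = execute(stmt, env)
        return result
    raise ValueError(f"unknown op {node.op}")

env = {}
program = Node("block",
    Node("assign", "x", Node("lit", 2)),
    Node("assign", "y", Node("add", Node("var", "x"), Node("lit", 3))),
)
final = execute(program, env)
```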
In step S13, the device 1 receives and processes the processing results of the computation class service component container and the resource class service component container for the asynchronous call request by using the language runtime container until the execution of the task to be processed is completed.
For example, the language runtime container receives the results returned by the computation-class service component container and the resource-class service component container over the distributed message bus and processes them one by one. As each result message is processed, the associated program run instance is switched back into the runtime container (step 7 in fig. 2), an asynchronous-call state is marked at the corresponding syntax tree node, and execution attempts to continue; if it cannot yet proceed, the instance is switched back to the dormant waiting state. In the embodiment of fig. 2, only after all three result messages (6-A), (6-B), and (6-C) have been received can the program run instance run to completion without pausing, finish all of the called function's logic, and release its memory; the function execution result is then returned by the Zebra runtime container to the service agent.
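The result-by-result resumption described above — mark the awaited node when its message arrives, resume only once nothing is still outstanding — can be sketched as follows (the call identifiers reuse the figure's labels; the class names are assumptions):

```python
class AsyncCallNode:
    """Syntax-tree node whose value is an outstanding asynchronous call."""
    def __init__(self, call_id):
        self.call_id = call_id
        self.result = None
        self.completed = False

class ProgramInstance:
    def __init__(self, pending_calls):
        self.nodes = {c: AsyncCallNode(c) for c in pending_calls}
        self.state = "sleeping"  # parked in the dormant instance set

    def on_result(self, call_id, result):
        # Switch back into the runtime container, mark the node's
        # asynchronous-call state, and try to continue execution.
        node = self.nodes[call_id]
        node.result, node.completed = result, True
        if all(n.completed for n in self.nodes.values()):
            self.state = "completed"  # run to the end, release the instance
            return {c: n.result for c, n in self.nodes.items()}
        self.state = "sleeping"  # results still missing: park again
        return None

inst = ProgramInstance(["6-A", "6-B", "6-C"])
inst.on_result("6-A", 1)          # still waiting on 6-B and 6-C
inst.on_result("6-B", 2)          # still waiting on 6-C
final = inst.on_result("6-C", 3)  # all results in: execution completes
```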
Preferably, the method further comprises: the device 1 returns the processing result of the task to be processed to the user device through the distributed real-time message bus.
For example, referring to fig. 2, the user equipment initiates a remote service invocation request through the distributed real-time message bus, and after the processing is completed, the service agent returns a processing result of the remote service invocation request to the user equipment through the distributed real-time message bus.
Preferably, as shown in fig. 3, the method further includes step S14'; in step S14', the device 1 receives the asynchronous call request by using the compute class service component container, routes and distributes the asynchronous call request to the compute class component instance for execution, and returns the processing result of the asynchronous call request to the language runtime container.
For example, referring to FIG. 2, compute class component container A receives a service invocation request (4-A), routes it directly to a component instance execution (4-A-1), and returns the result to the Zebra language runtime container (6-A).
Preferably, as shown in fig. 3, the method further includes step S15'; in step S15', the device 1 receives the asynchronous call request by using the resource service component container, puts the asynchronous call request into a cache queue, uniformly schedules the asynchronous call request to be sent to an idle resource component instance for execution, and returns a processing result of the asynchronous call request to the language runtime container.
For example, the resource-class service component container puts a received asynchronous call request into a cache queue. If an idle service component instance is available to take a task, the container extracts the request from the queue; if none is idle, the container may also create a new service component instance to take the task, provided the maximum number of instances is not exceeded.
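The resource-class container's dispatch policy — queue first, reuse an idle instance if one exists, create a new instance only below the cap — can be sketched as follows (a synchronous toy model with invented names, not the patent's implementation):

```python
import collections

class ResourceContainer:
    """Buffers asynchronous call requests and dispatches them to idle
    component instances, creating new ones only up to max_instances."""
    def __init__(self, max_instances):
        self.max_instances = max_instances
        self.idle = []       # ids of idle component instances
        self.created = 0
        self.queue = collections.deque()  # cache queue of pending requests

    def submit(self, request):
        self.queue.append(request)
        return self._dispatch()

    def _dispatch(self):
        if not self.queue:
            return None
        if not self.idle and self.created < self.max_instances:
            self.created += 1                             # create a new instance
            self.idle.append(f"instance-{self.created}")
        if not self.idle:
            return None          # all instances busy: request stays queued
        instance = self.idle.pop()
        return (instance, self.queue.popleft())

    def release(self, instance):
        self.idle.append(instance)  # instance is idle again: pull next request
        return self._dispatch()

rc = ResourceContainer(max_instances=1)
first = rc.submit("req-1")   # dispatched to the newly created instance
second = rc.submit("req-2")  # queued: no idle instance, cap reached
```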
Preferably, the idle resource class component instance executes the asynchronous call request by at least any one of: the resource class component instance calls an external network service to process the asynchronous call request through asynchronous I/O; the resource class component instance processes the asynchronous call request through a blocking service call database service.
For example, referring to fig. 2, resource-class component container B receives service call request (4-B), puts it into the pending-task queue, and schedules it onto an idle component instance (4-B-1); that instance calls an external web service through asynchronous I/O and, after processing, returns the result to the Zebra language runtime container (6-B). Resource-class component container C receives service call request (4-C), puts it into the pending-task queue, and schedules it onto an idle component instance (4-C-1); that instance calls a database service through a blocking call and, after processing, returns the result to the Zebra language runtime container (6-C).
Fig. 4 shows a distributed task processing device 1 according to an embodiment of the present application, wherein the device 1 comprises a first means 11, a second means 12 and a third means 13.
Specifically, the first device 11 acquires a task to be processed through a distributed real-time message bus and forwards it to a language runtime container; the second device 12 executes the task by using the language runtime container, issuing, during execution, a plurality of asynchronous call requests through the distributed real-time message bus to the computation-class service component container and the resource-class service component container when asynchronous services are to be processed in parallel; the third device 13 uses the language runtime container to receive and process the results returned by the two containers for those asynchronous call requests, until execution of the task is completed.
Here, device 1 includes, but is not limited to, an integration of network devices over a network. A network device is an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like. A network device may be a computer, a network host, a single network server, a set of network servers, or a cloud of servers; here, the cloud consists of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which a collection of loosely coupled computers forms one virtual supercomputer. The network includes, but is not limited to, the internet, wide area networks, metropolitan area networks, local area networks, VPNs, and wireless ad hoc networks. Preferably, device 1 may also be a script program running on a network device. Of course, those skilled in the art will appreciate that the above device 1 is merely an example; other existing or future forms of device 1, if applicable to the present application, are also intended to be within its scope and are hereby incorporated by reference.
Preferably, the language runtime container, the computing class service component container, and the resource class service component container are implemented based on an Akka micro-service software framework.
Using the Akka micro-service software framework as the base software platform has the following advantages: (1) The Actor concurrent programming model hides the underlying communication and thread-scheduling details, making programming easier. (2) Performance is high: a single node can handle 50 million messages per second. (3) Resource requirements are low: each GB of memory can hold 2.5 million Actor objects. (4) It is mature: it is the underlying architecture software of the Spark big-data platform and won the JAX Innovation Award in 2015. (5) It is written in the powerful Scala language, which makes implementation efficient, and it interoperates with other JVM-language class libraries. (6) Its gossip cluster communication protocol, derived from Amazon's Dynamo system, has been verified over a long period on Amazon's cloud service platform and is highly reliable; it supports cluster application scenarios such as Cluster Sharding, Cluster Singleton, Distributed Publish Subscribe, and Distributed Data. (7) In the supervisor pattern, a parent Actor can capture exceptions in its child Actors and automatically apply different restart strategies, which greatly improves fault tolerance and service availability and reduces the difficulty of writing highly available code. (8) Various Actor concurrency and load-balancing strategies are implemented through combinations of routers and dispatchers, such as round-robin, random, broadcast, and consistent hashing.
Of course, those skilled in the art will appreciate that the Akka micro-service software framework (see www.akka.io) described above is merely exemplary; other existing or future micro-service software frameworks, as applicable to this application, are intended to be encompassed within its scope and are hereby incorporated by reference. Examples include Node.js (a platform built on the Chrome JavaScript runtime for conveniently building fast, easily scalable network applications), Finagle (Twitter's fault-tolerant, protocol-agnostic RPC framework built on Netty), Akka (a Scala library that simplifies the Actor model for writing fault-tolerant, highly scalable Java and Scala applications), RxJava (a reactive-extensions library for the Java virtual machine providing asynchronous calls based on observable sequences and event-based programming), and Vert.x (a lightweight, high-performance application platform based on the JVM).
The first device 11 obtains the task to be processed through a distributed real-time message bus, and forwards the task to be processed to a language runtime container.
Communication between the components is effected via the distributed real-time message bus. The language runtime container may be implemented based on an Actor of the Akka micro-service software framework.
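The Actor abstraction that underlies the language runtime container can be sketched as a mailbox plus a sequential message handler. This is a minimal plain-Python model, not Akka's actual `Actor` API:

```python
from queue import Queue

class LanguageRuntimeActor:
    """Minimal actor sketch: a mailbox and a one-at-a-time handler.

    Because messages are processed strictly sequentially, the handler
    needs no locks -- this is the property that shields the programmer
    from low-level thread and communication details.
    """
    def __init__(self):
        self.mailbox = Queue()
        self.log = []

    def tell(self, message):
        self.mailbox.put(message)      # asynchronous, fire-and-forget send

    def process_one(self):
        message = self.mailbox.get()   # dequeue the next pending message
        self.log.append(f"executed:{message}")
```

In Akka itself, a dispatcher drives the mailbox-draining loop across a thread pool; here `process_one` is called explicitly for clarity.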
Preferably, the first device 11 obtains the task to be processed sent by the user equipment through a distributed real-time message bus.
Here, the user equipment includes, but is not limited to: a smartphone or tablet computer with a corresponding client installed, or a computer with an online banking page open. For example, referring to fig. 2, the user equipment may acquire a Web page resource from the Web server and establish a connection with the distributed real-time message bus. The task to be processed sent by the user equipment is then acquired through communication over the distributed real-time message bus.
Preferably, the task to be processed comprises a remote service invocation request; the first device 11 obtains a remote service call request through a distributed real-time message bus, converts the remote service call request into a function call request, and forwards the function call request to a language runtime container.
For example, referring to fig. 2, the user equipment initiates the remote service invocation request through the distributed real-time message bus, and after receiving the remote service invocation request, the service agent converts the remote service invocation request into a function invocation request, and then forwards the function invocation request to the language runtime container.
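The service agent's conversion step can be sketched as follows. The patent does not specify a wire format, so the field names below are purely illustrative assumptions:

```python
def to_function_call(remote_request):
    """Convert a remote service call request into a function call request.

    Illustrative mapping only: the agent resolves the requested service
    name to a function name and forwards the arguments, together with a
    reply address so the result can be routed back to the caller.
    """
    return {
        "function": remote_request["service"],    # service name -> function name
        "args": remote_request.get("params", []),
        "reply_to": remote_request["client_id"],  # where the result is returned
    }
```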
The second device 12 executes the task to be processed by using the language runtime container, wherein in the execution process, when parallel processing of asynchronous services occurs, a plurality of asynchronous call requests are respectively initiated to the computation class service component container and the resource class service component container through the distributed real-time message bus.
For example, the language runtime container looks up the function definition and creates a program running instance. The language runtime container is loaded with a distributed parallel processing language interpreter, which interprets the application program code and calls the associated service components in a distributed, parallel manner to cooperatively complete the task to be processed. Preferably, the distributed parallel processing language interpreter discovers parallel processing of asynchronous services by recognizing specific tags in the application code.
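The tag-recognition step can be sketched with `asyncio`: when the interpreter sees the parallel tag on a statement, it fans all service calls out concurrently instead of awaiting them one by one. The tag text and statement format are assumptions — the actual Zebra tag is not disclosed:

```python
import asyncio

ASYNC_TAG = "@parallel"   # illustrative marker; the real Zebra tag is not disclosed

async def call_service(name):
    """Stand-in for an asynchronous service call over the message bus."""
    await asyncio.sleep(0)            # yield control, as real I/O would
    return f"result-from-{name}"

async def interpret(statement):
    """Fan out calls concurrently when the parallel tag is present;
    otherwise issue them sequentially."""
    tag, *services = statement.split()
    if tag == ASYNC_TAG:
        return await asyncio.gather(*(call_service(s) for s in services))
    return [await call_service(s) for s in services]
```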
Preferably, the language runtime container is a Zebra language runtime container, and the distributed parallel processing language interpreter is a Zebra language interpreter.
For example, referring to fig. 2, the Zebra language interpreter executes Zebra script statements in sequence according to the programming logic. During execution, when asynchronous services are to be processed in parallel, a plurality of asynchronous call requests are simultaneously initiated to the compute class component container A, the resource class component container B, and the resource class component container C, and the context of the current program running instance is switched into the set of dormant program instances waiting for micro-service call responses.
Here, the Zebra distributed parallel processing language supports distributed parallel processing and handles the asynchronous-callback problem well. The Zebra language follows the design concept of MDA (Model Driven Architecture): different languages and runtime containers can be used, and in principle the programmer's business logic code can be written once and then run in different environments (including environments independent of the JVM). Once an appropriate service abstraction is in place, optimization and evolution of the lower-layer software architecture has no influence on the business function code.
Preferably, executing the task to be processed by using the language runtime container includes: parsing a syntax tree from the task to be processed by using the language runtime container; and executing the syntax tree.
For example, the Zebra language runtime container looks up function definitions, parses the task to be processed into a Zebra syntax tree structure, and creates a program runtime instance.
Preferably, the language runtime container is loaded with a distributed parallel processing language interpreter; wherein executing the syntax tree comprises: and dynamically constructing, executing and cleaning a statement execution tree by using the distributed parallel processing language interpreter in the process of executing the syntax tree.
Here, following the language-implementation taxonomy of Terence Parr, the creator of ANTLR, the distributed parallel processing language interpreter of this embodiment may be called a tree-based interpreter: based on a complete syntax tree, it dynamically constructs and cleans up a statement execution tree during interpretation and execution, thereby effectively saving memory resources. The statement execution tree can be viewed as a running instance of the parse-tree object.
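A tree-based interpreter of this kind can be sketched as a recursive walk over the syntax tree, where per-statement execution state lives only for the duration of the walk over that subtree. The node types (`lit`, `add`, `seq`) are illustrative, not Zebra's actual grammar:

```python
class Node:
    """Syntax-tree node: an operator with child nodes, or a literal leaf."""
    def __init__(self, op, children=None, value=None):
        self.op = op
        self.children = children or []
        self.value = value

def execute(node):
    """Walk the syntax tree; each statement's execution state is built on
    entry and discarded on exit, so only the active path holds memory."""
    if node.op == "lit":
        return node.value
    results = [execute(child) for child in node.children]
    if node.op == "add":
        return sum(results)
    if node.op == "seq":
        return results[-1]     # a statement sequence yields its last value
    raise ValueError(f"unknown op {node.op}")
```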
The third device 13 utilizes the language runtime container to receive and process the processing result of the computing service component container and the resource service component container for the asynchronous call request until the execution of the task to be processed is completed.
For example, the language runtime container receives the returned results of the computation class service component container and the resource class service component container through the distributed message bus and processes them one by one. When each result message is processed, the associated program running instance is switched back into the runtime container (as shown at 7 in fig. 2), an asynchronous-call state is marked at the corresponding syntax tree node, and the instance tries to continue execution; if execution cannot yet proceed, the instance is switched back to the dormant waiting state. In the embodiment shown in fig. 2, only after all three result messages (6-A), (6-B), and (6-C) have been received can the program running instance continue to execute without further suspension, complete all of the function call logic, and release its memory resources; the function execution result is then returned by the Zebra runtime container to the service agent.
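The sleep/resume cycle described above can be sketched as follows: a suspended instance records each arriving result against its pending call, returns to sleep while any result is still outstanding, and completes (freeing its resources) only once all results are in. The class and state names are illustrative:

```python
class ProgramInstance:
    """Sketch of a dormant program running instance awaiting async results.

    Mirrors the handling of result messages (6-A)..(6-C) in fig. 2: each
    result marks one pending call complete; the instance finishes only
    when every pending call has a result, otherwise it sleeps again.
    """
    def __init__(self, pending_calls):
        self.results = {call: None for call in pending_calls}
        self.state = "sleeping"

    def on_result(self, call, value):
        self.results[call] = value
        if all(v is not None for v in self.results.values()):
            self.state = "completed"   # all results in: finish and release
        else:
            self.state = "sleeping"    # still waiting: back to sleep
```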
Preferably, the apparatus 1 comprises sixth means (not shown in the figures); and the sixth device returns the processing result of the task to be processed to the user equipment through the distributed real-time message bus.
For example, referring to fig. 2, the user equipment initiates a remote service invocation request through the distributed real-time message bus, and after the processing is completed, the service agent returns a processing result of the remote service invocation request to the user equipment through the distributed real-time message bus.
Preferably, as shown in fig. 5, the apparatus 1 further comprises fourth means 14'; the fourth device 14' receives the asynchronous call request by using the compute class service component container, routes and distributes the asynchronous call request to the compute class component instance for execution, and returns the processing result of the asynchronous call request to the language runtime container.
For example, referring to FIG. 2, compute class component container A receives a service invocation request (4-A), routes it directly to a component instance execution (4-A-1), and returns the result to the Zebra language runtime container (6-A).
Preferably, as shown in fig. 5, the apparatus 1 further comprises fifth means 15'; the fifth device 15' receives the asynchronous call request by using the resource class service component container, puts the asynchronous call request into a cache queue, uniformly schedules the asynchronous call request to be sent to an idle resource class component instance for execution, and returns a processing result of the asynchronous call request to the language runtime container.
For example, the resource class service component container puts the received asynchronous call request into a cache queue. If an idle service component instance to which a task can be assigned is detected, the resource class service component container extracts the asynchronous call request from the cache queue; if no idle service component instance is detected, the resource class service component container can also create a new service component instance to which to assign the task, provided the maximum number of instances is not exceeded.
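This scheduling policy — queue first, prefer an idle instance, create a new instance only while under the cap — can be sketched as follows. The class is an illustrative model, not the actual container implementation:

```python
from collections import deque

class ResourceContainer:
    """Sketch of the resource-class container's scheduling policy."""
    def __init__(self, max_instances):
        self.max_instances = max_instances
        self.idle = []            # instances ready for work
        self.busy = 0             # count of instances currently executing
        self.queue = deque()      # cache queue for pending requests

    def submit(self, request):
        """Queue the request, then try to dispatch it. Returns the
        (instance, request) pair on dispatch, or None if it must wait."""
        self.queue.append(request)
        return self._dispatch()

    def _dispatch(self):
        if not self.queue:
            return None
        if self.idle:                                      # reuse an idle instance
            instance = self.idle.pop()
        elif self.busy + len(self.idle) < self.max_instances:
            instance = f"instance-{self.busy}"             # create a new instance
        else:
            return None                                    # at capacity: stay queued
        self.busy += 1
        return instance, self.queue.popleft()
```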
Preferably, the idle resource class component instance executes the asynchronous call request by at least any one of: the resource class component instance calls an external network service to process the asynchronous call request through asynchronous I/O; the resource class component instance processes the asynchronous call request through a blocking service call database service.
For example, referring to fig. 2, the resource class component container B receives a service call request (4-B), puts it into the pending-task queue, and uniformly schedules it to an idle component instance (4-B-1); the component instance calls an external web service through asynchronous I/O and, after processing, returns the result to the Zebra language runtime container (6-B). The resource class component container C receives a service call request (4-C), puts it into the pending-task queue, and uniformly schedules it to an idle component instance (4-C-1); the component instance processes it through a blocking service call to the database service and, after processing, returns the result to the Zebra language runtime container (6-C).
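The two execution modes — native asynchronous I/O (container B) versus a blocking driver call (container C) — can both be driven from one event loop by pushing the blocking call onto a worker thread. A sketch with `asyncio` (service stubs are illustrative):

```python
import asyncio

async def call_web_service():
    """Container B pattern: a natively asynchronous network call."""
    await asyncio.sleep(0)                 # stands in for non-blocking I/O
    return "web-ok"

def query_database():
    """Container C pattern: a blocking database driver call."""
    return "db-ok"

async def handle_requests():
    # The blocking call runs on a worker thread via to_thread, so the
    # component instance's event loop is never stalled while both
    # requests proceed concurrently.
    web, db = await asyncio.gather(
        call_web_service(),
        asyncio.to_thread(query_database),
    )
    return web, db
```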
According to yet another aspect of the application, there is provided an electronic device comprising at least a processor and a memory unit, the processor being configured to execute the following instructions:
acquiring a task to be processed through a distributed real-time message bus, and forwarding the task to be processed to a language runtime container;
utilizing the language runtime container to execute the task to be processed, wherein in the execution process, when the parallel processing of asynchronous services occurs, a plurality of asynchronous call requests are respectively sent to the computing service component container and the resource service component container through the distributed real-time message bus;
and receiving and processing the processing results of the computing service component container and the resource service component container on the asynchronous call request by using the language runtime container until the execution of the task to be processed is completed.
Compared with the prior art, the present application obtains the task to be processed through a distributed real-time message bus and forwards it to a language runtime container, which executes it. During execution, when parallel processing of asynchronous services occurs, a plurality of asynchronous call requests are respectively initiated to the computing service component container and the resource service component container through the distributed real-time message bus; the language runtime container then receives and processes the processing results of the computing service component container and the resource service component container for the asynchronous call requests until the task to be processed has been executed. The service processing capacity of the system is thereby improved, the processing time of the complex service logic of mobile internet scenarios is reduced, and low-delay response is achieved.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Claims (19)
1. A distributed task processing method, wherein the method comprises:
acquiring a task to be processed through a distributed real-time message bus, and forwarding the task to be processed to a language runtime container;
utilizing the language runtime container to execute the task to be processed, wherein in the execution process, when the parallel processing of asynchronous services occurs, a plurality of asynchronous call requests are respectively sent to a computing service component container and a resource service component container through the distributed real-time message bus, and the execution process of the task to be processed comprises the following steps: searching the function definition and creating a program running instance through the language runtime container, interpreting application program codes through a distributed parallel processing language interpreter loaded in the language runtime container, and calling associated service components in a distributed parallel mode to cooperatively complete the task to be processed;
receiving the asynchronous call request by using the resource type service component container, putting the asynchronous call request into a cache queue, uniformly scheduling and sending the asynchronous call request to an idle resource type component instance for execution, and returning a processing result of the asynchronous call request to the language runtime container, wherein if an idle service component instance to which a task can be assigned is detected, the resource type service component container extracts the asynchronous call request from the cache queue; if no idle service component instance exists, the resource type service component container creates a new service component instance to which to assign the task on the premise of not exceeding the maximum number of instances;
receiving and processing the processing results of the computing service component container and the resource service component container on the asynchronous call request by using the language runtime container until the execution of the task to be processed is completed; wherein the language runtime container receives the returned results of the computation type service component container and the resource type service component container through the distributed real-time message bus and processes the returned results one by one, and when each result message is processed, the associated program running instance is switched back into the language runtime container, an asynchronous call state is marked at the corresponding syntax tree node, and the instance tries to continue execution; if execution cannot continue, the program running instance is switched back to a dormant waiting state.
2. The method of claim 1, wherein the method further comprises:
receiving the asynchronous call request by utilizing the computing class service component container, distributing the asynchronous call request to a computing class component instance for execution in a routing way, and returning a processing result of the asynchronous call request to the language runtime container.
3. The method of claim 1, wherein the idle resource class component instance executes the asynchronous call request by at least any one of:
the resource class component instance calls an external network service to process the asynchronous call request through asynchronous I/O;
the resource class component instance processes the asynchronous call request through a blocking service call database service.
4. The method of any of claims 1-3, wherein the obtaining the pending task over the distributed real-time message bus comprises:
acquiring a task to be processed sent by user equipment through a distributed real-time message bus;
wherein the method further comprises:
and returning the processing result of the task to be processed to the user equipment through the distributed real-time message bus.
5. The method of any of claims 1-3, wherein the pending task comprises a remote service invocation request;
the acquiring a task to be processed through a distributed real-time message bus and forwarding the task to be processed to a language runtime container includes:
the method comprises the steps of obtaining a remote service call request through a distributed real-time message bus, converting the remote service call request into a function call request and forwarding the function call request to a language runtime container.
6. The method of claim 1, wherein executing the pending task with the language runtime container comprises:
analyzing a syntax tree according to the task to be processed by utilizing the language runtime container;
and executing the syntax tree.
7. The method of claim 6, wherein the language runtime container is loaded with a distributed parallel processing language interpreter;
wherein executing the syntax tree comprises:
and dynamically constructing, executing and cleaning a statement execution tree by using the distributed parallel processing language interpreter in the process of executing the syntax tree.
8. The method of claim 7, wherein the language runtime container is a Zebra language runtime container and the distributed parallel processing language interpreter is a Zebra language interpreter.
9. The method of any of claims 1-3, wherein the language runtime container, the compute class service component container, and the resource class service component container are implemented based on an Akka micro-services software framework.
10. A distributed task processing device, wherein the device comprises:
the device comprises a first device, a second device and a language runtime container, wherein the first device is used for acquiring a task to be processed through a distributed real-time message bus and forwarding the task to be processed to the language runtime container;
a second device, configured to execute the to-be-processed task by using the language runtime container, where in an execution process, when parallel processing of asynchronous services occurs, a plurality of asynchronous call requests are respectively initiated to a compute class service component container and a resource class service component container through the distributed real-time message bus, and the execution process of the to-be-processed task includes: searching the function definition and creating a program running instance through the language runtime container, interpreting application program codes through a distributed parallel processing language interpreter loaded in the language runtime container, and calling associated service components in a distributed parallel mode to cooperatively complete the task to be processed;
a fifth device, configured to receive the asynchronous call request by using the resource-based service component container, place the asynchronous call request in a cache queue, uniformly schedule and send the asynchronous call request to an idle resource-based component instance for execution, and return a processing result of the asynchronous call request to the language runtime container, where if an idle service component instance to which a task can be assigned is detected, the resource-based service component container extracts the asynchronous call request from the cache queue; if no idle service component instance exists, the resource-based service component container creates a new service component instance to which to assign the task on the premise of not exceeding the maximum number of instances;
a third device, configured to receive and process a processing result of the computation class service component container and the resource class service component container for the asynchronous call request by using the language runtime container until the execution of the to-be-processed task is completed; wherein the language runtime container receives the returned results of the computation class service component container and the resource class service component container through the distributed real-time message bus and processes the returned results one by one, and when each result message is processed, the associated program running instance is switched back into the language runtime container, an asynchronous call state is marked at the corresponding syntax tree node, and the instance tries to continue execution; if execution cannot continue, the program running instance is switched back to a dormant waiting state.
11. The apparatus of claim 10, wherein the apparatus further comprises:
and the fourth device is used for receiving the asynchronous call request by utilizing the computing class service component container, routing and distributing the asynchronous call request to the computing class component instance for execution, and returning a processing result of the asynchronous call request to the language runtime container.
12. The device of claim 10, wherein the idle resource class component instance executes the asynchronous call request by at least any one of:
the resource class component instance calls an external network service to process the asynchronous call request through asynchronous I/O;
the resource class component instance processes the asynchronous call request through a blocking service call database service.
13. The apparatus of any of claims 10 to 12, wherein the obtaining of the pending task over the distributed real-time message bus comprises:
acquiring a task to be processed sent by user equipment through a distributed real-time message bus;
wherein the apparatus further comprises:
and a sixth device, configured to return a processing result of the to-be-processed task to the user equipment through the distributed real-time message bus.
14. The device of any of claims 10 to 12, wherein the pending task comprises a remote service invocation request;
wherein the first means is for:
the method comprises the steps of obtaining a remote service call request through a distributed real-time message bus, converting the remote service call request into a function call request and forwarding the function call request to a language runtime container.
15. The apparatus of claim 10, wherein performing the pending task with the language runtime container comprises:
analyzing a syntax tree according to the task to be processed by utilizing the language runtime container;
and executing the syntax tree.
16. The apparatus of claim 15, wherein the language runtime container is loaded with a distributed parallel processing language interpreter;
wherein executing the syntax tree comprises:
and dynamically constructing, executing and cleaning a statement execution tree by using the distributed parallel processing language interpreter in the process of executing the syntax tree.
17. The apparatus of claim 16, wherein the language runtime container is a Zebra language runtime container and the distributed parallel processing language interpreter is a Zebra language interpreter.
18. The apparatus of any of claims 10-12, wherein the language runtime container, the compute class service component container, and the resource class service component container are implemented based on an Akka micro-services software framework.
19. An electronic device comprising at least a processor and a memory unit, the processor configured to execute the following instructions:
acquiring a task to be processed through a distributed real-time message bus, and forwarding the task to be processed to a language runtime container;
utilizing the language runtime container to execute the task to be processed, wherein in the execution process, when the parallel processing of asynchronous services occurs, a plurality of asynchronous call requests are respectively sent to a computing service component container and a resource service component container through the distributed real-time message bus, and the execution process of the task to be processed comprises the following steps: searching the function definition and creating a program running instance through the language runtime container, interpreting application program codes through a distributed parallel processing language interpreter loaded in the language runtime container, and calling associated service components in a distributed parallel mode to cooperatively complete the task to be processed;
receiving the asynchronous call request by using the resource type service component container, putting the asynchronous call request into a cache queue, uniformly scheduling and sending the asynchronous call request to an idle resource type component instance for execution, and returning a processing result of the asynchronous call request to the language runtime container, wherein if an idle service component instance to which a task can be assigned is detected, the resource type service component container extracts the asynchronous call request from the cache queue; if no idle service component instance exists, the resource type service component container creates a new service component instance to which to assign the task on the premise of not exceeding the maximum number of instances;
receiving and processing the processing results of the computing service component container and the resource service component container on the asynchronous call request by using the language runtime container until the execution of the task to be processed is completed; wherein the language runtime container receives the returned results of the computation type service component container and the resource type service component container through the distributed real-time message bus and processes the returned results one by one, and when each result message is processed, the associated program running instance is switched back into the language runtime container, an asynchronous call state is marked at the corresponding syntax tree node, and the instance tries to continue execution; if execution cannot continue, the program running instance is switched back to a dormant waiting state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710686163.3A CN107463434B (en) | 2017-08-11 | 2017-08-11 | Distributed task processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710686163.3A CN107463434B (en) | 2017-08-11 | 2017-08-11 | Distributed task processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107463434A CN107463434A (en) | 2017-12-12 |
CN107463434B true CN107463434B (en) | 2021-08-24 |
Family
ID=60548770
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710686163.3A Active CN107463434B (en) | 2017-08-11 | 2017-08-11 | Distributed task processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107463434B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110083455B (en) * | 2019-05-07 | 2022-07-12 | 网易(杭州)网络有限公司 | Graph calculation processing method, graph calculation processing device, graph calculation processing medium and electronic equipment |
CN110489139A (en) * | 2019-07-03 | 2019-11-22 | 平安科技(深圳)有限公司 | A kind of real-time data processing method and its relevant device based on micro services |
CN110443512A (en) * | 2019-08-09 | 2019-11-12 | 北京思维造物信息科技股份有限公司 | A kind of regulation engine and regulation engine implementation method |
CN110827125A (en) * | 2019-11-06 | 2020-02-21 | 兰州领新网络信息科技有限公司 | Periodic commodity transaction management method |
CN111177008A (en) * | 2019-12-31 | 2020-05-19 | 京东数字科技控股有限公司 | Data processing method and device, electronic equipment and computer storage medium |
CN112272231B (en) * | 2020-10-23 | 2022-05-13 | 杭州卷积云科技有限公司 | Edge cloud collaborative service arrangement method for intelligent manufacturing scene |
CN112379992B (en) * | 2020-12-04 | 2024-01-30 | 中国科学院自动化研究所 | Role-based multi-agent task cooperative message transmission and exception handling method |
US20220232069A1 (en) * | 2021-01-18 | 2022-07-21 | Vmware, Inc. | Actor-and-data-grid-based distributed applications |
CN113688602A (en) * | 2021-10-26 | 2021-11-23 | 中电云数智科技有限公司 | Task processing method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1494017A (en) * | 2002-10-07 | 2004-05-05 | International Business Machines Corp. | Holder selector used in global network service structure and its selection method |
CN1783019A (en) * | 2004-12-03 | 2006-06-07 | 微软公司 | Interface infrastructure for creating and interacting with web services |
CN101295261A (en) * | 2008-06-25 | 2008-10-29 | 中国人民解放军国防科学技术大学 | Componentization context processing method facing general computation surroundings |
CN105528290A (en) * | 2015-12-04 | 2016-04-27 | 中国航空综合技术研究所 | Construction method of script-based embedded software simulation and test integrated platform |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7340735B2 (en) * | 2003-10-30 | 2008-03-04 | Sprint Communications Company L.P. | Implementation of distributed and asynchronous processing in COBOL |
WO2007006126A1 (en) * | 2005-04-18 | 2007-01-18 | Research In Motion Limited | Method and system for hosting and executing a component application |
US8745584B2 (en) * | 2007-05-03 | 2014-06-03 | International Business Machines Corporation | Dependency injection by static code generation |
US8533672B2 (en) * | 2008-03-20 | 2013-09-10 | Sap Ag | Extending the functionality of a host programming language |
US8914799B2 (en) * | 2009-06-30 | 2014-12-16 | Oracle America Inc. | High performance implementation of the OpenMP tasking feature |
CN102611642A (en) * | 2012-02-27 | 2012-07-25 | 杭州闪亮科技有限公司 | System for processing nonsynchronous message and method for system to send message and monitor processing task |
CN106657232A (en) * | 2016-09-29 | 2017-05-10 | 山东浪潮商用***有限公司 | Distributed server configuration and service method thereof |
CN106777026B (en) * | 2016-12-08 | 2019-12-20 | 用友网络科技股份有限公司 | Method, device and system for supporting final consistency of micro-service architecture transaction |
Also Published As
Publication number | Publication date |
---|---|
CN107463434A (en) | 2017-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107463434B (en) | Distributed task processing method and device | |
CN107479990B (en) | Distributed software service system | |
US11836533B2 (en) | Automated reconfiguration of real time data stream processing | |
US10447772B2 (en) | Managed function execution for processing data streams in real time | |
CN109284197B (en) | Distributed application platform based on intelligent contract and implementation method | |
McChesney et al. | Defog: fog computing benchmarks | |
US11716264B2 (en) | In situ triggered function as a service within a service mesh | |
CN110083455B (en) | Graph calculation processing method, graph calculation processing device, graph calculation processing medium and electronic equipment | |
CN110908658A (en) | Micro-service and micro-application system, data processing method and device | |
CN110413822B (en) | Offline image structured analysis method, device and system and storage medium | |
US9755923B2 (en) | Predictive cloud provisioning based on human behaviors and heuristics | |
Castro et al. | The server is dead, long live the server: Rise of Serverless Computing, Overview of Current State and Future Trends in Research and Industry | |
CN112383533A (en) | Message format conversion method and device | |
Shu-Qing et al. | The improvement of PaaS platform | |
US20200012545A1 (en) | Event to serverless function workflow instance mapping mechanism | |
Gan et al. | Unveiling the hardware and software implications of microservices in cloud and edge systems | |
Cicconetti et al. | FaaS execution models for edge applications | |
Paraiso et al. | A middleware platform to federate complex event processing | |
CN111338775B (en) | Method and equipment for executing timing task | |
Gazis et al. | Middleware 101: What to know now and for the future | |
Suzumura et al. | StreamWeb: Real-time web monitoring with stream computing | |
Zorrilla et al. | Web browser-based social distributed computing platform applied to image analysis | |
US20110247007A1 (en) | Operators with request-response interfaces for data stream processing applications | |
Nivethitha et al. | Survey on architectural design principles for edge oriented computing systems | |
CN114844957B (en) | Link message conversion method, device, equipment, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP02 | Change in the address of a patent holder | Address after: 250000 Luoyuan Street, Lixia District, Jinan City, Shandong Province; Patentee after: HENGFENG BANK CO.,LTD. Address before: 264001 No. 248, South Street, Zhifu District, Yantai City, Shandong Province; Patentee before: HENGFENG BANK CO.,LTD. |