Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments of the present specification. One or more embodiments of the present specification can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit and scope of the embodiments described herein. Therefore, the present specification is not limited to the specific implementations disclosed below.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present specification to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, a first can also be referred to as a second and, similarly, a second can also be referred to as a first without departing from the scope of one or more embodiments of the present specification. The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to a determination", depending on the context.
In one or more embodiments of the present disclosure, a method and an apparatus for processing a payment process, a computing device, and a computer storage medium are provided, and details are individually described in the following embodiments.
First, terms referred to in one or more embodiments of the present specification are explained.
Payment channel: a carrier for the flow of funds, such as a bank debit card channel or a rural credit card channel.
Flow engine: a software system component that can load service components and execute them in sequence according to an arranged order.
External card: an internationally accepted credit card, such as Visa, MasterCard, or JCB.
Asynchronous processing: in contrast to synchronous processing. Traditional synchronous processing executes instructions sequentially in order, while asynchronous processing uses multiple threads. With asynchronous processing, the payment flow engine can return an acknowledgment immediately after the user submits the payment application, but the final result is given by the payment flow engine only after the asynchronous process has finished executing. During this waiting period, the user can be informed of the progress of the process in real time.
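For illustration only, the contrast described above can be sketched as follows; the function and field names are hypothetical and not part of the embodiments. The submission returns immediately, while a worker thread produces the final result afterward:

```python
import threading
import time

def submit_payment():
    """Returns immediately; the final result is filled in asynchronously."""
    result = {"status": "pending", "progress": 0}

    def worker():
        for pct in (25, 50, 75, 100):
            time.sleep(0.01)        # stand-in for real processing work
            result["progress"] = pct
        result["status"] = "success"

    t = threading.Thread(target=worker)
    t.start()
    return result, t                # returned before the worker finishes

r, t = submit_payment()
print(r["status"])                  # usually still "pending" at this point
t.join()
print(r["status"], r["progress"])   # "success 100" after the worker is done
```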
Fig. 1 is a schematic diagram of a system architecture to which a processing method of a payment flow provided in one or more embodiments of the present specification is applied.
The client 10 refers to a user terminal, for example, a mobile device such as a mobile phone, a notebook computer, or a smart watch, or a desktop device such as a desktop computer. The server 20 refers to a network-side server of a website or an application program, and may be a mobile or a stationary server. The server 20 and the client 10 are connected via a network 30. The server 20 is provided with a first process engine 21 and a second process engine 22.
The client 10 generates a payment event and sends a payment request to the server 20. After receiving the payment request from the client 10, the server 20 starts the first process engine 21 to load the payment process to be executed and sequentially execute a plurality of process nodes. In the case where the first process engine 21 determines that the process node to be executed currently is an asynchronous processing flow node, the first process engine 21 interrupts execution of the asynchronous processing flow node and starts the second process engine 22 to execute it. Moreover, the first process engine 21 may obtain an intermediate result generated by the second process engine 22 in the process of executing the asynchronous processing flow node, and determine the execution progress of the asynchronous processing flow node according to the intermediate result. In addition, the first process engine 21 returns an execution progress prompt message to the client 10 according to the execution progress of the asynchronous processing flow node, so that the user knows the processing progress of the current payment flow in real time.
When the first process engine 21 obtains the final result of executing the asynchronous processing process flow node by the second process engine 22, the first process engine 21 continues to execute the process nodes after the asynchronous processing process flow node according to the final result until the payment process is completed.
In one embodiment of the present specification, a method for processing a payment process is disclosed, and referring to fig. 2, the method includes steps 202 to 210:
202. Start a first process engine and load the payment process to be executed.
Wherein the payment process includes a plurality of process nodes, and the plurality of process nodes includes at least one asynchronous processing process node.
In step 202, starting the first process engine is performed by the server receiving the payment request from the client. For example, in a specific usage scenario, a user selects to use an external card channel to pay at a client, and after submitting payment, the client sends a payment request to a server, thereby triggering the start of a first process engine.
Optionally, before loading the payment flow that needs to be executed, the method further includes:
adding configuration information to each process node, where the configuration information includes: whether asynchronous processing is enabled, a polling time interval, an asynchronous waiting timeout time, and a polling result acquisition address.
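As a hedged illustration, the configuration information listed above might be represented as follows; the field and class names are assumptions for the sketch, not identifiers from the embodiments:

```python
from dataclasses import dataclass

@dataclass
class NodeConfig:
    async_enabled: bool        # whether asynchronous processing is enabled
    polling_interval: float    # polling time interval, in seconds
    async_timeout: float       # asynchronous waiting timeout time, in seconds
    result_address: str        # polling result acquisition address

# Hypothetical configuration for an international business risk check node.
risk_check_cfg = NodeConfig(
    async_enabled=True,
    polling_interval=2.0,
    async_timeout=30.0,
    result_address="cache://risk_check_result",
)
print(risk_check_cfg.async_enabled)   # True
```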
204. The first flow engine executes the flow nodes sequentially.
Specifically, before the first process engine executes the current process node, the first process engine parses the configuration information of the process node to be executed, and determines, according to the configuration information, whether the process node to be executed is an asynchronous processing flow node or a non-asynchronous processing flow node.
When the "whether asynchronous processing is enabled" field in the configuration information is "yes", the process node to be executed is an asynchronous processing flow node;
when the "whether asynchronous processing is enabled" field in the configuration information is "no", the process node to be executed is a non-asynchronous processing flow node.
206. In a case where the first process engine determines that the process node to be executed is an asynchronous processing flow node, the first process engine interrupts execution of the asynchronous processing flow node and starts a second process engine to execute it.
Wherein the first process engine comprises: an asynchronous thread pool component for supporting configuration of multithreading processing rules to enable the first process engine to initiate the second process engine to execute the asynchronous processing flow node.
In one embodiment, for example, in an external card channel payment process, the asynchronous processing flow node includes an international business risk check. In a case where the process node to be executed is the international business risk check, the first process engine first parses the configuration information of the node and determines that the international business risk check is an asynchronous processing flow node; the first process engine then interrupts execution of the international business risk check and, through the asynchronous thread pool component, starts the second process engine to execute the international business risk check. In the process of executing the international business risk check, the second process engine accesses a remote server outside the domain to call the international business risk system to perform the check.
It should be noted that, after the second process engine is started, the second process engine does not traverse the process nodes before the asynchronous processing flow node, but directly executes the current asynchronous processing flow node. In addition, during execution by the second process engine, the first process engine merely suspends execution of the process node and does not end the execution process.
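The hand-off described in steps 204 to 206 can be sketched, under assumed names, as follows; a thread pool stands in for the second process engine, and for brevity the first engine here simply waits for the final result rather than polling for intermediate results as described later:

```python
from concurrent.futures import ThreadPoolExecutor

def run_flow(nodes, pool):
    """nodes: list of (name, is_async, fn). Returns a log of how each node ran."""
    log = []
    for name, is_async, fn in nodes:
        if is_async:
            # The first engine suspends this node and hands it to the pool
            # (standing in for the second engine), then waits for the final result.
            future = pool.submit(fn)
            log.append((name, "async", future.result()))
        else:
            log.append((name, "sync", fn()))
    return log

pool = ThreadPoolExecutor(max_workers=2)
nodes = [
    ("check_balance", False, lambda: "ok"),
    ("intl_risk_check", True, lambda: "pass"),   # the asynchronous node
    ("submit_payment", False, lambda: "done"),
]
print(run_flow(nodes, pool))
```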
208. The first process engine acquires an intermediate result generated by the second process engine in the process of executing the asynchronous processing process flow node, and acquires the execution progress of the asynchronous processing process flow node according to the intermediate result.
Specifically, the step 208 of obtaining an intermediate result generated by the second process engine in the process of executing the asynchronous process flow node by the first process engine includes:
2082. The second process engine stores the intermediate result generated in the process of executing the asynchronous processing flow node to a local cache.
In this specification, the local cache may be a cache on the server side.
In step 2082, the second process engine may store the intermediate result in the local cache each time a first time interval threshold is reached. The first time interval threshold may be set to, for example, 1 second.
2084. The first process engine queries the local cache to obtain the intermediate result.
In step 2084, the first process engine may be configured to query the local cache each time a second time interval threshold is reached. The second time interval threshold may be set to, for example, 2 seconds.
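Steps 2082 and 2084 can be illustrated with the following sketch, in which a dictionary stands in for the local cache and the time interval thresholds are scaled down; all names are hypothetical:

```python
import threading
import time

cache = {}                      # stands in for the server-side local cache
T1, T2 = 0.01, 0.02             # scaled-down stand-ins for the 1 s / 2 s thresholds

def second_engine():
    # Writes an intermediate result each time the first interval elapses.
    for pct in (30, 60, 100):
        cache["risk_check"] = pct
        time.sleep(T1)

def first_engine_poll(deadline=2.0):
    # Polls the cache at the second interval until the final result appears.
    seen = []
    start = time.time()
    while time.time() - start < deadline:
        pct = cache.get("risk_check")
        if pct is not None and (not seen or pct != seen[-1]):
            seen.append(pct)
        if pct == 100:
            break
        time.sleep(T2)
    return seen

t = threading.Thread(target=second_engine)
t.start()
progress = first_engine_poll()
t.join()
print(progress[-1])             # the final observed progress, 100
```

Because the poller runs at a longer interval than the writer, it may skip some intermediate values, but the final result remains in the cache until read, which is why a cache rather than a one-shot message suffices here.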
210. The first process engine acquires the final result of the second process engine executing the asynchronous processing flow node, and continues to execute the process nodes after the asynchronous processing flow node according to the final result.
Specifically, the step 210 of acquiring, by the first process engine, the final result of the asynchronous processing flow node executed by the second process engine includes:
2102. The second process engine stores the final result of executing the asynchronous processing flow node to a local cache.
2104. The first process engine queries the local cache to obtain the final result.
Specifically, the asynchronous processing flow node includes the international business risk check. The first process engine determines whether the final result of the asynchronous processing flow node executed by the second process engine is that the check is passed. If the check is passed, continuing to execute the process nodes after the asynchronous processing flow node includes the following steps:
generating interactive data and sending the interactive data to a bank background to finish payment;
saving the payment data of the user to the local; and
the payment process is ended.
If the check is not passed, continuing to execute the process nodes after the asynchronous processing flow node includes the following step: the payment process is ended.
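The two branches above can be summarized in the following sketch; the function and field names are illustrative stand-ins, not the actual interfaces:

```python
def continue_after_risk_check(check_passed, sent_log):
    """Continues the flow according to the final result of the risk check."""
    if check_passed:
        # Assemble interactive data and "send" it to the bank background.
        interactive = {"account": "acct-1", "amount": 100, "channel": "external_card"}
        sent_log.append(interactive)
        saved = True                   # save the user's payment data locally
    else:
        saved = False                  # failed check: just end the flow
    return {"flow": "ended", "payment_saved": saved}

sent = []
print(continue_after_risk_check(True, sent))
print(continue_after_risk_check(False, sent))
print(len(sent))                       # 1: only the passed check sent data
```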
The interactive data is used for interacting with the bank background and includes information such as a payment account number, a payment amount, and a payment channel; the payment data includes information such as the account number of the user and bank card data.
The payment data and the interactive data are different: the payment data is generated with the payment platform of the client as the subject, while the interactive data is generated with the bank background as the subject.
There are a variety of payment platforms for clients, such as the Alipay payment platform. If the user selects the external card payment channel to pay on the Alipay payment platform, the payment data is generated after the user selects the external card payment channel and submits the payment, and the interactive data is assembled and generated by the server after the international business risk check is passed.
In the payment process processing method of the present specification, when the first process engine determines that the process node to be executed currently is an asynchronous processing flow node, the first process engine interrupts execution of the asynchronous processing flow node and starts the second process engine to execute it. The first process engine can acquire the intermediate result and the final result generated in the process of the second process engine executing the asynchronous processing flow node, determine the execution progress of the asynchronous processing flow node according to the intermediate result, and continue to execute the process nodes after the asynchronous processing flow node according to the final result. In this way, the efficiency of the card-use risk check performed by the international risk control system service can be improved, and the use experience of the user can be improved.
An embodiment of the present specification discloses a processing method of a payment process, referring to fig. 3, including the following steps 302 to 314:
302. Start a first process engine and load the payment process to be executed.
For a detailed explanation of step 302, reference may be made to the related contents of step 202 in the above embodiments.
304. The first process engine parses the configuration information of the process node to be executed currently, and determines, according to the configuration information, whether the process node to be executed currently is an asynchronous processing flow node or a non-asynchronous processing flow node. If it is an asynchronous processing flow node, step 306 is executed; if it is a non-asynchronous processing flow node, step 312 is executed.
306. The first process engine interrupts execution of the asynchronous processing flow node and starts the second process engine to execute the asynchronous processing flow node.
For a detailed explanation of step 306, reference may be made to the related contents of step 206 in the above embodiments.
308. The first process engine acquires an intermediate result generated by the second process engine in the process of executing the asynchronous processing process flow node, and acquires the execution progress of the asynchronous processing process flow node according to the intermediate result.
For a detailed explanation of step 308, reference may be made to the related contents of step 208 in the above embodiments.
310. The first process engine obtains the final result of the second process engine executing the asynchronous processing flow node, continues to execute the process nodes after the asynchronous processing flow node according to the final result, and returns to step 304.
For a detailed explanation of step 310, reference may be made to the related contents of step 210 in the above embodiments.
312. The first process engine executes the non-asynchronous process flow node and then returns to execute step 304.
Optionally, the processing method of the payment process in this embodiment further includes:
314. The first process engine returns the execution progress prompt information to the user interface.
The execution progress prompt information is generated according to the execution progress of the asynchronous processing flow node.
In addition, the user interface is a display interface of the client, for example, a display interface of a mobile phone. The client can display the progress prompt in the user interface according to the execution progress prompt information.
The progress prompt may be of various kinds, such as a progress bar showing progress, a number showing a percentage, and so on.
In this step 314, the first process engine may return the execution progress prompting message to the client when reaching the third time interval threshold, or the client may actively query the server when reaching the third time interval threshold, so as to obtain the execution progress prompting message returned by the first process engine. The third time interval threshold may be set, for example, to 3 seconds.
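The kinds of progress prompt mentioned above can be illustrated with a small rendering sketch; the function name and the bar format are assumptions for illustration only:

```python
def render_progress(pct, kind="percent", width=10):
    """Render an execution progress value as a percentage or a text bar."""
    if kind == "percent":
        return f"{pct}%"
    filled = round(width * pct / 100)
    return "[" + "#" * filled + "-" * (width - filled) + "]"

print(render_progress(75))           # 75%
print(render_progress(75, "bar"))    # [########--]
```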
The payment flow processing method disclosed in this embodiment of the specification can improve the efficiency of the card-use risk check performed by the international risk control system service and improve the use experience of the user.
In addition, in the method disclosed in the embodiment of the present specification, the first process engine may return the execution progress prompt information to the user interface according to the execution progress of the asynchronous processing process node, so that the user knows the processing progress of the current payment process in real time, and the anxiety of the user is relieved.
To more fully illustrate the method of the present application, fig. 4 shows a schematic diagram of a payment flow at a client in one embodiment of the present description.
Referring to fig. 4, taking an example that the user selects to use the external card channel for payment, the payment process at the client includes:
402. Select to use the external card channel for payment.
The user's selection instruction for the external card channel may be input as, for example, a trigger instruction, a voice instruction, or a key selection instruction.
In addition, this step 402 may be implemented on a payment platform of the client, such as the Alipay platform.
404. Submit a payment request to the server.
406. If the submission is successful, go to step 410; otherwise go to step 408.
408. Prompt the failure reason.
In this embodiment, there are various reasons why submitting the payment request may fail, for example, a network interruption or the payment platform program freezing.
410. Jump to a waiting page and acquire the execution progress from the server.
After receiving the payment request, the server executes the processing method of the payment flow in the above embodiment. And simultaneously, the client jumps to a waiting page to wait for a payment result returned by the server.
While the client is on the waiting page, the background program of the client queries the execution progress of the server.
In this process, the first process engine of the server returns the execution progress prompt information to the user interface.
The execution progress prompt information is generated according to the execution progress of the asynchronous processing flow node.
In the case of acquiring the execution progress prompt information, the client displays a progress prompt on the user interface according to it. The progress prompt may be of various kinds, such as a progress bar showing progress or a number showing a percentage.
412. Check whether the payment result has been obtained; if so, execute step 414, and if not, return to step 410.
414. Display the payment result.
In step 414, the payment result may be a result of successful payment, for example, in a case that a result of international business risk check executed by the server is passed, the server generates interactive data and sends the interactive data to the bank background to complete payment, and returns a result of successful payment after the payment process is finished; the payment result may be a result of payment failure, for example, in a case that a result of the international business risk check executed at the server side is failed, the payment process is ended, and a result of payment failure is returned.
In order to more fully explain the method of the present application, fig. 5 shows a schematic diagram of a specific application example of the processing method of the payment flow according to an embodiment of the present specification. In a specific example of the user performing the external card payment, the method for processing the payment flow at the server side includes the following steps 500-526:
500. Start.
The server receives the payment request of the client and starts the first process engine.
502. The first flow engine queries the payment data.
The payment data includes information such as the account number of the user, bank card data, and the like.
504. The first process engine prepares a payment channel.
In this embodiment, the payment channel may be an external card payment channel.
506. The first flow engine queries whether a payment channel is available, if so, step 508 is performed, and if not, step 526 is performed.
508. The first flow engine checks the transaction traffic status.
Wherein the transaction service state may be a wait for payment state.
510. The first process engine checks whether the current payment environment of the user is normal; if yes, step 512 is executed, and if not, step 526 is executed.
512. The first process engine checks if the user status is normal, if yes, go to step 514, and if not, go to step 526.
514. The first process engine starts the second process engine to perform the international business risk check.
516. The first flow engine obtains intermediate results and final results.
The first process engine acquires an intermediate result generated by the second process engine in the process of international business risk check, and acquires the execution progress of the international business risk check according to the intermediate result.
And the first process engine acquires a final result generated by the second process engine executing the international business risk check, and continues to execute the process nodes behind the asynchronous processing process nodes according to the final result.
518. The first flow engine checks whether the final result passes, if so, performs step 520, and if not, performs step 526.
520. The first process engine submits the payment.
The first process engine generates interaction data for interacting with the bank.
The interactive data includes: information such as payment account number, payment amount, payment channel and the like.
522. The first process engine returns the execution progress prompt information to the client.
The client can display the progress prompt in the user interface according to the execution progress prompt information. The progress prompt may be of various kinds, such as a progress bar showing progress, a number showing a percentage, and so on.
In this step, the execution progress prompting message may be returned to the client when the first process engine reaches the third time interval threshold, or the client may actively query the server to obtain the execution progress prompting message when the third time interval threshold is reached. The third time interval threshold may be set, for example, to 3 seconds.
524. The first flow engine stores payment data.
Wherein the first process engine saves the payment data to a cache of the server.
526. The payment process is ended.
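As a hedged summary, steps 500 to 526 can be condensed into the following guard-chain sketch; the step names are shortened stand-ins, and the transaction service state check of step 508 is folded into the other pre-checks:

```python
def server_flow(channel_available, env_ok, user_ok, risk_passed):
    """Returns the ordered list of steps the flow passes through."""
    steps = ["start", "query_payment_data", "prepare_channel"]
    pre_checks = [(channel_available, "channel_check"),
                  (env_ok, "env_and_status_checks"),
                  (user_ok, "user_check")]
    for ok, name in pre_checks:
        steps.append(name)
        if not ok:                     # any failed pre-check ends the flow (step 526)
            steps.append("end")
            return steps
    steps.append("intl_risk_check")    # executed by the second engine (step 514)
    if risk_passed:
        steps += ["submit_payment", "return_progress", "store_payment_data"]
    steps.append("end")
    return steps

print(server_flow(True, True, True, True)[-2])   # store_payment_data
```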
In an embodiment of the present specification, a payment flow processing apparatus is further disclosed, and with reference to fig. 6, the payment flow processing apparatus includes:
a payment process loading module 602 configured to start a first process engine and load a payment process to be executed, where the payment process includes a plurality of process nodes, and the plurality of process nodes includes at least one asynchronous processing process node;
a first flow engine processing module 604 configured to cause the first flow engine to sequentially execute the flow nodes;
an asynchronous processing module 606 configured to, when the first process engine determines that the process node to be executed currently is an asynchronous processing process node, cause the first process engine to interrupt execution of the asynchronous processing process node, and start a second process engine to execute the asynchronous processing process node;
an intermediate result obtaining module 608, configured to enable the first process engine to obtain an intermediate result generated by the second process engine in a process of executing an asynchronous processing process flow node, and obtain an execution progress of the asynchronous processing process flow node according to the intermediate result;
a final result obtaining module 610, configured to enable the first process engine to obtain a final result of the second process engine executing the asynchronous processing process flow node, and continue to execute the process nodes after the asynchronous processing process flow node according to the final result.
Wherein the first process engine comprises: an asynchronous thread pool component for supporting configuration of multithreading processing rules to enable the first process engine to initiate the second process engine to execute the asynchronous processing flow node.
Optionally, the plurality of process nodes comprises at least one non-asynchronous process flow node;
the device further comprises: a non-asynchronous processing module 614 configured to cause the first process engine to execute the non-asynchronous process flow node if the first process engine determines that the process node currently needing to be executed is the non-asynchronous process flow node.
Optionally, the apparatus further comprises:
a configuration information adding module configured to add configuration information to each process node, the configuration information including: whether asynchronous processing is enabled, interval of polling time, asynchronous waiting timeout time, and polling result acquisition address.
Optionally, the processing device of the payment process of this embodiment further includes:
the configuration information analysis module is configured to enable the first process engine to analyze the configuration information of the process node to be executed before the first process engine executes the current process node, and determine that the current process node to be executed is an asynchronous processing process node or a non-asynchronous processing process node according to the configuration information.
Optionally, the asynchronous process flow node includes: checking international business risks;
if the final result is that the check is passed, the final result obtaining module continues to execute the flow nodes after the asynchronous processing flow node according to the final result, including:
generating interactive data and sending the interactive data to a bank background to finish payment;
saving the payment data of the user to the local; and
the payment process is ended.
Optionally, when the final result is that the check fails, the final result obtaining module continues to execute the flow node after the asynchronous processing flow node according to the final result, including: the payment process is ended.
Optionally, referring to fig. 6, the intermediate result obtaining module 608 includes:
an intermediate result storage module 6082 configured to cause the second process engine to store an intermediate result generated in the process of executing the asynchronous process flow node to a local cache;
an intermediate result query module 6084 configured to cause the first flow engine to query the local cache for the intermediate result.
Optionally, referring to fig. 6, the final result obtaining module 610 includes:
a final result storage module 6102 configured to cause the second process engine to store the final result of executing the asynchronous process flow node to a local cache;
a final result query module 6104 configured to cause the first flow engine to query a local cache to obtain the final result.
Optionally, the processing device of the payment process of this embodiment further includes:
an execution progress prompt module 612 configured to cause the first process engine to return execution progress prompt information to a user interface; and the execution progress prompt information is generated according to the execution progress of the asynchronous processing flow node.
With the payment process processing apparatus described above, in a case where the first process engine determines that the process node to be executed currently is an asynchronous processing flow node, the first process engine interrupts execution of the asynchronous processing flow node and starts the second process engine to execute it. The first process engine can obtain the intermediate result and the final result generated in the process of the second process engine executing the asynchronous processing flow node, determine the execution progress of the asynchronous processing flow node according to the intermediate result, and continue to execute the process nodes after the asynchronous processing flow node according to the final result, so that the efficiency of the card-use risk check performed by the international risk control system service can be improved and the use experience of the user can be improved.
In addition, the first process engine can return execution progress prompt information to the user interface according to the execution progress of the asynchronous processing process node, so that the user can know the processing progress of the current payment process in real time, and the anxiety of the user is relieved.
The above is an illustrative scheme of a processing apparatus of a payment process according to the present embodiment. It should be noted that the technical solution of the processing apparatus of the payment flow belongs to the same concept as that of the above-mentioned processing method of the payment flow, and details that are not described in detail in the technical solution of the processing apparatus of the payment flow can be referred to the description of the technical solution of the above-mentioned processing method of the payment flow.
An embodiment of the present specification further provides a computing device, including a memory, a processor, and computer instructions stored on the memory and executable on the processor, where the processor executes the instructions to implement the processing method of the payment process.
It should be appreciated that the computing device may also include a network interface that enables the computing device to communicate via one or more networks. Examples of such networks include a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The network interface may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of the computing device, as well as other components not mentioned, may also be connected to each other, for example via a bus. Those skilled in the art may add or replace other components as desired.
The computing device may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., a tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., a smartphone), a wearable computing device (e.g., a smartwatch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. The computing device may also be a mobile or stationary server.
The above is an illustrative scheme of the computing device of the present embodiment. It should be noted that the technical solution of the computing device is based on the same concept as the technical solution of the payment process processing method described above; for details not described in the technical solution of the computing device, reference may be made to the description of the technical solution of the payment process processing method.
An embodiment of the present specification further provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the payment process processing method described above.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of combined acts, but those skilled in the art should understand that the present description is not limited by the described order of acts, as some steps may, in accordance with the present description, be performed in other orders or simultaneously. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments, and that the acts and modules referred to are not necessarily required by this description.
The above is an illustrative scheme of the computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium is based on the same concept as the technical solution of the payment process processing method described above; for details not described in the technical solution of the storage medium, reference may be made to the description of the technical solution of the payment process processing method.
In the foregoing embodiments, the descriptions of the embodiments have respective emphasis, and reference may be made to related descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
The preferred embodiments of the present specification disclosed above are intended only to aid in describing the specification. The alternative embodiments are not exhaustively described, and the description is not limited to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the specification and its practical application, thereby enabling others skilled in the art to understand and make use of the specification. The specification is limited only by the claims and their full scope and equivalents.