CN115509754A - Business data processing method and device, electronic equipment and storage medium - Google Patents

Business data processing method and device, electronic equipment and storage medium

Info

Publication number
CN115509754A
Authority
CN
China
Prior art keywords
resource scheduling
target
information
service request
scheduling operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211203627.8A
Other languages
Chinese (zh)
Inventor
冯敏
刘艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN202211203627.8A
Publication of CN115509754A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request


Abstract

The application discloses a business data processing method and device, electronic equipment and a storage medium, relating to the technical field of data processing. In the method, in response to a service request initiated by a target object for a target resource scheduling scenario, target processing information for the service request is acquired, and each target resource scheduling operation associated with the target processing flow included in that information is selected from a preset resource scheduling operation set. The target resource scheduling operations are then executed sequentially among their corresponding participating objects, each participating object controlling its operation to produce an operation response. Each time a target resource scheduling operation is executed, the resource scheduling information corresponding to that operation is obtained and cached. In this way, resource scheduling information no longer needs to be recorded serially, which improves the efficiency of service data processing.

Description

Business data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing service data, an electronic device, and a storage medium.
Background
With the rapid development of information technology, information service platforms are widely used to process a variety of real-world services; processing service data through such a platform helps those services proceed smoothly.
For example, in a resource scheduling scenario, a target object typically initiates a service request carrying its corresponding target identifier to a selected information service platform, and the platform performs the corresponding resource scheduling for the target object based on the received request. Notably, during this process the corresponding resource scheduling information must be recorded to support subsequent operations such as querying that information.
With this service data processing method, the required resource scheduling is performed, but the resource scheduling information is also recorded serially. If the volume of service requests is large within a specific time range, a great deal of time is spent serially recording the corresponding resource scheduling information while scheduling the required resources, which greatly reduces the efficiency of processing the service data.
In short, with this approach, service data processing is inefficient.
Disclosure of Invention
The embodiment of the application provides a business data processing method and device, electronic equipment and a storage medium, which are used for improving the business data processing efficiency.
In a first aspect, an embodiment of the present application provides a method for processing service data, where the method includes:
responding to a service request initiated by a target object aiming at a target resource scheduling scene, and acquiring target processing information aiming at the service request; the target processing information comprises a target processing flow corresponding to the service request;
selecting each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set; wherein each target resource scheduling operation has at least one operation response acting on the service request, and each operation response is characterized by: the resource change corresponding to the corresponding process node in the target processing process;
sequentially executing each target resource scheduling operation among the participation objects corresponding to each target resource scheduling operation so that each participation object controls each target resource scheduling operation to perform operation response respectively and realize service data processing on the service request; wherein, when the target resource scheduling operation is executed once, the following operations are respectively executed:
and acquiring resource scheduling information corresponding to the corresponding target resource scheduling operation, and caching the resource scheduling information.
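The first-aspect steps above can be sketched in code. This is a minimal illustration only, assuming an in-memory operation set and a dict cache; all names (`OPERATION_SET`, the node identifiers, the request fields) are hypothetical and do not come from the application itself.

```python
# Hypothetical preset resource scheduling operation set: each entry maps a
# flow-node identifier to an operation that produces scheduling information.
OPERATION_SET = {
    "draw_funds": lambda req: {"node": "draw_funds", "amount": req["amount"]},
    "down_dial":  lambda req: {"node": "down_dial",  "amount": req["amount"]},
    "deposit":    lambda req: {"node": "deposit",    "amount": req["amount"]},
}

def process_service_request(request, target_flow):
    """Select each target operation associated with the flow, execute them
    sequentially in flow-node order, and cache the resulting scheduling
    information instead of writing it to the database serially."""
    cache = {}
    for node_id in target_flow:                # flow nodes in order
        operation = OPERATION_SET[node_id]     # select from the preset set
        scheduling_info = operation(request)   # operation response
        cache[node_id] = scheduling_info       # cache; do not persist yet
    return cache

cache = process_service_request({"amount": 100}, ["draw_funds", "down_dial"])
```

The key point mirrored here is that each iteration only caches its scheduling information; the database write happens later, outside the per-operation loop.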
In a second aspect, an embodiment of the present application further provides a service data processing apparatus, where the apparatus includes:
the acquisition module is used for responding to a service request initiated by a target object aiming at a target resource scheduling scene and acquiring target processing information aiming at the service request; the target processing information comprises a target processing flow corresponding to the service request;
the selection module is used for selecting each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set; wherein each target resource scheduling operation has at least one operation response acting on the service request, each operation response characterizing: the resource change corresponding to the corresponding process node in the target processing process;
the processing module is used for sequentially executing each target resource scheduling operation among the corresponding participating objects of each target resource scheduling operation so that each participating object controls each target resource scheduling operation to perform operation response respectively and realize service data processing on the service request; wherein, when the target resource scheduling operation is executed once, the following operations are respectively executed:
and acquiring resource scheduling information corresponding to the corresponding target resource scheduling operation, and caching the resource scheduling information.
In a possible embodiment, when obtaining the target processing information for the service request, the obtaining module is specifically configured to:
responding to the identification information of the target object carried by the service request, and acquiring target resource demand information of the target object from the service request;
screening out a target processing flow matched with the target resource demand type from a preset candidate processing flow set based on the target resource demand type of the target resource demand information;
and generating corresponding target processing information aiming at the service request based on the target processing flow.
In a possible embodiment, before responding to a service request initiated by a target object for a target resource scheduling scenario, the obtaining module is further configured to:
aiming at various service requests, the following operations are respectively executed:
acquiring candidate resource demand information of a corresponding object from a service request;
determining each candidate resource scheduling operation corresponding to one service request based on the candidate resource demand type carried by the candidate resource demand information;
and generating corresponding candidate processing flows aiming at a service request based on each candidate resource scheduling operation and the corresponding flow node thereof, and storing the candidate processing flows into a candidate processing flow set.
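The candidate-flow pre-generation described above can be sketched as follows; the function and field names are illustrative assumptions, not part of the application.

```python
# Hypothetical pre-generation of the candidate processing flow set: for each
# kind of service request, derive the candidate operations (one per flow
# node, in node order) and store the flow under its resource demand type.
def build_candidate_flow_set(sample_requests, operations_for_type):
    flow_set = {}
    for req in sample_requests:
        demand_type = req["resource_demand"]["type"]   # from candidate demand info
        flow_set[demand_type] = operations_for_type(demand_type)
    return flow_set

flows = build_candidate_flow_set(
    [{"resource_demand": {"type": "fund_draw"}}],
    lambda t: ["draw_funds", "down_dial", "deposit"] if t == "fund_draw" else [],
)
```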
In a possible embodiment, after each execution of a target resource scheduling operation, the processing module is further configured to:
judging whether the target resource scheduling operation meets a preset resource scheduling termination condition;
if so, ending the target processing flow;
if not, continuing with the target resource scheduling operation of the next flow node in the target processing flow.
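The per-operation termination check can be sketched as a simple loop; the termination condition used here (remaining amount reaches zero) is a made-up example.

```python
def run_flow(flow_nodes, execute_op, termination_condition):
    """Execute one operation per flow node; after each execution, check the
    preset termination condition — end the flow if it holds, otherwise
    continue with the next flow node."""
    executed = []
    for node in flow_nodes:
        info = execute_op(node)
        executed.append(info)
        if termination_condition(info):   # preset termination condition met
            break                         # end the target processing flow
    return executed

# hypothetical condition: stop once the remaining amount reaches zero
done = run_flow([30, 10, 0, 5], lambda amount: amount, lambda info: info == 0)
```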
In a possible embodiment, when caching the resource scheduling information, the processing module is specifically configured to:
acquiring node identification of a flow node corresponding to the corresponding target scheduling operation;
and caching the resource scheduling information to a cache database set by the corresponding node identifier.
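One way to read "a cache database set by the corresponding node identifier" is a cache keyed per flow node. The sketch below assumes an in-memory store with a `"sched:"` key prefix; both are illustrative stand-ins, not the application's actual cache database.

```python
class NodeCache:
    """Cache scheduling information under the node identifier of the flow
    node that produced it (in-memory stand-in for a per-node cache DB)."""

    def __init__(self):
        self._store = {}

    def put(self, node_id, scheduling_info):
        self._store[f"sched:{node_id}"] = scheduling_info

    def get(self, node_id):
        return self._store.get(f"sched:{node_id}")

cache = NodeCache()
cache.put("node_01", {"amount": 50})
```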
In a possible embodiment, the processing module is further configured to:
when the target resource scheduling operation is determined to be completed, updating the set original resource scheduling information database based on the resource scheduling information cached by the cache database of each process node;
alternatively,
and updating the original resource scheduling information database based on the resource scheduling information cached by the respective cache database of each process node when the fact that the respective corresponding resource scheduling information of each target resource scheduling operation is cached successfully is determined.
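The second trigger condition above (update only once every node's information is cached successfully) can be sketched as a guarded batch write; `flush_to_database` and its arguments are hypothetical names.

```python
def flush_to_database(cache, database, flow_nodes):
    """Update the original resource scheduling information database in a
    single pass, but only after every flow node has cached its scheduling
    information successfully."""
    if not all(node in cache for node in flow_nodes):
        return False                                   # not all cached yet
    database.update({n: cache[n] for n in flow_nodes})  # one batch write
    return True

db = {}
flushed = flush_to_database({"a": 1, "b": 2}, db, ["a", "b"])
```

The design point is that the database sees one update per flow rather than one per operation, which is what removes the serial recording from the scheduling path.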
In a third aspect, an electronic device is proposed, which includes a processor and a memory, wherein the memory stores program codes, and when the program codes are executed by the processor, the processor is caused to execute the steps of the business data processing method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is proposed, which includes program code for causing an electronic device to perform the steps of the business data processing method of the first aspect when the program code runs on the electronic device.
In a fifth aspect, a computer program product is provided, which, when invoked by a computer, causes the computer to perform the method steps of the business data processing method according to the first aspect.
The beneficial effect of this application is as follows:
in the service data processing method provided by the embodiment of the application, target processing information aiming at a service request is obtained in response to the service request initiated by a target object aiming at a target resource scheduling scene; the target processing information comprises a target processing flow corresponding to the service request; then, selecting each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set; wherein each target resource scheduling operation has at least one operation response acting on the service request, and each operation response is characterized by: the resource corresponding to the corresponding process node in the target processing process is changed; finally, sequentially executing each target resource scheduling operation among the corresponding participating objects of each target resource scheduling operation, so that each participating object controls each target resource scheduling operation to perform operation response respectively, and service data processing of the service request is realized; and when the target resource scheduling operation is executed once, resource scheduling information corresponding to the corresponding target resource scheduling operation is obtained and cached.
In this manner, the target resource scheduling operations are executed sequentially among their corresponding participating objects, so that each participating object controls its operation to produce an operation response, thereby processing the service data of the service request. Because the resource scheduling information corresponding to each target resource scheduling operation is cached each time an operation is executed, this avoids the prior-art defect in which a large volume of service requests within a specific time range requires a large amount of time to serially record the corresponding resource scheduling information during resource scheduling, and thus improves the efficiency of service data processing.
Furthermore, other features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. In the drawings:
fig. 1 schematically illustrates a specific application scenario of service data processing provided by an embodiment of the present application;
FIG. 2 illustrates an alternative schematic diagram of a system architecture to which embodiments of the present application are applicable;
fig. 3 is a schematic implementation flow diagram schematically illustrating a business data processing method provided by an embodiment of the present application;
FIG. 4 is a logic diagram illustrating an example of generating target process information according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a logic for updating a set original resource scheduling information database according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a specific application scenario of another service data processing provided by an embodiment of the present application;
FIG. 7 is a logic diagram based on FIG. 3 and provided by an embodiment of the present application;
fig. 8 schematically illustrates a structural diagram of a service data processing apparatus according to an embodiment of the present application;
fig. 9 schematically illustrates a structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the technical solutions of the present application. All other embodiments obtained by a person skilled in the art without any inventive step based on the embodiments described in the present application are within the scope of the protection of the present application.
It should be noted that, in the description of the present application, "a plurality" is understood as "at least two". "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. "A is connected with B" may mean: A and B are directly connected, or A and B are connected through C. In addition, the terms "first", "second", and the like are used for descriptive purposes only and should not be construed as indicating or implying relative importance or order.
In addition, in the technical scheme of the application, the data acquisition, transmission, use and the like all meet the requirements of relevant national laws and regulations.
To facilitate understanding of those skilled in the art, some terms and phrases referred to in the embodiments of the present application will be briefly described and explained below:
(1) Resource: the term refers to the general term of all material resources, financial resources and manpower in a region, and specifically can be divided into two major categories of natural resources and social resources. Wherein, the natural resources can comprise material resources such as sunlight, air, water, forest and the like; society may include human resources, information resources, and various material wealth created through labor.
For example, in the embodiment of the present application, the resource may be represented as a fund that needs to be paid by the target object or the corresponding service, and in some more specific scenarios, the resource may also refer to a schedulable asset such as overseas currency held by the target object in legal and compliant situations, which is not limited in this application.
(2) And (3) resource scheduling operation: the scheduling operation configured on the appointed use for the relatively scarce resource is usually used for meeting the use requirement of the target object on the schedulable resource only aiming at a certain service.
For example, in the embodiment of the present application, the information service platform may regard resource scheduling implemented by each flow node involved in implementing a specified service request as resource scheduling operation; specifically, in the embodiment of the present application, each time the resource scheduling operation is executed, a schedulable resource amount transmission change of an account associated with the corresponding process node is involved.
(3) Serial processing mode: multiple tasks, jobs, or processes are executed one after another in time, which may also be called "serial operation"; a subsequent operation may use the results of the previous operation.
(4) Asynchronous processing mode: problems are handled by asynchronous programs, which improves equipment utilization and therefore, macroscopically, the efficiency of the overall procedure.
(5) list: in the programming language, list is a class in a class library, and can be simply regarded as a bidirectional link serial to manage an object set in a linear row mode; it should be noted that, in this embodiment of the present application, the list may be a node identifier set corresponding to each process node in the target processing flow, and is used to cache each resource scheduling information.
(6) Pool accounts: the fund pool accounts can be main accounts of the fund pool or sub-accounts of the fund pool, wherein the fund pool refers to a liquidity cash management product which realizes the centralization of local foreign currency funds, the allocation under budget and the internal communication in ways of directly allocating or entrusting loan and the like when the funds are used legally and in compliance.
It should be noted that most of the cash management of the fund pool is still completed by the system, which is between various factors such as manpower, space and time, and the fund pool is operated more conveniently and quickly by the fund management system.
It should be noted that the above-mentioned naming manner of the terms is only an example, and the embodiments of the present application do not limit the naming manner of the above-mentioned terms.
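Terms (3) and (4) above are the crux of the application's improvement, and the contrast can be sketched directly; the function names are illustrative, and the `write_db` callback stands in for a real database round-trip.

```python
def record_serial(operations, write_db):
    """Serial mode: persist scheduling information inside each operation,
    one blocking database write per operation."""
    for op in operations:
        write_db(op())

def record_cached(operations, write_db):
    """Cached/asynchronous-style mode used by this application: cache each
    operation's result, then persist once in a single batch write."""
    cache = [op() for op in operations]
    write_db(cache)

serial_writes, cached_writes = [], []
ops = [lambda: 1, lambda: 2, lambda: 3]
record_serial(ops, serial_writes.append)   # three database writes
record_cached(ops, cached_writes.append)   # one database write
```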
Further, based on the above nouns and related term explanations, the following briefly introduces the design ideas of the embodiments of the present application:
in a resource scheduling scenario, a target object may generally initiate a service request carrying a corresponding target identifier to a selected information service platform, so that the information service platform performs corresponding resource scheduling on the target object based on the received service request, and in this process, in order to ensure consistency of resource scheduling and resource change, corresponding resource scheduling information needs to be recorded in addition to the resource change, so as to provide subsequent operations such as query of the resource scheduling information.
However, in the above-mentioned service data processing method, since it is necessary to record the corresponding resource scheduling information in series after each time of resource scheduling, if the data amount of the service request is large in a specific time range, it will also take a lot of time to record the corresponding resource scheduling information in series during the process of scheduling the required resource, thereby greatly reducing the efficiency of processing the service data.
For example, referring to fig. 1, in the resource scheduling scenario of a fund drawing transaction on a real-time cash pool sub-account, multiple pool accounts complete multiple sub-task processes layer by layer, including independent financial processing steps such as fund drawing, fund down-dialing, and fund depositing. Because a large number of accounts must synchronously process financial information and record transaction flows, transaction queuing or even transaction timeouts are likely to occur at concentrated transaction time points or during hot-spot transaction periods, causing a system performance bottleneck and degrading the user experience.
In view of this, in the embodiment of the present application, in order to ensure consistency between resource scheduling and resource changing and improve efficiency of service data processing, a method for processing resource scheduling information is provided by processing in an asynchronous manner and recording resource scheduling information, and the method specifically includes: responding to a service request initiated by a target object aiming at a target resource scheduling scene, and acquiring a corresponding target processing flow from target processing information aiming at the service request; further, selecting each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set; finally, sequentially executing each target resource scheduling operation among the corresponding participating objects of each target resource scheduling operation, so that each participating object controls each target resource scheduling operation to perform operation response respectively, and service data processing of the service request is realized; and when the target resource scheduling operation is executed once, resource scheduling information corresponding to the corresponding target resource scheduling operation is obtained and cached.
Obviously, based on the above manner, the embodiment of the application records the cached resource scheduling information after completing the resource scheduling, so that the frequency of updating the database in the same resource scheduling is reduced, and the sub-process and time of the resource scheduling are shortened, thereby improving the efficiency of processing the service data.
In particular, the following description will briefly describe preferred embodiments of the present application with reference to the drawings of the specification, and it should be understood that the preferred embodiments described herein are only for illustrating and explaining the technical solutions provided by the present application, and are not used for limiting the present application, and features in embodiments and embodiments related to the present application may be combined with each other without conflict.
Fig. 2 is a schematic diagram of a system architecture applicable to the embodiment of the present application, where the system architecture includes: a target terminal (201a, 201b) and a server 202. The target terminals (201a, 201b) and the server 202 can exchange information through a communication network, wherein the communication mode adopted by the communication network can comprise the following steps: wireless communication and wired communication.
Illustratively, the target terminals (201a, 201b) may communicate with the server 202 by accessing the network via cellular mobile communications technology, including, for example, fifth-generation mobile networks (5G) technology.
Alternatively, the target terminals (201a, 201b) may communicate with the server 202 by accessing the network via short-range Wireless communication, including, for example, wireless Fidelity (Wi-Fi) technology.
The number of the devices is not limited in the embodiment of the present application, and as shown in fig. 2, the target terminal (201a, 201b) and the server 202 are only used as an example for description, and the devices and their respective functions are briefly described below.
A target terminal (201a, 201b) is a device that can provide voice and/or data connectivity to a user, comprising: a hand-held terminal device, a vehicle-mounted terminal device, etc. having a wireless connection function.
Illustratively, the target terminals (201a, 201b) include, but are not limited to: the Mobile terminal Device comprises a Mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), a wearable Device, a Virtual Reality (VR) Device, an Augmented Reality (AR) Device, a wireless terminal Device in industrial control, a wireless terminal Device in unmanned driving, a wireless terminal Device in a smart grid, a wireless terminal Device in transportation safety, a wireless terminal Device in a smart city, a wireless terminal Device in a smart home, and the like.
In addition, the target terminals (201a, 201b) may be installed with clients related to business data processing, and the clients may be software, such as Application (APP), browser, short video software, and the like, and may also be web pages, applets, and the like. In the embodiment of the present application, the target terminal (201a, 201b) may enable the client related to the service data processing to send a service request for the target resource scheduling scenario to the server 202.
The server 202 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
It is worth mentioning that, in the embodiment of the present application, the server 202 is configured to respond to a service request initiated by a target object for a target resource scheduling scenario, and obtain target processing information for the service request; the target processing information comprises a target processing flow corresponding to the service request; then, selecting each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set; wherein each target resource scheduling operation has at least one operation response acting on the service request, and each operation response is characterized by: the resource corresponding to the corresponding process node in the target processing process is changed; finally, sequentially executing each target resource scheduling operation among the participation objects corresponding to each target resource scheduling operation, so that each participation object controls each target resource scheduling operation to perform operation response respectively, and the service data processing of the service request is realized; and when the target resource scheduling operation is executed once, resource scheduling information corresponding to the corresponding target resource scheduling operation is obtained and cached. Optionally, in this embodiment of the application, the server 202 may be loaded with an information service platform corresponding to the client, and the information service platform may be used to execute service data processing.
The service data processing method provided by the exemplary embodiment of the present application is described below with reference to the above system architecture and the accompanying drawings, it should be noted that the above system architecture is only shown for the convenience of understanding the spirit and principle of the present application, and the embodiment of the present application is not limited in this respect.
Referring to fig. 3, which is a flowchart illustrating an implementation of a service data processing method according to an embodiment of the present application, an execution subject takes a server as an example, and a specific implementation flow of the method is as follows:
s301: and responding to a service request initiated by the target object aiming at the target resource scheduling scene, and acquiring target processing information aiming at the service request.
Specifically, in step S301, the server may, based on a service request (e.g., a resource scheduling request) initiated by a target object for a target resource scheduling scenario from a corresponding client, create corresponding target processing information related to resource scheduling; the target resource scheduling scenario may be, for example, a lawful scenario, a funds-withdrawal scenario for a cash pool sub-account under compliance conditions, and the like.
It should be noted that the target processing information at least includes a target processing flow corresponding to the service request, and the target processing flow is used for implementing corresponding resource scheduling and related service data processing.
In a possible implementation manner, referring to fig. 4, the server may respond to the identification information of the target object carried in the service request (for example, a certificate number or a client number actively provided by the target object under lawful and compliant conditions), and, after determining the identification information, obtain target resource demand information of the target object from the service request, where the target resource demand information at least includes the type and amount of the demanded resource; then, according to the target resource demand type of the target resource demand information and the correspondence between resource demand types and candidate processing flows, screen out, from a preset candidate processing flow set, a target processing flow matching the target resource demand type; and further, based on the obtained target processing flow, generate corresponding target processing information for the service request.
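The screening step described above can be sketched as follows. This is a purely illustrative Python fragment: ServiceRequest, CANDIDATE_FLOWS, get_target_processing_info, and all flow names are invented for the sketch — the patent prescribes no concrete API.

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    object_id: str        # identification information of the target object (hypothetical field)
    demand_type: str      # target resource demand type
    demand_amount: int    # target resource demand amount

# Preset candidate processing flow set, keyed by resource demand type (names invented).
CANDIDATE_FLOWS = {
    "withdrawal": ["verify_identity", "check_pool_balance", "debit_pool_account"],
    "allocation": ["verify_identity", "reserve_quota", "credit_sub_account"],
}

def get_target_processing_info(request: ServiceRequest) -> dict:
    """Screen out the candidate flow matching the demand type and bind it to the
    requesting object, so identical requests from different objects still yield
    distinct target processing information."""
    flow = CANDIDATE_FLOWS.get(request.demand_type)
    if flow is None:
        raise ValueError(f"no candidate flow for demand type {request.demand_type!r}")
    return {"object_id": request.object_id, "target_flow": flow}
```

Binding the object identifier into the generated record mirrors the observation below that even identical requests from different objects produce different target processing information.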
It should be noted that the target processing information is generated for the target object and the service request initiated by the target object, that is, even if different objects initiate the same service request, the corresponding target processing information is also different, thereby ensuring the security of resource scheduling performed by different objects to a certain extent.
Optionally, in order to ensure that the preset candidate processing flow set can meet the resource scheduling requirements related to various service requests as far as possible, before the server responds to a service request initiated by the target object for the target resource scheduling scenario, corresponding candidate processing flows may be generated in advance for the various service requests, and the obtained candidate processing flows are stored in the preset candidate processing flow set.
Specifically, the server may perform the following operations for various service requests, respectively: acquiring candidate resource demand information of a corresponding object from a service request; then, determining each candidate resource scheduling operation corresponding to one service request based on the candidate resource demand type carried by the candidate resource demand information; further, based on each candidate resource scheduling operation and its corresponding process node, a corresponding candidate processing flow is generated for a service request, and the candidate processing flow is stored in a preset candidate processing flow set.
For example, taking 3 service requests as an example, the candidate processing flows generated by the server for the 3 service requests, and the candidate processing flow set containing them, are shown in table 1:
TABLE 1
Service request | Candidate resource information | Candidate resource demand type | Candidate resource scheduling operation set | Flow node set | Candidate processing flow
bus.req.Type1 | can.res.inf1 | can.req.Type1 | opera.set1 | node.set1 | can.pro.flow1
bus.req.Type2 | can.res.inf2 | can.req.Type2 | opera.set2 | node.set2 | can.pro.flow2
bus.req.Type3 | can.res.inf3 | can.req.Type3 | opera.set3 | node.set3 | can.pro.flow3
Obviously, based on the above table, after receiving the service request bus.req.Type2, the server may obtain the candidate resource information can.res.inf2 of the corresponding object from the service request bus.req.Type2, and then determine the candidate resource scheduling operation set opera.set2 matching the service request bus.req.Type2 based on the candidate resource demand type can.req.Type2 of the candidate resource information can.res.inf2 and the correspondence between candidate resource demand types and candidate resource scheduling operation sets, where the candidate resource scheduling operation set opera.set2 includes each candidate resource scheduling operation corresponding to the service request bus.req.Type2. Further, based on the candidate resource scheduling operation set opera.set2 and the corresponding flow node set node.set2, that is, each candidate resource scheduling operation and its corresponding flow node, the candidate processing flow can.pro.flow2 corresponding to the service request bus.req.Type2 is generated; finally, the obtained candidate processing flow can.pro.flow2 is saved into the preset candidate processing flow set.
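The pre-generation loop, using the table's naming scheme, might be sketched as follows; the function and the derived operation/node labels are hypothetical and only illustrate the pairing of candidate scheduling operations with their flow nodes.

```python
def build_candidate_flow_set(num_request_types: int) -> dict:
    """Pre-generate a candidate processing flow for each service request type,
    pairing every candidate resource scheduling operation with its flow node,
    and store the result in the candidate processing flow set."""
    flow_set = {}
    for i in range(1, num_request_types + 1):
        operations = [f"opera.set{i}.op{j}" for j in (1, 2)]  # candidate scheduling operations
        nodes = [f"node.set{i}.n{j}" for j in (1, 2)]         # corresponding flow nodes
        # The candidate processing flow is the ordered node/operation pairing.
        flow_set[f"bus.req.Type{i}"] = list(zip(nodes, operations))
    return flow_set
```

Storing the set up front lets the server answer a request such as bus.req.Type2 with a simple lookup rather than regenerating the flow on each request.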
S302: and selecting each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set.
Wherein each target resource scheduling operation has at least one operation response acting on the service request, and each operation response characterizes a change in the resource corresponding to the respective flow node in the target processing flow.
Illustratively, in a resource scheduling scenario of a funds-withdrawal transaction for a real-time cash pool sub-account, each operation response characterizes, in the target processing flow, the change in the funds of the pool account associated with the corresponding flow node; for example, the balance is changed from 278,000 yuan to 249,000 yuan, that is, an amount of 29,000 yuan is paid out of the pool account corresponding to the current flow node.
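A minimal sketch of such an operation response, using the figures from the example above (the function name and the insufficient-funds check are illustrative assumptions, not part of the patent):

```python
def withdrawal_response(pool_balance: int, amount: int) -> int:
    """Operation response at one flow node: the resource of the pool account
    (its balance) changes when funds are paid out."""
    if amount > pool_balance:
        raise ValueError("insufficient funds in pool account")
    return pool_balance - amount

# Example from the text: a 29,000 yuan withdrawal from a 278,000 yuan balance.
```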
S303: and sequentially executing each target resource scheduling operation among the corresponding participating objects of each target resource scheduling operation so that each participating object controls each target resource scheduling operation to perform operation response respectively and realize the service data processing of the service request.
Specifically, in step S303, after selecting each target resource scheduling operation associated with the target processing flow, the server may sequentially execute each target resource scheduling operation according to the respective participating objects corresponding to each target resource scheduling operation and the respective operation execution sequence of each target resource scheduling operation, so that each participating object controls each target resource scheduling operation to perform respective operation response, thereby implementing service data processing on the service request.
It should be noted that, each of the above participating objects respectively executes a corresponding target resource scheduling operation, and in this embodiment of the present application, an execution sequence of each target resource scheduling operation may be determined according to a sequence of each process node in the target processing flow.
In particular, each time the server executes a target resource scheduling operation, it needs to acquire the resource scheduling information corresponding to that operation and cache it. Optionally, after each target resource scheduling operation is executed, the server also needs to judge whether the current target resource scheduling operation meets a preset resource scheduling termination condition; if so, the target processing flow is ended; if not, the target resource scheduling operation of the next flow node in the target processing flow is continued.
For example, in this embodiment of the present application, the preset resource scheduling termination condition may be that the current target resource scheduling operation is the target resource scheduling operation executed for the last flow node in the target processing flow.
Obviously, by judging after each target resource scheduling operation whether the preset resource scheduling termination condition is met, the service data processing flow can be stopped in time once it is satisfied, thereby improving system performance to a certain extent.
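Steps S302–S303, together with the per-operation caching and the termination check, can be sketched as follows. All names are hypothetical; each "operation" stands in for a participating object performing its operation response.

```python
def execute_target_flow(flow, cache):
    """Execute each target resource scheduling operation in flow-node order.
    After every operation, cache its resource scheduling information, then
    check the preset termination condition (here: last flow node reached)."""
    for index, (node_id, operation) in enumerate(flow):
        scheduling_info = operation()  # participating object performs its operation response
        cache.setdefault(node_id, []).append(scheduling_info)  # cache scheduling info
        if index == len(flow) - 1:     # preset resource scheduling termination condition
            break                      # end the target processing flow
    return cache
```

The execution order follows the order of the flow nodes, matching the sequencing rule stated above.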
In one possible implementation manner, the server may implement asynchronous caching of the resource scheduling information by obtaining the node identifier of the flow node corresponding to the respective target scheduling operation and caching the resource scheduling information in the cache database set in correspondence with that node identifier, where the node identifier characterizes the category of the corresponding target resource scheduling operation.
It should be noted that, when performing asynchronous caching, a distributed thread caching mechanism may be used.
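One way such asynchronous, node-identifier-keyed caching could look is sketched below with Python's standard thread pool; the patent does not mandate any particular mechanism, and the cache structure and function names are assumptions made for illustration.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# One cache "database" per node identifier; writes go through a thread pool
# so the core transaction is not blocked by detail registration.
caches: dict = {}
_lock = threading.Lock()
_pool = ThreadPoolExecutor(max_workers=4)

def cache_scheduling_info_async(node_id: str, info: dict):
    """Asynchronously cache resource scheduling information in the cache
    database corresponding to the given node identifier."""
    def _write():
        with _lock:
            caches.setdefault(node_id, []).append(info)
    return _pool.submit(_write)  # caller may wait on the future to confirm the cache
```

Waiting on the returned future is one way a caller could confirm that a piece of resource scheduling information has been cached successfully before any later database update.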
Further, referring to fig. 5, when determining that the current time meets any one of the following two conditions, the server may update the set original resource scheduling information database:
case 1: and when the target resource scheduling operation is determined to be completed, updating the original resource scheduling information database based on the resource scheduling information cached by the cache database of each process node.
Case 2: and when the fact that the corresponding resource scheduling information of each target resource scheduling operation is cached successfully is determined, updating the original resource scheduling information database based on the resource scheduling information cached by the cache database of each process node.
For example, when it is determined that either of the above two cases is satisfied, the server may send a corresponding distributed application message (i.e., an instruction to register the resource scheduling information in the original resource scheduling information database) to each cache database, thereby implementing asynchronous task processing of each piece of cached resource scheduling information, that is, cyclically traversing each node identifier (or a list thereof) and recording each piece of resource scheduling information in sequence.
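A compact sketch of this update step under the two cases, again with hypothetical names (the "databases" are plain Python containers standing in for real storage):

```python
def update_original_database(original_db: list, caches: dict,
                             all_ops_done: bool, all_cached_ok: bool) -> bool:
    """Update the original resource scheduling information database from the
    per-node caches once either condition holds: every target resource
    scheduling operation has completed (case 1), or every record has been
    cached successfully (case 2)."""
    if not (all_ops_done or all_cached_ok):
        return False
    for node_id in sorted(caches):            # cycle through the node identifiers
        original_db.extend(caches[node_id])   # record each entry in sequence
        caches[node_id].clear()               # drain the cache database
    return True
```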
Referring to fig. 6, in the resource scheduling scenario of a funds-withdrawal transaction for a real-time cash pool sub-account, based on the above service data processing method, an asynchronous processing flag is set, so that when funds are withdrawn from the account, only the withdrawal update of the account in the database (i.e., the target resource scheduling operation) needs to be performed in real time, while the transaction details of the account (i.e., the resource scheduling information) are temporarily registered in the asynchronous cache and later registered in sequence.
Therefore, in the resource scheduling scenario of a funds-withdrawal transaction for a real-time cash pool sub-account, adopting the service data processing method provided by the embodiment of the present application — that is, separating out the transaction-detail registration part of the financial processing action that does not require real-time serial processing and implementing it asynchronously by exploiting the characteristics of a distributed architecture — genuinely reduces the processing steps and duration of core transactions and the real-time operation frequency of the database, compared with the traditional multi-channel processing mode of registering transaction details in a real-time cash pool. This improves the performance and processing efficiency of the core system and also reduces the risk of transaction timeouts, so that the real-time cash pool service can continuously and efficiently process business at high-frequency transaction time points or during hotspot processing periods.
Further, based on the above steps of the service data processing method, reference is made to fig. 7, which is a logic schematic diagram of a service data processing method provided in the embodiment of the present application: in response to a service request bus.req.Type initiated by a target object for a target resource scheduling scenario, target processing information tar.pro.inf for the service request bus.req.Type is obtained; then, each target resource scheduling operation (such as tar.opera.1, tar.opera.2, and tar.opera.3) associated with the target processing flow tar.pro.flow in the target processing information tar.pro.inf is selected from the preset resource scheduling operation set res.sch.set; further, each target resource scheduling operation (i.e., tar.opera.1, tar.opera.2, and tar.opera.3) is sequentially executed among the respective participating objects corresponding to each target resource scheduling operation, wherein each time a target resource scheduling operation is executed, the resource scheduling information res.sch.inf corresponding to that operation is obtained and cached.
In summary, in the service data processing method provided in the embodiment of the present application, a service request initiated by a target object for a target resource scheduling scenario is responded to, and the corresponding target processing flow is obtained from the target processing information for the service request; each target resource scheduling operation associated with the target processing flow is then selected, and the target resource scheduling operations are sequentially executed among their respective participating objects, so that each participating object controls its target resource scheduling operation to perform its operation response, thereby implementing the service data processing of the service request; each time a target resource scheduling operation is executed, the resource scheduling information corresponding to that operation is obtained and cached.
By adopting this mode, the target resource scheduling operations are sequentially executed among their respective participating objects, so that each participating object controls its target resource scheduling operation to perform its operation response, thereby implementing the service data processing of the service request; and each time a target resource scheduling operation is executed, the resource scheduling information corresponding to that operation is cached. This avoids the technical defect in the prior art that, when the data volume of service requests is large within a specific time range, a large amount of time is needed to serially record the corresponding resource scheduling information in the course of scheduling the demanded resources, and thus improves the efficiency of service data processing.
Further, based on the same technical concept, the embodiment of the present application provides a service data processing apparatus, where the service data processing apparatus is configured to implement the above method flow of the embodiment of the present application. Referring to fig. 8, the service data processing apparatus includes: an obtaining module 801, a selecting module 802, and a processing module 803, wherein:
an obtaining module 801, configured to obtain target processing information for a service request in response to a service request initiated by a target object for a target resource scheduling scenario; the target processing information comprises a target processing flow corresponding to the service request;
a selecting module 802, configured to select each target resource scheduling operation associated with a target processing flow from a preset resource scheduling operation set; wherein each target resource scheduling operation has at least one operation response acting on the service request, each operation response characterizing a change in the resource corresponding to the respective flow node in the target processing flow;
a processing module 803, configured to sequentially execute each target resource scheduling operation among the participating objects corresponding to each target resource scheduling operation, so that each participating object controls each target resource scheduling operation to perform an operation response, thereby implementing service data processing on the service request; wherein, when the target resource scheduling operation is executed once, the following operations are respectively executed:
and acquiring resource scheduling information corresponding to the corresponding target resource scheduling operation, and caching the resource scheduling information.
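The three-module split can be mirrored, for illustration only, by a small class whose methods correspond to the obtaining, selecting, and processing modules (all names hypothetical):

```python
class ServiceDataProcessor:
    """Mirrors the three-module apparatus: obtain target processing information,
    select the associated scheduling operations, then execute them in sequence
    while caching the resource scheduling information per operation."""

    def __init__(self, candidate_flows: dict, operation_set: dict):
        self.candidate_flows = candidate_flows  # obtaining module's lookup table
        self.operation_set = operation_set      # preset resource scheduling operation set

    def process(self, demand_type: str) -> dict:
        flow = self.candidate_flows[demand_type]                  # obtaining module 801
        operations = [self.operation_set[name] for name in flow]  # selecting module 802
        cache = {}
        for name, op in zip(flow, operations):                    # processing module 803
            cache[name] = op()  # execute operation and cache its scheduling info
        return cache
```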
In a possible embodiment, when acquiring target processing information for a service request, the acquiring module 801 is specifically configured to:
responding to the identification information of the target object carried by the service request, and acquiring target resource demand information of the target object from the service request;
screening out a target processing flow matched with the target resource demand type from a preset candidate processing flow set based on the target resource demand type of the target resource demand information;
and generating corresponding target processing information aiming at the service request based on the target processing flow.
In a possible embodiment, before responding to a service request initiated by a target object for a target resource scheduling scenario, the obtaining module 801 is further configured to:
aiming at various service requests, the following operations are respectively executed:
acquiring candidate resource demand information of a corresponding object from a service request;
determining each candidate resource scheduling operation corresponding to one service request based on the candidate resource demand type carried by the candidate resource demand information;
and generating corresponding candidate processing flows aiming at a service request based on each candidate resource scheduling operation and the corresponding flow node thereof, and storing the candidate processing flows into a candidate processing flow set.
In a possible embodiment, after each execution of the target resource scheduling operation, the processing module 803 is further configured to:
judging whether the primary target resource scheduling operation meets a preset resource scheduling termination condition or not;
if yes, ending the target processing flow;
if not, continuing the target resource scheduling operation of the next flow node in the target processing flow.
In a possible embodiment, when caching the resource scheduling information, the processing module 803 is specifically configured to:
acquiring node identification of a flow node corresponding to the corresponding target scheduling operation;
and caching the resource scheduling information to a cache database set by the corresponding node identifier.
In a possible embodiment, the processing module 803 is further configured to:
when it is determined that each target resource scheduling operation has been completed, updating the set original resource scheduling information database based on the resource scheduling information cached in the respective cache database of each flow node;
alternatively,
when it is determined that the resource scheduling information corresponding to each target resource scheduling operation has been cached successfully, updating the original resource scheduling information database based on the resource scheduling information cached in the respective cache database of each flow node.
Based on the same technical concept, the embodiment of the present application further provides an electronic device, and the electronic device can implement the process of the service data processing method provided by the embodiment of the present application. In one embodiment, the electronic device may be a server, a terminal device, or other electronic devices. As shown in fig. 9, the electronic device may include:
at least one processor 901 and a memory 902 connected to the at least one processor 901. In this embodiment, the specific connection medium between the processor 901 and the memory 902 is not limited; fig. 9 illustrates an example in which the processor 901 and the memory 902 are connected through a bus 900. The bus 900 is shown in fig. 9 by a thick line; the manner of connection between other components is merely illustrative and not limiting. The bus 900 may be divided into an address bus, a data bus, a control bus, etc.; it is shown with only one thick line in fig. 9 for ease of illustration, but this does not mean that there is only one bus or one type of bus. Alternatively, the processor 901 may also be referred to as a controller; the name is not limiting.
In the embodiment of the present application, the memory 902 stores instructions executable by the at least one processor 901, and the at least one processor 901 can execute one of the service data processing methods discussed above by executing the instructions stored in the memory 902. The processor 901 may implement the functions of the respective modules in the apparatus shown in fig. 8.
The processor 901 is the control center of the apparatus; it may connect the various parts of the entire control device by using various interfaces and lines, and performs the various functions of the apparatus and processes its data by running or executing the instructions stored in the memory 902 and calling the data stored in the memory 902, thereby monitoring the apparatus as a whole.
In one possible design, processor 901 may include one or more processing units and processor 901 may integrate an application processor that handles primarily the operating system, user interfaces, applications, etc., and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 901. In some embodiments, the processor 901 and the memory 902 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 901 may be a general-purpose processor, such as a CPU, digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, that may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the service data processing method disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
The memory 902, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 902 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 902 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these. The memory 902 of the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 901, the code corresponding to a business data processing method described in the foregoing embodiment may be solidified into a chip, so that the chip can execute the steps of a business data processing method of the embodiment shown in fig. 3 when running. How processor 901 is programmed is well known to those skilled in the art and will not be described in detail herein.
Based on the same inventive concept, the present application further provides a storage medium storing computer instructions, which when executed on a computer, cause the computer to execute a service data processing method as discussed above.
In some possible embodiments, the various aspects of the service data processing method provided by the present application may also be implemented in the form of a program product comprising program code which, when the program product is run on a device, causes the control apparatus to perform the steps of the service data processing method according to the various exemplary embodiments of the present application described above in this specification.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided so as to be embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user computing device, partly on the user device, as a stand-alone software package, partly on the user computing device and partly on a remote computing device, or entirely on the remote computing device or server.
In situations involving remote computing devices, the remote computing devices may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to external computing devices (e.g., through the internet using an internet service provider).
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (15)

1. A method for processing service data is characterized by comprising the following steps:
responding to a service request initiated by a target object aiming at a target resource scheduling scene, and acquiring target processing information aiming at the service request; the target processing information comprises a target processing flow corresponding to the service request;
selecting each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set; wherein each target resource scheduling operation has at least one operation response acting on the service request, and each operation response characterizes: a change in the resource corresponding to the respective flow node in the target processing flow;
sequentially executing each target resource scheduling operation among the corresponding participating objects of each target resource scheduling operation so that each participating object controls each target resource scheduling operation to perform operation response, and service data processing of the service request is realized; wherein, when the target resource scheduling operation is executed once, the following operations are respectively executed:
and acquiring resource scheduling information corresponding to the corresponding target resource scheduling operation, and caching the resource scheduling information.
2. The method of claim 1, wherein the acquiring target processing information for the service request comprises:
in response to the service request carrying identification information of the target object, acquiring target resource demand information of the target object from the service request;
based on the target resource demand type carried in the target resource demand information, selecting, from a preset candidate processing flow set, a target processing flow matching the target resource demand type; and
generating the corresponding target processing information for the service request based on the target processing flow.
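The screening step above amounts to looking up the request's demand type in the preset candidate flow set. In this hypothetical sketch, `CANDIDATE_FLOWS`, the demand types, and the operation names are all invented for illustration:

```python
CANDIDATE_FLOWS = {  # hypothetical preset candidate processing flow set
    "transfer": ["verify_identity", "debit", "credit", "notify"],
    "loan":     ["verify_identity", "risk_check", "disburse"],
}

def select_target_flow(service_request):
    """Screen the candidate flow set by the request's resource demand type
    and build the target processing information (illustrative only)."""
    demand_type = service_request["demand"]["type"]
    flow = CANDIDATE_FLOWS[demand_type]       # screening by demand type
    return {"request_id": service_request["id"], "target_flow": flow}

info = select_target_flow({"id": "req-1", "demand": {"type": "loan"}})
```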
3. The method of claim 2, wherein before the responding to the service request initiated by the target object for the target resource scheduling scenario, the method further comprises:
for each type of service request, respectively performing the following operations:
acquiring candidate resource demand information of the corresponding object from the service request;
determining each candidate resource scheduling operation corresponding to the service request based on the candidate resource demand type carried in the candidate resource demand information; and
generating a corresponding candidate processing flow for the service request based on the candidate resource scheduling operations and their corresponding flow nodes, and storing the candidate processing flow into the candidate processing flow set.
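The pre-generation step above can be sketched as building the candidate flow set once, ahead of any request, pairing each scheduling operation with its flow node; `build_candidate_flow_set` and its inputs are assumed names:

```python
def build_candidate_flow_set(ops_by_demand_type):
    """Pre-generate one candidate processing flow per demand type,
    pairing each scheduling operation with a flow-node index
    (illustrative sketch of the pre-generation step)."""
    flow_set = {}
    for demand_type, ops in ops_by_demand_type.items():
        # each entry is (flow node, candidate resource scheduling operation)
        flow_set[demand_type] = [(node, op) for node, op in enumerate(ops)]
    return flow_set

flows = build_candidate_flow_set({"transfer": ["debit", "credit"]})
```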
4. The method of claim 1, wherein after each execution of a target resource scheduling operation, the method further comprises:
determining whether the current target resource scheduling operation meets a preset resource scheduling termination condition;
if yes, ending the target processing flow; and
if not, continuing with the target resource scheduling operation of the next flow node in the target processing flow.
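The per-operation termination check can be sketched as an early-exit loop; `run_until_terminated` and its callbacks are illustrative assumptions, not part of the claims:

```python
def run_until_terminated(flow_ops, execute_op, is_terminated):
    """Execute flow nodes in order; after each operation, check the preset
    termination condition and stop early if it is met (illustrative only)."""
    results = []
    for op in flow_ops:
        result = execute_op(op)
        results.append(result)
        if is_terminated(result):   # preset resource scheduling termination condition
            break                   # end the target processing flow
    return results

# Doubling each value; terminate once a result reaches 4, skipping later nodes
trace = run_until_terminated([1, 2, 3, 4], lambda x: x * 2, lambda r: r >= 4)
```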
5. The method of any one of claims 1-4, wherein the caching the resource scheduling information comprises:
acquiring the node identifier of the flow node corresponding to the target resource scheduling operation; and
caching the resource scheduling information into a cache database set in correspondence with the node identifier.
6. The method of claim 5, wherein the method further comprises:
when each target resource scheduling operation is determined to be completed, updating a preset original resource scheduling information database based on the resource scheduling information cached in the cache database of each flow node;
or,
when the resource scheduling information corresponding to each target resource scheduling operation is determined to be cached successfully, updating the original resource scheduling information database based on the resource scheduling information cached in the respective cache database of each flow node.
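Both alternatives above end in the same merge of per-node caches into the original database; they differ only in when the merge is triggered (on operation completion, or once every entry is confirmed cached). This write-behind sketch, in which `flush_node_caches` and all data shapes are assumptions, shows that merge step:

```python
def flush_node_caches(node_caches, main_db):
    """Merge each flow node's cached scheduling info into the original
    resource scheduling information database (illustrative sketch of the
    update step shared by both alternatives of claim 6)."""
    for node_id, infos in node_caches.items():
        main_db.setdefault(node_id, []).extend(infos)
    node_caches.clear()   # drain the caches once they are persisted
    return main_db

db = flush_node_caches({"n1": ["a"], "n2": ["b"]}, {"n1": ["old"]})
```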
7. A service data processing apparatus, comprising:
an acquisition module, configured to, in response to a service request initiated by a target object for a target resource scheduling scenario, acquire target processing information for the service request, wherein the target processing information comprises a target processing flow corresponding to the service request;
a selection module, configured to select each target resource scheduling operation associated with the target processing flow from a preset resource scheduling operation set, wherein each target resource scheduling operation has at least one operation response acting on the service request, and each operation response represents a change in the resource corresponding to its flow node in the target processing flow; and
a processing module, configured to sequentially execute the target resource scheduling operations among their corresponding participating objects, so that each participating object controls its target resource scheduling operation to perform an operation response, thereby implementing service data processing of the service request; wherein, each time a target resource scheduling operation is executed, the following is performed:
acquiring resource scheduling information corresponding to the target resource scheduling operation, and caching the resource scheduling information.
8. The apparatus of claim 7, wherein, when acquiring the target processing information for the service request, the acquisition module is specifically configured to:
in response to the service request carrying identification information of the target object, acquire target resource demand information of the target object from the service request;
based on the target resource demand type carried in the target resource demand information, select, from a preset candidate processing flow set, a target processing flow matching the target resource demand type; and
generate the corresponding target processing information for the service request based on the target processing flow.
9. The apparatus of claim 8, wherein before the service request initiated by the target object for the target resource scheduling scenario is responded to, the acquisition module is further configured to:
for each type of service request, respectively perform the following operations:
acquiring candidate resource demand information of the corresponding object from the service request;
determining each candidate resource scheduling operation corresponding to the service request based on the candidate resource demand type carried in the candidate resource demand information; and
generating a corresponding candidate processing flow for the service request based on the candidate resource scheduling operations and their corresponding flow nodes, and storing the candidate processing flow into the candidate processing flow set.
10. The apparatus of claim 7, wherein after each execution of a target resource scheduling operation, the processing module is further configured to:
determine whether the current target resource scheduling operation meets a preset resource scheduling termination condition;
if yes, end the target processing flow; and
if not, continue with the target resource scheduling operation of the next flow node in the target processing flow.
11. The apparatus of any one of claims 7-10, wherein, when caching the resource scheduling information, the processing module is specifically configured to:
acquire the node identifier of the flow node corresponding to the target resource scheduling operation; and
cache the resource scheduling information into a cache database set in correspondence with the node identifier.
12. The apparatus of claim 11, wherein the processing module is further configured to:
when each target resource scheduling operation is determined to be completed, update a preset original resource scheduling information database based on the resource scheduling information cached in the cache database of each flow node;
or,
when the resource scheduling information corresponding to each target resource scheduling operation is determined to be cached successfully, update the original resource scheduling information database based on the resource scheduling information cached in the respective cache database of each flow node.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
15. A computer program product which, when run by a computer, causes the computer to perform the method of any one of claims 1 to 6.
CN202211203627.8A 2022-09-29 2022-09-29 Business data processing method and device, electronic equipment and storage medium Pending CN115509754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211203627.8A CN115509754A (en) 2022-09-29 2022-09-29 Business data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211203627.8A CN115509754A (en) 2022-09-29 2022-09-29 Business data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115509754A true CN115509754A (en) 2022-12-23

Family

ID=84507815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211203627.8A Pending CN115509754A (en) 2022-09-29 2022-09-29 Business data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115509754A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115858134A (en) * 2023-03-02 2023-03-28 苏州浪潮智能科技有限公司 Multi-task resource control method and device for solid state disk
CN116974771A (en) * 2023-09-18 2023-10-31 腾讯科技(深圳)有限公司 Resource scheduling method, related device, electronic equipment and medium
CN116974771B (en) * 2023-09-18 2024-01-05 腾讯科技(深圳)有限公司 Resource scheduling method, related device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN110134516B (en) Financial data processing method, apparatus, device and computer readable storage medium
US10891161B2 (en) Method and device for virtual resource allocation, modeling, and data prediction
KR102254809B1 (en) Distributed computing resources sharing system and computing apparatus thereof providing reward based on block chain
CN115509754A (en) Business data processing method and device, electronic equipment and storage medium
CN110782240A (en) Service data processing method and device, computer equipment and storage medium
CN110704177B (en) Computing task processing method and device, computer equipment and storage medium
US20210390642A1 (en) Digital service management in edge computing elements of content delivery networks
CN110660466A (en) Personal health data chaining method and system of Internet of things by combining block chains
CN115297008B (en) Collaborative training method, device, terminal and storage medium based on intelligent computing network
CN111813529B (en) Data processing method, device, electronic equipment and storage medium
CN107528822B (en) Service execution method and device
CN113205199A (en) Mobile phone bank foreign currency and cash reservation method and device
US20230325895A1 (en) Systems and methods for dynamic interface generation for commerce platform onboarding
CN112449021B (en) Internet resource screening method and device
CN116860470A (en) Data transmission method, device, computer equipment and storage medium
CN110827142A (en) User credit evaluation method, system, server and storage medium
CN115239188A (en) Business handling method and device, electronic equipment and storage medium
CN116095074A (en) Resource allocation method, device, related equipment and storage medium
CN114170004A (en) Scoring decision-making method, device, equipment and storage medium based on multiple events
CN114170007A (en) Orthogonal easy return message assembly method, program product, medium, and electronic device
CN112330304A (en) Contract approval method and device
CN112101915A (en) Financial service management and control data processing method and device
CN115314258B (en) Method and device for detecting weak password, electronic equipment and storage medium
US10269046B1 (en) Networked environment that enables interaction between content requestors and content creators
CN113965900B (en) Method, device, computing equipment and storage medium for dynamically expanding flow resources

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination