CN115640201B - System performance test method for artificial intelligent server - Google Patents


Info

Publication number
CN115640201B
CN115640201B (application CN202211329264.2A)
Authority
CN
China
Prior art keywords
server
time
reasoning
test
tester
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211329264.2A
Other languages
Chinese (zh)
Other versions
CN115640201A (en)
Inventor
董建
徐洋
杨雨泽
鲍薇
张琦
Current Assignee
China Electronics Standardization Institute
Original Assignee
China Electronics Standardization Institute
Priority date
Filing date
Publication date
Application filed by China Electronics Standardization Institute filed Critical China Electronics Standardization Institute
Priority to CN202211329264.2A priority Critical patent/CN115640201B/en
Publication of CN115640201A publication Critical patent/CN115640201A/en
Application granted granted Critical
Publication of CN115640201B publication Critical patent/CN115640201B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The application relates to the technical field of server performance testing, and in particular to a system performance test method for an artificial intelligence server. The method comprises test software consisting of a Tester program and a Stub program, which together form the load generator module used in the test method. The load generator applies load distribution strategies to the data-set interface and the underlying test interface of the system under test, where the strategies include continuous (single) arrival, fixed-period arrival, Poisson-distribution arrival, peak arrival, and offline arrival.

Description

System performance test method for artificial intelligent server
Technical Field
The application relates to the technical field of server performance testing, and in particular to a system performance test method for an artificial intelligence server.
Background
As with conventional servers, artificial intelligence servers provide data and information services when we search, chat, and browse web pages. To meet the demands of the market and of different application scenarios, an artificial intelligence server can support up to N processors, giving it strong parallel computing capability equivalent to tens of personal computers. It is powerful and highly scalable, can provide precise solutions according to the real needs of enterprises, helps enterprises realize their data and resource requirements, and upgrades their status and image.
Artificial intelligence servers are mainly used in fields such as speech recognition, image processing, video imaging, and semantic segmentation, and especially in data-center computing. The data computation they provide is multi-faceted, including intelligent services such as profile analysis, market segmentation, and classification; through targeted analysis, it offers enterprises a precise development direction, so that their development and improvement become more focused.
An artificial intelligence server has the outstanding characteristics of huge computation volume, a wide range of operators, and high energy-consumption requirements. It has been adopted by many enterprises and is now applied in industries such as finance, education, manufacturing, and transportation, replacing part of the human workload. In future development it will be deployed more comprehensively across industries, reducing costs, saving energy, and improving efficiency for enterprises, truly eliminating computing bottlenecks and delivering higher-quality technological benefits.
The performance of a server is typically determined by the combination of hardware, network, the application itself, configuration, and database. Many systems now implement service clustering through the load-balancing policy of middleware; with such load balancing, if the load is not balanced well, certain services can easily become abnormal or hang under heavy impact. Because of its learning and recognition capabilities, an artificial intelligence server generates more operational load than a traditional server, and its service scenarios are diverse, so accurately testing the load of an artificial intelligence server is a problem that must be faced.
Therefore, to solve the above problems, the present application proposes a system performance test method for an artificial intelligence server, which executes different distribution strategies on the load according to different settings and combines them with a comprehensive test-index system to meet the test requirements of different scenarios.
Disclosure of Invention
The application aims to fill a gap in the prior art by providing a system performance test method for an artificial intelligence server that can execute different distribution strategies on the load according to different settings, so as to meet the test requirements of different scenarios.
To achieve the above purpose, the application provides a system performance test method for an artificial intelligence server, comprising test software and test indexes. The test software consists of a Tester program and a Stub program. The Tester is the program operated by the tester; it controls the test process, maintains test data information, and receives the test data sent by the Stub program. The Stub is a program running on the vendor's equipment under test; it executes the actual test program and interfaces with the Tester. The Stub program comprises a Stub general layer and a Stub vendor-adaptation layer. The Stub general layer performs flow control and data management; its code is provided by the testing organization, is compiled into a binary program in C++, runs on the vendor's test equipment, and serves as the startup entry, responsible for calling the vendor adaptation code, monitoring the processing flow, communicating with the Tester, and integrating and transmitting the test result data. The Stub vendor-adaptation layer is provided by the vendor and contains the service code that implements the specific inference or training under test; interfaces and scripts are added to dock with the Stub general layer, and instrumentation ("dotting") functions are added to collect information parameters. The Tester program and the Stub program together constitute the load generator module used in the test method.
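The split between a Stub general layer (flow control, collection of "dotting" timestamps) and a vendor-adaptation layer (service code with instrumentation hooks) can be sketched as follows. This is a minimal illustration in Python rather than the C++ general layer the text describes; all class, method, and event names are assumptions, and the preprocessing/inference bodies are placeholders.

```python
import time

class VendorAdapter:
    """Hypothetical vendor-adaptation layer: the vendor implements the
    service hooks, while the general layer drives the flow and collects
    the 'dotting' timestamps reported after each stage."""

    def __init__(self, reporter):
        self.reporter = reporter  # callback provided by the Stub general layer

    def _dot(self, event):
        # Instrumentation ("dotting"): record a named timestamp for the Tester.
        self.reporter(event, time.monotonic())

    def preprocess(self, sample):
        self._dot("preprocess_start")
        data = sample.lower()          # placeholder for real preprocessing
        self._dot("preprocess_end")
        return data

    def infer(self, data):
        self._dot("infer_start")
        result = len(data)             # placeholder for the real model call
        self._dot("infer_end")
        return result

    def postprocess(self, result):
        self._dot("postprocess_start")
        out = {"label": result}
        self._dot("postprocess_end")
        return out

# The general layer supplies the reporter and drives the three stages.
events = []
adapter = VendorAdapter(lambda name, ts: events.append((name, ts)))
output = adapter.postprocess(adapter.infer(adapter.preprocess("Sample Text")))
print(output)                          # {'label': 11}
```

In the real system the reporter would forward each timestamp to the Tester over the network; here it just appends to a list so the flow is visible.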
The test indexes comprise time, power consumption, actual throughput rate, energy efficiency, elasticity, load-bearing capacity, and the maximum number of video-analysis channels;
the time indexes include:
total inference delay: the total end-to-end delay of a number of consecutive inferences;
end-to-end inference delay: the difference between the time the tester sends a sample and the time the result is received;
sample transmission delay: the difference between the time the tester sends a sample and the time the system under test receives it;
result transmission delay: the difference between the time the system under test sends a result and the time the tester receives it;
task dispatch delay: the difference between the time the system under test receives a sample and the time processing begins;
preprocessing delay: the difference between the start and end times of sample preprocessing on the system under test;
inference delay: the difference between the start and end times of the inference on a given sample by the system under test;
post-processing delay: the difference between the start and end times of sample post-processing on the system under test;
sample processing delay: the difference between the start and end times of sample processing on the system under test; the processing delay is the combination of the preprocessing, inference, and post-processing times;
dispatch-and-processing delay: the difference between the time the sample is completely received by the system under test and the time processing ends;
processing timeout: the maximum allowed interval between the tester sending a sample and receiving the corresponding result;
the power-consumption indexes include:
AI server stand-alone inference average power: the average power of a single AI server over the whole course of an inference run;
AI server data-preprocessing average power: the average power of a single AI server during the data-preprocessing stage of an inference run;
AI server inference peak power: the maximum instantaneous power of all parts of a single AI server under full-load pressure over the whole course of an inference run;
AI server cluster inference average power: the average power of an AI server cluster over the whole course of an inference run;
the actual throughput rate represents the effective computing capability of the artificial intelligence server system for a specific inference operation; improving the effective computing capability achieves the same effect as expanding the hardware system. For vision tests the unit is images/s; for natural-language-processing tests the unit is sentences/s. The index comprises:
AI server system inference actual throughput rate: the number of samples the AI server system completely processes per unit time for a specific task load;
AI server system inference effective computing power: the weighted geometric mean, over a given task set S, of the per-task ratio of the actual throughput rate to the baseline throughput rate;
the energy-efficiency indexes include:
vision task energy-efficiency ratio: the number of image frames processed per second per watt;
natural-language task energy-efficiency ratio: the number of words processed per second per watt;
speech task energy-efficiency ratio: the number of sentences processed per second per watt;
industry task energy-efficiency ratio: computed from the energy-efficiency ratios of the vision and natural-language tasks;
the efficiency is the ratio of the AI server system's completion of the inference task to its cost, with the unit of kilowatt-hours per second, and includes:
AI server inference efficiency: the ratio of the AI server's actual inference accuracy to its inference energy consumption;
the elasticity index, with the unit of percent per megabyte, includes the AI server system inference elasticity;
the load-bearing capacity, with the unit of megabytes per second, includes the AI server or cluster inference load-bearing capacity:
the actual throughput rate of the AI server system under test when running at or above the concurrency pressure threshold;
the maximum number of video-analysis channels, with the channel as its unit, includes the AI server maximum number of video-analysis channels: the maximum number of video streams the AI server system under test can sustain for analysis under a given response-timeout threshold.
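As an illustration of how the time indexes above decompose, and of the weighted-geometric-mean definition of effective computing power, the following sketch computes the delays from hypothetical per-stage timestamps. All timestamp names, task names, ratios, and weights are invented for the example; only the relationships between them come from the index definitions above.

```python
import math

# Hypothetical timestamps (seconds) collected for one sample, named after
# the stages in the index system above.
t = {
    "tester_send": 0.000, "sut_recv": 0.010,   # sample transmission
    "proc_start": 0.015,                        # task dispatch ends here
    "pre_start": 0.015, "pre_end": 0.035,       # preprocessing
    "inf_start": 0.035, "inf_end": 0.095,       # inference
    "post_start": 0.095, "post_end": 0.105,     # post-processing
    "sut_send": 0.105, "tester_recv": 0.115,    # result transmission
}

metrics = {
    "sample_transmission_delay": t["sut_recv"] - t["tester_send"],
    "task_dispatch_delay":       t["proc_start"] - t["sut_recv"],
    "preprocessing_delay":       t["pre_end"] - t["pre_start"],
    "inference_delay":           t["inf_end"] - t["inf_start"],
    "postprocessing_delay":      t["post_end"] - t["post_start"],
    # sample processing delay = preprocessing + inference + post-processing
    "sample_processing_delay":   t["post_end"] - t["pre_start"],
    "result_transmission_delay": t["tester_recv"] - t["sut_send"],
    "end_to_end_delay":          t["tester_recv"] - t["tester_send"],
}

# Effective computing power: weighted geometric mean, over a task set S, of
# the actual/baseline throughput ratios (weights w_i summing to 1).
ratios = {"vision": 1.8, "nlp": 1.2}       # actual / baseline, per task
weights = {"vision": 0.5, "nlp": 0.5}
effective = math.exp(sum(weights[k] * math.log(ratios[k]) for k in ratios))

print(round(metrics["end_to_end_delay"], 3))  # 0.115
print(round(effective, 3))                    # 1.47
```

With equal weights the effective computing power reduces to the plain geometric mean, sqrt(1.8 × 1.2) ≈ 1.47.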
The load generator module applies a load distribution strategy to the data-set interface and the underlying test interface of the system under test. The specific steps include:
S1, continuous or single arrival:
the i-th job arrives immediately after the (i-1)-th job completes; job i is not sent while job i-1 is unfinished and the timeout control threshold has not been reached;
S2, fixed-period arrival:
jobs arrive at a fixed period, one job at a time;
S3, Poisson-distribution arrival:
jobs arrive according to a Poisson distribution;
S4, peak arrival:
on top of the Poisson-distribution arrival mode there are j short periods, each containing a large number of bursty jobs; each period lasts for a certain duration and maintains a certain concurrency level;
S5, offline arrival:
all jobs arrive at once;
the LoadGenerator module is developed in C++ and provides Python, C++, and C interfaces for outer-layer applications to call, supporting multiple application programs.
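The five arrival strategies above can be sketched as arrival-schedule generators. This is an illustrative Python sketch, not the C++ LoadGenerator interface; the function names and parameters are assumptions.

```python
import random

def fixed_period(n, period):
    # S2: one job every fixed period.
    return [i * period for i in range(n)]

def poisson(n, rate, rng):
    # S3: Poisson arrivals, i.e. exponential inter-arrival times at the given rate.
    t, out = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)
        out.append(t)
    return out

def peak(n, rate, bursts, burst_size, rng):
    # S4: Poisson arrivals plus j short burst periods, each injecting a
    # batch of jobs at the same instant to hold a concurrency level.
    base = poisson(n, rate, rng)
    for start in bursts:
        base.extend([start] * burst_size)
    return sorted(base)

def offline(n):
    # S5: all jobs arrive at once.
    return [0.0] * n

# S1 (continuous/single arrival) has no precomputed schedule: job i is
# issued only when job i-1 completes or the timeout threshold is reached,
# so it is driven by completion events rather than by a clock.

rng = random.Random(0)
print(fixed_period(4, 0.5))   # [0.0, 0.5, 1.0, 1.5]
print(offline(3))             # [0.0, 0.0, 0.0]
print(len(peak(5, 10.0, bursts=[0.2], burst_size=3, rng=rng)))  # 8
```

A real generator would feed these arrival times to the dispatch thread; the sketch only shows the shape of each schedule.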
The load distribution strategy is divided into a synchronous mode and an asynchronous mode according to the way jobs arrive.
The synchronous mode comprises the continuous-arrival mode, in which samples are sent serially to the system under test for processing, i.e. the distribution process and the processing process are the same.
The asynchronous mode comprises the fixed-period and Poisson-distribution arrival modes, in which distribution follows a fixed timing requirement; the fixed-period mode distributes a sample every fixed period regardless of whether the previous samples have been processed, so in the asynchronous mode the distribution thread and the processing thread are not the same.
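The synchronous/asynchronous distinction can be illustrated as follows: in the synchronous sketch one thread both distributes a sample and waits for its result, while in the asynchronous sketch a distribution loop emits samples on a fixed period into a queue drained by a separate processing thread. Function names, the stand-in workload, and the timings are invented for the example.

```python
import queue
import threading
import time

def process(sample):
    time.sleep(0.01)           # stand-in for the system under test
    return sample * 2

# Synchronous (continuous-arrival) mode: distribution and processing happen
# in the same thread; sample i+1 is sent only after result i returns.
def run_sync(samples):
    return [process(s) for s in samples]

# Asynchronous (fixed-period) mode: the distribution thread emits a sample
# every `period` seconds regardless of processing progress; a separate
# worker thread drains the queue.
def run_async(samples, period=0.005):
    q, results = queue.Queue(), []

    def worker():
        while True:
            s = q.get()
            if s is None:      # sentinel: distribution finished
                break
            results.append(process(s))

    t = threading.Thread(target=worker)
    t.start()
    for s in samples:          # distribution loop (here: the main thread)
        q.put(s)
        time.sleep(period)
    q.put(None)
    t.join()
    return results

print(run_sync([1, 2, 3]))           # [2, 4, 6]
print(sorted(run_async([1, 2, 3]))) # [2, 4, 6]
```

Because the period (5 ms) is shorter than the processing time (10 ms), the asynchronous queue builds up backlog exactly as the fixed-period mode describes.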
Compared with the prior art, the application remedies the shortcomings of existing performance test methods for artificial intelligence servers: different distribution strategies are executed on the load according to different settings, so as to meet the test requirements of different scenarios.
Drawings
FIG. 1 is a flow chart of the LoadGenerator load generator framework of the present application.
FIG. 2 is a schematic diagram of a LoadGenerator module according to the present application.
FIG. 3 is a flow chart of the synchronous mode of the present application.
Fig. 4 is a schematic diagram of a registration page according to an embodiment of the present application.
FIG. 5 is a schematic diagram of a configuration page according to an embodiment of the present application.
FIG. 6 is a diagram of a test results page according to an embodiment of the present application.
FIG. 7 is a diagram of a test results page indicator according to an embodiment of the present application.
Detailed Description
The application will now be further described with reference to the accompanying drawings.
Referring to figs. 1 to 7, a system performance test method for an artificial intelligence server includes test software composed of a Tester program and a Stub program. The Tester is the program operated by the tester; it controls the test process, maintains test data information, and receives the test data sent by the Stub program. The Stub is a program running on the vendor's equipment under test; it executes the actual test program and interfaces with the Tester. The Stub program comprises a Stub general layer and a Stub vendor-adaptation layer. The Stub general layer performs flow control and data management; its code is provided by the testing organization, is compiled into a binary program in C++, runs on the vendor's test equipment, and serves as the startup entry, responsible for calling the vendor adaptation code, monitoring the processing flow, communicating with the Tester, and integrating and transmitting the test result data. The Stub vendor-adaptation layer is provided by the vendor and contains the service code that implements the specific inference or training under test; interfaces and scripts are added to dock with the Stub general layer, and instrumentation ("dotting") functions are added to collect information parameters. The Tester program and the Stub program together constitute the load generator module used in the test method.
The load generator module applies a load distribution strategy to the data-set interface and the underlying test interface of the system under test. The specific steps include:
S1, continuous or single arrival:
the i-th job arrives immediately after the (i-1)-th job completes; job i is not sent while job i-1 is unfinished and the timeout control threshold has not been reached;
S2, fixed-period arrival:
jobs arrive at a fixed period, one job at a time;
S3, Poisson-distribution arrival:
jobs arrive according to a Poisson distribution;
S4, peak arrival:
on top of the Poisson-distribution arrival mode there are j short periods, each containing a large number of bursty jobs; each period lasts for a certain duration and maintains a certain concurrency level;
S5, offline arrival:
all jobs arrive at once;
the LoadGenerator module is developed in C++ and provides Python, C++, and C interfaces for outer-layer applications to call, supporting multiple application programs.
The load distribution strategy is divided into a synchronous mode and an asynchronous mode according to the way jobs arrive.
The synchronous mode comprises the continuous-arrival mode, in which samples are sent serially to the system under test for processing, i.e. the distribution process and the processing process are the same.
The asynchronous mode comprises the fixed-period and Poisson-distribution arrival modes, in which distribution follows a fixed timing requirement; the fixed-period mode distributes a sample every fixed period regardless of whether the previous samples have been processed, so in the asynchronous mode the distribution thread and the processing thread are not the same.
Software installation:
1. Installing the software:
the software consists of two parts, the Tester program and the Stub program, where the Tester program is the server side and the Stub program is the client side. Currently the software only supports the Linux operating system.
2. The Tester server application program:
as shown in fig. 4, clicking the registration button generates a test ID and generates a configuration file for user registration.
As shown in fig. 5, the user performs the server test through the configuration file, and the test result of the user is displayed on the Tester server after the test is completed.
As shown in fig. 6-7, clicking on the user test ID enters the detailed results page for that user test.
3. Stub client application:
Decompress the Stub application package. ai-standard-stub is the entry program, responsible for flow control, communication, data management, and so on. code is the service-code directory, read-only at run time; the vendor's code must be placed in this directory. ai-standard-stub monitors the code directory, and if the directory is modified, it records the change and reports it to the Tester server. log is the log-file directory, into which the logs of the vendor's service code must be written. The result file directory contains the model and intermediate files; the vendor's service code must store them in this directory, and after the test completes, ai-standard-stub automatically packages the directory and uploads it to the Tester side. work is a temporary storage directory.
Obtain the configuration file: place the config.json file generated in the steps above under the code folder of the Stub client.
Obtain the data set: download the ImageNet2012 data set and convert it to TF format.
Modify TRAIN_DATA_PATH in config_imagenet2012.sh under the config folder, then run the Stub application.
The above is only a preferred embodiment of the present application, intended only to help understand its method and core idea; the scope of the application is not limited to the above example, and all technical solutions falling under the concept of the application belong to its scope. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the application, and such modifications and adaptations are also intended to fall within its scope.
Overall, the application remedies the shortcomings of existing performance test methods for artificial intelligence servers: different distribution strategies are executed on the load according to different settings, so as to meet the test requirements of different scenarios.

Claims (8)

1. A system performance test method for an artificial intelligence server, characterized by comprising test software and test indexes, wherein the test software consists of a Tester program and a Stub program; the Tester is the program operated by the tester, responsible for controlling the test process, maintaining test data information, and receiving the test data sent by the Stub program; the Stub is a program running on the vendor's equipment under test, responsible for executing the actual test program and interfacing with the Tester; the Stub program comprises a Stub general layer and a Stub vendor-adaptation layer, wherein the Stub general layer performs flow control and data management, its code being provided by the testing organization, compiled into a binary program in C++, running on the vendor's test equipment as the startup entry, responsible for calling the vendor adaptation code, monitoring the processing flow, communicating with the Tester, and integrating and transmitting the test result data; the Stub vendor-adaptation layer is provided by the vendor and contains the service code implementing the specific inference or training under test, with interfaces and scripts added to dock with the Stub general layer and instrumentation ("dotting") functions added to collect information parameters; the Tester program and the Stub program together constitute the load generator module used in the test method.
2. The system performance test method for an artificial intelligence server according to claim 1, wherein the test indexes comprise time, power consumption, actual throughput rate, energy efficiency, elasticity, load-bearing capacity, and the maximum number of video-analysis channels;
the time indexes include:
total inference delay: the total end-to-end delay of a number of consecutive inferences;
end-to-end inference delay: the difference between the time the tester sends a sample and the time the result is received;
sample transmission delay: the difference between the time the tester sends a sample and the time the system under test receives it;
result transmission delay: the difference between the time the system under test sends a result and the time the tester receives it;
task dispatch delay: the difference between the time the system under test receives a sample and the time processing begins;
preprocessing delay: the difference between the start and end times of sample preprocessing on the system under test;
inference delay: the difference between the start and end times of the inference on a given sample by the system under test;
post-processing delay: the difference between the start and end times of sample post-processing on the system under test;
sample processing delay: the difference between the start and end times of sample processing on the system under test; the processing delay is the combination of the preprocessing, inference, and post-processing times;
dispatch-and-processing delay: the difference between the time the sample is completely received by the system under test and the time processing ends;
processing timeout: the maximum allowed interval between the tester sending a sample and receiving the corresponding result;
the power-consumption indexes include:
AI server stand-alone inference average power: the average power of a single AI server over the whole course of an inference run;
AI server data-preprocessing average power: the average power of a single AI server during the data-preprocessing stage of an inference run;
AI server inference peak power: the maximum instantaneous power of all parts of a single AI server under full-load pressure over the whole course of an inference run;
AI server cluster inference average power: the average power of an AI server cluster over the whole course of an inference run;
the actual throughput rate represents the effective computing capability of the artificial intelligence server system for a specific inference operation, improving the effective computing capability achieving the same effect as expanding the hardware system; for vision tests the unit is images/s, for natural-language-processing tests the unit is sentences/s, and the index comprises:
AI server system inference actual throughput rate: the number of samples the AI server system completely processes per unit time for a specific task load;
AI server system inference effective computing power: the weighted geometric mean, over a given task set S, of the per-task ratio of the actual throughput rate to the baseline throughput rate;
the energy-efficiency indexes include:
vision task energy-efficiency ratio: the number of image frames processed per second per watt;
natural-language task energy-efficiency ratio: the number of words processed per second per watt;
speech task energy-efficiency ratio: the number of sentences processed per second per watt;
industry task energy-efficiency ratio: computed from the energy-efficiency ratios of the vision and natural-language tasks;
the efficiency is the ratio of the AI server system's completion of the inference task to its cost, with the unit of kilowatt-hours per second, and includes:
AI server inference efficiency: the ratio of the AI server's actual inference accuracy to its inference energy consumption;
the elasticity index, with the unit of percent per megabyte, includes the AI server system inference elasticity;
the load-bearing capacity, with the unit of megabytes per second, includes the AI server or cluster inference load-bearing capacity:
the actual throughput rate of the AI server system under test when running at or above the concurrency pressure threshold.
3. The system performance test method for an artificial intelligence server according to claim 2, wherein the unit of the maximum number of video-analysis channels is the channel, and the index includes the AI server maximum number of video-analysis channels: the maximum number of video streams the AI server system under test can sustain for analysis under a given response-timeout threshold.
4. The system performance test method for an artificial intelligence server according to claim 1, wherein the load generator module applies a load distribution strategy to the data-set interface and the underlying test interface of the system under test, with the specific steps including:
S1, continuous or single arrival:
the i-th job arrives immediately after the (i-1)-th job completes; job i is not sent while job i-1 is unfinished and the timeout control threshold has not been reached;
S2, fixed-period arrival:
jobs arrive at a fixed period, one job at a time;
S3, Poisson-distribution arrival:
jobs arrive according to a Poisson distribution;
S4, peak arrival:
on top of the Poisson-distribution arrival mode there are j short periods, each containing a large number of bursty jobs; each period lasts for a certain duration and maintains a certain concurrency level;
S5, offline arrival:
all jobs arrive at once.
5. The system performance test method for an artificial intelligence server according to claim 2, wherein the LoadGenerator module is developed in C++ and provides Python, C++, and C interfaces for outer-layer applications to call, supporting multiple application programs.
6. The system performance test method for an artificial intelligence server according to claim 4, wherein the load distribution strategy is divided into a synchronous mode and an asynchronous mode according to the way jobs arrive.
7. The system performance test method according to claim 6, wherein the synchronous mode comprises the continuous-arrival mode, in which samples are sent serially to the system under test for processing, i.e. the distribution process and the processing process are the same.
8. The system performance test method according to claim 6, wherein the asynchronous mode comprises the fixed-period and Poisson-distribution arrival modes, in which distribution follows a fixed timing requirement; the fixed-period mode distributes a sample every fixed period regardless of whether the previous samples have been processed, so in the asynchronous mode the distribution thread and the processing thread are not the same.
CN202211329264.2A 2022-10-27 2022-10-27 System performance test method for artificial intelligent server Active CN115640201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211329264.2A CN115640201B (en) 2022-10-27 2022-10-27 System performance test method for artificial intelligent server


Publications (2)

Publication Number Publication Date
CN115640201A CN115640201A (en) 2023-01-24
CN115640201B true CN115640201B (en) 2023-12-08

Family

ID=84946866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211329264.2A Active CN115640201B (en) 2022-10-27 2022-10-27 System performance test method for artificial intelligent server

Country Status (1)

Country Link
CN (1) CN115640201B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101026503A (en) * 2006-02-24 2007-08-29 国际商业机器公司 Unit detection method and apparatus in Web service business procedure
CN105279064A (en) * 2015-09-11 2016-01-27 浪潮电子信息产业股份有限公司 Exchange Server pressure testing method based on Windows platform
CN111949493A (en) * 2020-09-16 2020-11-17 苏州浪潮智能科技有限公司 Inference application-based power consumption testing method and device for edge AI server


Also Published As

Publication number Publication date
CN115640201A (en) 2023-01-24

Similar Documents

Publication Publication Date Title
CN106844198B (en) Distributed dispatching automation test platform and method
US8020044B2 (en) Distributed batch runner
CN109327509A (en) A kind of distributive type Computational frame of the lower coupling of master/slave framework
CN107483297B (en) Active monitoring system and method for quality of service carried on embedded equipment
CN112929187A (en) Network slice management method, device and system
CN115310954B (en) IT service operation maintenance method and system
CN112560522A (en) Automatic contract input method based on robot client
CN111324460B (en) Power monitoring control system and method based on cloud computing platform
CN107463490B (en) Cluster log centralized collection method applied to platform development
CN110895506A (en) Construction method and construction system of test data
CN115640201B (en) System performance test method for artificial intelligent server
CN114124747B (en) Flow pressure measurement system
CN108762932A (en) A kind of cluster task scheduling system and processing method
CN116737560B (en) Intelligent training system based on intelligent guide control
CN116661978B (en) Distributed flow processing method and device and distributed business flow engine
CN109561346A (en) A kind of distributed analytic method and system of video
CN115056234B (en) RPA controller scheduling method and system based on event-driven and infinite state machine
CN116089079A (en) Big data-based computer resource allocation management system and method
CN116208643A (en) Equipment state data acquisition method, industrial Internet of things platform and related devices
KR20230100260A (en) System for managing Robotic Process Automation System and Driving method thereof
CN115687054A (en) Self-adaptive test method and device based on service segmentation and restoration
CN113411382B (en) Real-time data acquisition system and method based on network equipment F5
CN111399971A (en) Network element state analyzing method, device and storage medium
CN114936098B (en) Data transfer method, device, back-end equipment and storage medium
CN117421255B (en) Automatic inspection method, device and equipment for interface and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant