CN111949493A - Inference application-based power consumption testing method and device for edge AI server - Google Patents


Info

Publication number
CN111949493A
CN111949493A
Authority
CN
China
Prior art keywords
server
power consumption
inference
reasoning
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010972701.7A
Other languages
Chinese (zh)
Inventor
李磊
王月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010972701.7A priority Critical patent/CN111949493A/en
Publication of CN111949493A publication Critical patent/CN111949493A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3058Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • G06F11/3062Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations where the monitored property is the power consumption
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2205Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2273Test methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/046Forward inferencing; Production systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Power Sources (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a method and a device for testing the power consumption of an edge AI server based on inference applications, comprising the following steps: first, a client server program simulates multi-user task requests from user terminals, sends the multi-user task requests to an inference server in the edge AI server, and applies multi-level load pressure to the inference server through these requests; second, the inference server performs inference computation according to the multi-user task requests and schedules a plurality of processes to run simultaneously; third, a power consumption measuring device collects power consumption values while the inference server performs inference computation. In this way, the load states of an edge AI server in practical applications can be simulated by configuring various task scenarios, the power consumption of the server can be evaluated, and the evaluation results provide a scientific reference for product design and application design.

Description

Inference application-based power consumption testing method and device for edge AI server
Technical Field
The invention relates to the field of server power consumption testing, and in particular to a method and a device for testing the power consumption of an edge AI server based on inference applications.
Background
The existing technical scheme for server power consumption testing is SPECpower, a performance/power benchmark developed by the SPEC organization to evaluate the power consumption of servers running Java-based applications. SPECpower_ssj2008 uses a standard Java JDK to measure whole-server performance and derives the server's workload/energy ratio from the power consumption at 11 different workload levels. It uses the SSJ (server-side Java) workload: the benchmark first runs three times at full load and averages the results to obtain the system's peak performance value; taking this as the reference, the system then runs the workload at target loads of 100%, 90%, 80% ... 10% and 0%, so that system utilization decreases step by step, and the performance results are recorded as ssj_ops. Meanwhile, a power meter connected to the system power supply records the system power in real time; finally, the accumulated performance is divided by the accumulated power to obtain the performance/power ratio.
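The aggregation described above can be sketched in a few lines. This is an illustrative calculation only; the load levels and wattage figures below are made-up numbers, not published SPECpower results.

```python
# Sketch: computing a SPECpower-style overall performance/power ratio from
# per-level measurements. All numbers below are illustrative.

def overall_ssj_ops_per_watt(levels):
    """levels: list of (ssj_ops, avg_watts) pairs, one per target load level
    (100%, 90%, ... 10%, plus active idle). The overall metric is the sum of
    ssj_ops across all levels divided by the sum of average power."""
    total_ops = sum(ops for ops, _ in levels)
    total_watts = sum(watts for _, watts in levels)
    return total_ops / total_watts

# Illustrative data: throughput falls with target load; power falls too,
# but not proportionally, because an idle system still draws power.
measurements = [(300000, 250.0), (150000, 180.0), (0, 90.0)]
ratio = overall_ssj_ops_per_watt(measurements)
```

This is exactly why idle power matters for the final score: the 0% level contributes watts to the denominator without contributing any ssj_ops to the numerator.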
However, SPECpower is designed mainly for CPU-bound loads and suits traditional server products whose computing core is the CPU. An edge AI server, by contrast, is characterized by dedicated computing accelerator cards on which the main computation is concentrated, and it often processes tasks in environments with limited power budgets and relatively high ambient temperatures, so correctly evaluating the performance/power ratio of an edge server is very important, yet the prior art does not apply. Edge computing and AI are both emerging technology directions of recent years, and edge AI computing is the product of their cross-fusion. At present there is no mature power consumption test scheme matched to the application characteristics of edge AI servers, and the results of existing schemes have little reference value.
Disclosure of Invention
The invention mainly solves the technical problem of providing a method and a device for testing the power consumption of an edge AI server based on inference applications, which can simulate the load states of an edge AI server in practical applications and evaluate the power consumption of the server.
In order to solve the technical problems, the invention adopts the following technical scheme: an inference application-based power consumption testing method for an edge AI server, comprising the following steps: S100, a client server program simulates multi-user task requests of user terminals, sends the multi-user task requests to an inference server in the edge AI server, and applies multi-level load pressure to the inference server through the multi-user task requests; S200, the inference server performs inference computation according to the multi-user task requests and schedules a plurality of processes to run simultaneously; S300, a power consumption measuring device collects power consumption values while the inference server performs the inference computation.
Further, when the client server program simulates the multi-user task requests of the user terminals in step S100, data sets exist on the client server, the data sets comprising the inference data as well as the number and types of the multi-user task requests; the client server program is software on a computer device and adopts the Python multiprocessing technique.
Further, the client server is a computer device; the inference data comprises the COCO data set, the SQuAD data set and the wmt data set.
Further, the COCO data set stores image data for target detection; the SQuAD data set stores sound data for reading comprehension; the wmt data set stores text data for machine translation.
Further, when the client program applies multi-level load pressure to the inference server in step S100, it gradually increases the number of tasks until the load of the inference server reaches its maximum, stores the task counts in a plurality of JSON files according to the gradually increased numbers of tasks, generates task requests for a plurality of load scenarios from the JSON files, and drives inference computation on the inference server according to those task requests, thereby applying multi-level load pressure to the inference server.
Further, the inference server is implemented on the basis of containers; it comprises a transceiver, a data collector, a task scheduler, a container image library and a process pool.
Further, the transceiver is responsible for starting and closing the data collector; the transceiver receives the multi-user task requests sent by the client program and passes them to the task scheduler; the task scheduler is mainly responsible for counting and managing the running multi-user task requests.
Further, the container image library stores framework images and model programs, which are used for the inference computation of the application scenarios.
Further, the process pool is mainly responsible for performing inference computation on the multi-user task requests and feeding the results back to the transceiver.
An inference application-based power consumption testing device for an edge AI server comprises: an edge AI server provided with the inference server, the inference server communicating with the client server over a network, and the inference server connected to the power consumption measuring device through a power supply line, with the power interface of the inference server plugged into a socket provided by the power consumption measuring device. The inference server comprises a transceiver, a data collector, a task scheduler, a container image library and a process pool, and performs the inference computation tasks; the client server is an ordinary computer device whose software simulates the multi-user task requests of user terminals; the power consumption measuring device is a power meter that acquires the power of the inference server in real time; the power consumption test of the edge AI server is completed by the device composed of the client server, the inference server in the edge AI server and the power consumption measuring device.
The invention has the beneficial effect that it can evaluate the typical application power consumption of an edge AI server by simulating real edge AI computation requests, providing a very valuable data reference for product development and application development.
Drawings
FIG. 1 is a flowchart of the inference application-based power consumption testing method for an edge AI server according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart of the inference server workflow in the inference application-based power consumption testing method for an edge AI server according to the present invention;
FIG. 3 is a schematic diagram of the inference server modules in the inference application-based power consumption testing method for an edge AI server according to the present invention;
FIG. 4 is an architecture diagram of the inference application-based power consumption testing device for an edge AI server according to the present invention.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the scope of the invention is thereby defined more clearly.
Referring to FIG. 1 and FIG. 3, an embodiment of the present invention includes:
An inference application-based power consumption testing method for an edge AI server, comprising the following steps: first, a client server program simulates multi-user task requests of user terminals, sends the multi-user task requests to an inference server in the edge AI server, and applies multi-level load pressure to the inference server through these requests; second, the inference server performs inference computation according to the multi-user task requests and schedules a plurality of processes to run simultaneously; third, the power consumption measuring device collects the power consumption values while the inference server performs inference computation.
In order to effectively evaluate the power consumption of the edge AI server in common application scenarios, various computing tasks need to be simulated by the inference server and the program running on the client, and the power consumption values under different loads are recorded by the power consumption acquisition device.
It is well known that different scenarios require different models for the AI inference task, and there is likewise a variety of frameworks for running the models, such as TensorFlow, TensorRT, PyTorch and MXNet. To support the various models and frameworks, the inference server adopts a container-based implementation.
Referring to FIG. 2, the workflow of the inference server is described in detail as follows:
In step S01, the transceiver receives an image, voice or text inference task sent by the client and simultaneously starts the data acquisition program. The transceiver receives the client inference task over the HTTP/REST or gRPC protocol; when it detects an inference task it starts the data acquisition program, which begins to monitor and record the whole-machine power consumption and the utilization of the computing accelerator card in real time.
In step S02, the transceiver passes the inference task to the task scheduler. The task scheduler maintains the running state of the process pool, including the number and types of process instances and whether each instance is occupied by a task.
In step S03, the task scheduler determines whether the current process instances meet the inference requirement according to the statistics of the process instances running in the process pool.
In step S04, if the inference requirement is satisfied, the system responds to the client inference task directly; if the inference requirement is not satisfied, the task scheduler requests the system to call resources in the container image library and start a new process instance.
In step S05, when the inference task finishes, the system sends the inference result to the transceiver, notifies the task scheduler to close the unneeded process instances, and keeps the baseline process instances.
In step S06, the transceiver feeds the inference result back to the client and closes the data acquisition program.
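The scheduling logic of steps S03 through S05 can be sketched as follows. This is a minimal single-machine illustration only: the class, the dictionary-based instances and the `keep_min` baseline are assumptions for exposition, whereas a real implementation would start and stop container instances.

```python
# Minimal sketch of the S03-S05 scheduling logic: the task scheduler checks
# the process pool for an idle instance of the right type, "starts" a new
# instance from the container image library when none is available, and
# closes surplus idle instances down to a baseline when tasks finish.

class TaskScheduler:
    def __init__(self, image_library):
        self.image_library = image_library  # task type -> container image
        self.pool = []                      # active process instances

    def acquire(self, task_type):
        # S03: is an idle instance of this type already running?
        for inst in self.pool:
            if inst["type"] == task_type and not inst["busy"]:
                inst["busy"] = True
                return inst
        # S04: otherwise call the image library and start a new instance.
        if task_type not in self.image_library:
            raise ValueError(f"no image for task type {task_type!r}")
        inst = {"type": task_type, "busy": True,
                "image": self.image_library[task_type]}
        self.pool.append(inst)
        return inst

    def release(self, inst, keep_min=1):
        # S05: free the instance; close idle instances beyond the baseline.
        inst["busy"] = False
        idle = [i for i in self.pool
                if i["type"] == inst["type"] and not i["busy"]]
        for surplus in idle[keep_min:]:
            self.pool.remove(surplus)

sched = TaskScheduler({"image": "detector:latest", "text": "translator:latest"})
a = sched.acquire("image")   # no instance yet -> one is started
b = sched.acquire("image")   # first instance busy -> a second is started
sched.release(a)             # within baseline, kept alive
sched.release(b)             # surplus idle instance closed
```

Keeping a baseline of warm instances is what lets the server respond immediately to the next request while still shedding power draw when load lightens.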
hardware level
The client server is an ordinary computer device; software on the computer device simulates user requests in an edge AI computing scenario and sends them to the inference server in the edge AI server.
Referring to FIG. 3, the inference server in the edge AI server is a computer device suited to edge computing scenarios and consists of five logic units: a transceiver, a data collector, a task scheduler, a container image library and a process pool. The transceiver unit receives the data sent by the client server program, such as images, texts and voice files, and passes them, sorted by type, to the task scheduler; in addition, the transceiver enables the data collector as soon as it receives the original inference task. The task scheduler is mainly responsible for counting and managing the running container instances: it maintains a basic process pool to respond to different inference tasks, and when task requests are dense or the existing process instances do not meet the requirement, the system calls resources in the container image library to start new process instances; when the requested load becomes lighter, the task scheduler requests that redundant instances be closed to reduce system overhead and power consumption. Different framework images and model programs are stored in the container image library and are used for the computation of different application scenarios. The process pool contains a number of active container instances, receives and runs the inference tasks in real time and feeds the results back to the transceiver, which closes the data collector after feeding the results back to the client.
The inference server is connected to the power consumption measuring device through a power supply line, and the measuring device acquires the power of the inference server in real time to complete the power consumption test of the edge AI server.
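The patent does not specify how the power meter is read out, so the following collector sketch is an assumption: many bench power meters expose SCPI-style query commands (e.g. a power query issued once per second over a serial or LAN link), and the reply format shown is a typical scientific-notation reading, not one taken from this document.

```python
# Sketch of a power-collection helper, assuming a SCPI-style meter reply.
# The reply format and the per-second polling scheme are assumptions;
# the patent only states that the meter reports power in real time.

def parse_power_reading(scpi_reply):
    """Parse a meter reply such as '+2.37500000E+02' into watts."""
    return float(scpi_reply.strip())

def average_power(samples):
    """Average the per-sample readings collected during one load level."""
    if not samples:
        raise ValueError("no samples collected")
    return sum(samples) / len(samples)

# During a run, the data collector started in step S01 would issue a power
# query about once per second, append parse_power_reading(reply) to a list,
# and report average_power(list) for that load level when S06 stops it.
```

Pairing each averaged reading with the load level that produced it yields exactly the per-level power consumption values that step S300 asks the measuring device to collect.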
Software layer
The role of the client server is to simulate user requests in an edge AI computing scenario. In a real application scenario, the inference tasks of a single edge AI server may come from multiple user terminals; these terminals collect data in various forms and face a wide variety of application scenarios. Obviously, building such a real hardware architecture in a laboratory is impractical, so the invention employs a client server program to simulate multi-user requests from terminals. In an actual application scenario the data to be inferred are real-time pictures, texts, voices and the like acquired by sensors; the client adopted in the invention instead prepares various data sets in advance, reads them with a program and sends them to the inference server.
Specifically, a number of data sets are prepared in advance in the client software: the COCO data set stores image data for target detection; the SQuAD data set stores sound data for reading comprehension; the wmt data set stores text data for machine translation.
The client program is written in Python. Multithreading in Python cannot exploit multiple cores, so in most cases multiple processes are needed to make full use of a multi-core CPU. Python provides the multiprocessing module for this purpose: it starts sub-processes and executes customized tasks, such as functions, in them, with a programming interface similar to that of the multithreading module threading. The client program therefore adopts the Python multiprocessing technique, starts a plurality of request processes simultaneously, and sends the number and types of tasks in the inference task to the inference server over the HTTP/REST or gRPC protocol. Typical load scenarios include 100%, 50% and 0%; for the inference server to reach these typical loads, a calibration procedure is required.
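The multiprocessing scheme above can be sketched as follows. The request payload fields are illustrative assumptions; a real client would POST each serialized request to the server's HTTP/REST endpoint (or call a gRPC stub), whereas this sketch only counts what each request process would send.

```python
# Sketch of the client load generator: several request processes are started
# with Python multiprocessing, each simulating one user terminal. Payload
# fields are assumptions; network transport is stubbed out for illustration.
import json
import multiprocessing as mp

def build_request(task_type, sample):
    """Serialize one inference task for HTTP/REST transport."""
    return json.dumps({"type": task_type, "data": sample})

def worker(task_type, samples, out_queue):
    # A real worker would POST build_request(...) for each sample;
    # here we only count the requests that would be sent.
    sent = sum(1 for s in samples if build_request(task_type, s))
    out_queue.put(sent)

def run_load(task_type, samples, n_procs):
    """Split the samples across n_procs request processes (one per user)."""
    q = mp.Queue()
    chunks = [samples[i::n_procs] for i in range(n_procs)]
    procs = [mp.Process(target=worker, args=(task_type, c, q)) for c in chunks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sum(q.get() for _ in procs)

if __name__ == "__main__":
    total = run_load("image", list(range(100)), n_procs=4)
```

Because each simulated user runs in its own process rather than a thread, the request streams are generated in parallel on separate cores, which is what lets the client drive the inference server to genuinely high load.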
The COCO data set is the large image data set of the COCO database published by Microsoft; the client program can directly use its dedicated Python API, which makes it convenient to read the image data for target detection.
During calibration, the client program gradually increases the number of requests until the load of the computing accelerator in the inference server reaches its maximum; the number of task requests at that point is defined as 100% and taken as the baseline, half that number is defined as 50%, and the values are stored as JSON files. After calibration, the program generates task requests for the 100%, 50% and 0% load scenarios according to the generated JSON files, each lasting 10 min.
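The calibration output can be sketched as below. The file names and the exact JSON fields are assumptions for illustration; the patent only states that the calibrated task counts for each load level are stored as JSON files and replayed for 10 minutes per level.

```python
# Sketch: after calibration finds the request count that saturates the
# accelerator, derive the 100%/50%/0% levels and write one JSON profile per
# level for the load run to replay. Field and file names are assumptions.
import json

def write_load_profiles(max_requests, duration_s=600, levels=(100, 50, 0)):
    """Return {level: filename} after writing one JSON profile per level.
    duration_s defaults to 600 s, matching the 10 min per level above."""
    files = {}
    for level in levels:
        profile = {
            "load_percent": level,
            "num_requests": max_requests * level // 100,
            "duration_s": duration_s,
        }
        name = f"load_{level}.json"
        with open(name, "w") as f:
            json.dump(profile, f)
        files[level] = name
    return files
```

Persisting the levels as files rather than recomputing them makes the later load runs repeatable: the same JSON profiles can be replayed against the same server configuration to compare power readings across firmware or cooling changes.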
The inference server runs the inference application, which is characterized by acting as a resident service program: it responds to the computing requests sent by the client and can schedule a plurality of processes to run on the computing accelerator card simultaneously according to the number and types of tasks. Specifically, it can respond to the inference computation tasks of a plurality of scenarios, such as voice, text and images, at the same time.
Referring to FIG. 4, an inference application-based power consumption testing device for an edge AI server includes: an edge AI server provided with the inference server, the inference server communicating with the client server over a network, and the inference server connected to the power consumption measuring device through a power supply line, with the power interface of the inference server plugged into a socket provided by the power consumption measuring device. The inference server comprises a transceiver, a data collector, a task scheduler, a container image library and a process pool, and carries out the inference computation tasks; the client server is an ordinary computer device whose running software simulates the client requests; the power consumption measuring device is a power meter that acquires the power of the inference server in real time, a basic function provided by the power meter; the power consumption test of the edge AI server is completed by the device composed of the client server, the inference server in the edge AI server and the power consumption measuring device.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process modifications made on the basis of the specification and drawings, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of patent protection of the present invention.

Claims (10)

1. An inference application-based power consumption testing method for an edge AI server, characterized by comprising the following steps: S100, a client server program simulates multi-user task requests of user terminals, sends the multi-user task requests to an inference server in the edge AI server, and applies multi-level load pressure to the inference server through the multi-user task requests; S200, the inference server performs inference computation according to the multi-user task requests and schedules a plurality of processes to run simultaneously; S300, a power consumption measuring device collects power consumption values while the inference server performs the inference computation.
2. The inference application-based power consumption testing method for an edge AI server according to claim 1, characterized in that: when the client server program simulates the multi-user task requests of the user terminals in step S100, data sets exist on the client server, the data sets comprising the inference data as well as the number and types of the multi-user task requests; the client server program is software on a computer device and adopts the Python multiprocessing technique.
3. The inference application-based power consumption testing method for an edge AI server according to claim 2, characterized in that: the client server is a computer device; the inference data comprises the COCO data set, the SQuAD data set and the wmt data set.
4. The inference application-based power consumption testing method for an edge AI server according to claim 3, characterized in that: the COCO data set stores image data for target detection; the SQuAD data set stores sound data for reading comprehension; the wmt data set stores text data for machine translation.
5. The inference application-based power consumption testing method for an edge AI server according to claim 1, characterized in that: when the client program applies multi-level load pressure to the inference server in step S100, it gradually increases the number of tasks until the load of the inference server reaches its maximum, stores the task counts in a plurality of JSON files according to the gradually increased numbers of tasks, generates task requests for a plurality of load scenarios from the JSON files, and drives inference computation on the inference server according to those task requests, thereby applying multi-level load pressure to the inference server.
6. The inference application-based power consumption testing method for an edge AI server according to claim 1, characterized in that: the inference server is implemented on the basis of containers and comprises a transceiver, a data collector, a task scheduler, a container image library and a process pool.
7. The inference application-based power consumption testing method for an edge AI server according to claim 6, characterized in that: the transceiver is responsible for starting and closing the data collector; the transceiver receives the multi-user task requests sent by the client program and passes them to the task scheduler; the task scheduler is mainly responsible for counting and managing the running multi-user task requests.
8. The inference application-based power consumption testing method for an edge AI server according to claim 6, characterized in that: the container image library stores framework images and model programs, the framework images and model programs being used for the inference computation of the application scenarios.
9. The inference application-based power consumption testing method for an edge AI server according to claim 6, characterized in that: the process pool is mainly responsible for performing inference computation on the multi-user task requests and feeding the results back to the transceiver.
10. An inference application-based power consumption testing device for an edge AI server, characterized by comprising: an edge AI server provided with an inference server, the inference server communicating with a client server over a network, and the inference server connected to a power consumption measuring device through a power supply line, with the power interface of the inference server plugged into a socket provided by the power consumption measuring device; the inference server comprises a transceiver, a data collector, a task scheduler, a container image library and a process pool, and performs the inference computation tasks; the client server is an ordinary computer device whose running software simulates the multi-user task requests of user terminals; the power consumption measuring device is a power meter that acquires the power of the inference server in real time; the power consumption test of the edge AI server is completed by the device composed of the client server, the inference server in the edge AI server and the power consumption measuring device.
CN202010972701.7A 2020-09-16 2020-09-16 Inference application-based power consumption testing method and device for edge AI server Withdrawn CN111949493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010972701.7A CN111949493A (en) 2020-09-16 2020-09-16 Inference application-based power consumption testing method and device for edge AI server


Publications (1)

Publication Number Publication Date
CN111949493A (zh) 2020-11-17

Family

ID=73357471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010972701.7A Withdrawn CN111949493A (en) 2020-09-16 2020-09-16 Inference application-based power consumption testing method and device for edge AI server

Country Status (1)

Country Link
CN (1) CN111949493A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115640201A (en) * 2022-10-27 2023-01-24 中国电子技术标准化研究院 System performance testing method for artificial intelligence server
CN115640201B (en) * 2022-10-27 2023-12-08 中国电子技术标准化研究院 System performance test method for artificial intelligent server
CN116723191A (en) * 2023-08-07 2023-09-08 深圳鲲云信息科技有限公司 Method and system for performing data stream acceleration calculations using acceleration devices
CN116723191B (en) * 2023-08-07 2023-11-10 深圳鲲云信息科技有限公司 Method and system for performing data stream acceleration calculations using acceleration devices


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201117