CN115357493A - Test method, test device, electronic equipment and storage medium - Google Patents

Test method, test device, electronic equipment and storage medium

Info

Publication number
CN115357493A
Authority
CN
China
Prior art keywords
test
container
task
target
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210999579.1A
Other languages
Chinese (zh)
Inventor
汪鹏
迪力亚尔·帕尔哈提
申志鹏
黄明明
杨娟娟
车婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210999579.1A priority Critical patent/CN115357493A/en
Publication of CN115357493A publication Critical patent/CN115357493A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3664 Environments for testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F11/3696 Methods or tools to render software testable

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The disclosure provides a testing method, a testing apparatus, an electronic device and a storage medium, relating to technical fields such as speech recognition, speech synthesis, cloud storage and cloud computing. The scheme is as follows: acquiring the task parameters associated with the task identifier of a task to be tested; loading a target test case according to the test case identifier in the task parameters; loading a target test set according to the test set identifier in the task parameters; sending, based on the target test case, the test data in the target test set to at least one first container in which a service of the server is deployed; and obtaining a test result of performing, with the test data, a target test matched with the task to be tested on the service in the at least one first container. Because the target test set and the target test case are matched with the task to be tested, performing the target test accordingly on the service deployed in the first container makes the test targeted and reliable.

Description

Test method, test device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to the field of speech recognition, speech synthesis, cloud storage, cloud computing, and the like, and in particular, to a testing method, an apparatus, an electronic device, and a storage medium.
Background
Automated testing of a server can determine its performance, stability and pressure resistance, and whether anomalies occur. For example, a test environment can be set up, the latest version of a test tool can be pulled, and a test script can be executed by calling the test tool to simulate testing of the server and obtain a test result, so that the performance, pressure resistance and the like of the server can be determined from the test result.
Disclosure of Invention
The disclosure provides a test method, a test device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a test method including:
acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter comprises a test case identifier and a test set identifier;
loading a target test case matched with the test case identifier according to the test case identifier;
loading a target test set matched with the test set identification according to the test set identification;
sending, based on the target test case, the test data in the target test set to at least one first container in which a service of a server is deployed;
and obtaining a test result of performing, with the test data, a target test matched with the task to be tested on the service in the at least one first container.
According to another aspect of the present disclosure, there is provided a test apparatus including:
a first acquisition module, configured to acquire a task to be tested and acquire, according to the task identifier of the task to be tested, a task parameter associated with the task identifier, wherein the task parameter comprises a test case identifier and a test set identifier;
the loading module is used for loading a target test case matched with the test case identifier according to the test case identifier and loading a target test set matched with the test set identifier according to the test set identifier;
a sending module, configured to send, based on the target test case, the test data in the target test set to at least one first container in which a service of a server is deployed;
and a second acquisition module, configured to obtain a test result of performing, with the test data, a target test matched with the task to be tested on the service in the at least one first container.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a testing method set forth in the above aspect of the disclosure.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the testing method set forth in the above aspect of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the testing method set forth in the above-mentioned aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a diagram of a testing framework of a server;
FIG. 2 is a diagram illustrating various stages of server side testing;
FIG. 3 is a diagram of an L2-level automated test architecture;
fig. 4 is a schematic flow chart of a testing method according to a first embodiment of the disclosure;
fig. 5 is a schematic flow chart of a testing method according to a second embodiment of the disclosure;
fig. 6 is a schematic flow chart of a testing method provided in a third embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a testing method according to a fourth embodiment of the disclosure;
fig. 8 is a schematic flowchart of a testing method provided in the fifth embodiment of the present disclosure;
fig. 9 is a schematic flowchart of a testing method according to a sixth embodiment of the disclosure;
fig. 10 is a schematic flowchart of a testing method provided in a seventh embodiment of the disclosure;
FIG. 11 is a schematic overall flowchart of a server-side automated test provided by the embodiment of the present disclosure;
FIG. 12 is a flowchart illustrating an execution of a scheduling module according to an embodiment of the disclosure;
FIG. 13 is a flowchart illustrating the execution of a test tool according to an embodiment of the present disclosure;
FIG. 14 is a block diagram illustrating an overall architecture of an automated test platform according to an embodiment of the present disclosure;
FIG. 15 is a diagram illustrating a server-side automated testing framework provided by an embodiment of the present disclosure;
fig. 16 is a schematic structural diagram of a testing apparatus according to an eighth embodiment of the present disclosure;
FIG. 17 shows a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, a testing framework of a server may first require setting up the server's test environment and constructing test data; a test script or agent (a software or hardware entity capable of autonomous activity) then simulates a user, reads the test data, and initiates test requests to the service in the test environment; finally, specific indexes are calculated by analyzing the results returned by the service, and a test report is generated.
Among them, test scenarios generally include the following three types:
First, functional test: trigger a specified function of the server by constructing specific parameters and data, and check the returned result.
Second, performance test: test the performance of the server under different concurrency levels through the test set.
Third, stress test: test the pressure resistance of the server by continuously sending the test data in the test set at high concurrency.
Based on the testing framework shown in fig. 1, server-side test automation may include the five stages shown in fig. 2:
The first stage: L0 (manual testing). No pipeline is used; the test environment is set up and deployed by manually copying machine resources with scripts, and test scripts are executed locally for testing.
The second stage: L1 (pipeline assisted). A single-module pipeline, with self-operation and maintenance of the agent and of scheduling and distribution.
The third stage: L2 (partially automated). A multi-module integrated pipeline; the agent uses a unified image file, with self-operation and maintenance of scheduling and distribution.
The fourth stage: L3 (limited automation). A test-service pipeline with self-service deployment of the agent; unified scheduling is performed by an independent scheduling module, test environment setup and tool scheduling are triggered manually, and the process from test execution to index calculation is automated, achieving limited automation.
The fifth stage: L4 (fully automated). An engineering pipeline automatically triggers and executes the test without manual intervention.
Currently, the server-side test belongs to L2-level automated testing. As shown in fig. 3, the L2-level automated test uses a multi-module integrated pipeline. The whole testing process includes the following steps:
1) Configure the server-side architecture topology and apply for container resources for each module (manual);
2) Configure information such as branches and versions of the modules under test (manual);
3) Configure the information required by each stage of the pipeline (including test case selection, the machine path of the test set, and the like) according to the test requirements (manual);
4) Start the pipeline to execute the test (automatic).
The execution flow of each stage in the pipeline includes the following steps:
1. Environment setup: request the test environment platform, through user-defined parameters, to deploy resources to the test environment applied for in advance.
2. Functional test: pull the latest branch version of the test tool code repository, read the user configuration parameters, select the test case and test set list (including an index (such as an ID) and a storage path) pre-stored in the test tool, pull the test data (including audio, text and the like) in the test set according to the test set list, execute it locally through scripts on a resource container provided by the pipeline, check the test result, and generate a report.
3. Performance test: pull the latest branch version of the test tool code repository, read the user configuration parameters, select the pre-stored test case and test set list, pull the test data (including audio, text and the like) in the test set, execute it locally through scripts on a resource container provided by the pipeline, calculate performance indexes, and generate a report.
4. Stress test: pull the latest branch version of the test tool code repository, read the user configuration parameters, select the pre-stored test case and test set list, pull the test data (including audio, text and the like) in the test set, execute it locally through scripts on a resource container provided by the pipeline, and send the test set for a specified number of rounds or continuously according to the user-configured parameters.
However, the pipeline configuration of L2-level automated testing is highly specific: 10 services correspond to 10 pipelines, scheduling and distribution are self-operated and maintained, the scheduling strategies differ in each direction, and operation is difficult for users. Moreover, the test tool is not implemented as a service and its modules are highly coupled, i.e. the test tool integrates the test set, test cases, test execution and index calculation into one whole; extensibility is poor, any configuration change requires re-submitting the test tool code repository, and increasingly complex and variable cases, data and scenarios cannot be handled.
Further, the following problems also exist: 1) each stage in the pipeline involves highly repetitive work, and setup and execution take a long time; 2) there is no project flow control, data analysis across test versions is difficult, the testing threshold is high, and the failure rate of pipeline test execution is high; 3) index acquisition and calculation are complicated, test results are not persisted, and test data are difficult to analyze; 4) extensibility is poor, and it is difficult to connect other services; 5) there are problems such as inconsistent test tool versions, multiple copies of test data, and multiple parameter configurations.
Therefore, in view of at least one of the above problems, the present disclosure provides a testing method, an apparatus, an electronic device, and a storage medium.
The test method, apparatus, electronic device, and storage medium of the embodiments of the present disclosure are described below with reference to the accompanying drawings.
Fig. 4 is a schematic flow chart of a testing method according to a first embodiment of the disclosure.
The testing method of the embodiment of the disclosure can be applied to a server.
As shown in fig. 4, the test method may include the steps of:
step 401, acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter includes a test case identifier and a test set identifier.
In the embodiment of the present disclosure, the task to be tested may be created by a relevant person, and the task identifier of the task to be tested is used to uniquely identify the task to be tested, for example, the task identifier may be a task ID.
In the embodiment of the present disclosure, the task parameters associated with the task identifier may be obtained or loaded according to the task identifier of the task to be tested. For example, when a task to be tested is created, the task parameters may be set by relevant personnel based on experience, or the task parameters corresponding to different test tasks may be pre-configured through a configuration template and stored on the cloud platform, so that the task parameters corresponding to the task to be tested may be loaded from the cloud platform. The task parameters may include a test case identifier (such as a test case ID) and a test set identifier (such as a test set ID).
It should be noted that the task parameters may include not only the test case identifier and the test set identifier, but also other parameters related to the task to be tested. For example, when the test type of the task to be tested is a stress test, the task parameters may also include the concurrency number (i.e. the amount of test data sent subsequently), an upper limit on the concurrency number, and other parameters.
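As an illustration only (the field names, identifiers and the TASK_PARAMS store below are hypothetical, not taken from the patent), such task parameters might be modeled and looked up as follows:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TaskParams:
        """Task parameters associated with a task identifier (illustrative)."""
        test_case_id: str                         # identifies the target test case
        test_set_id: str                          # identifies the target test set
        concurrency: Optional[int] = None         # e.g. set for stress tests
        concurrency_limit: Optional[int] = None   # upper limit on the concurrency number

    # Hypothetical stand-in for parameters pre-configured on the cloud platform.
    TASK_PARAMS = {
        "task-001": TaskParams(test_case_id="case-asr-01",
                               test_set_id="set-audio-01",
                               concurrency=100, concurrency_limit=200),
    }

    def get_task_params(task_id: str) -> TaskParams:
        """Acquire the task parameters associated with the given task identifier."""
        return TASK_PARAMS[task_id]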
Step 402, loading a target test case matched with the test case identifier according to the test case identifier.
In the embodiment of the disclosure, the target test case matched with the test case identifier can be loaded according to the test case identifier.
As an example, the correspondence between test case identifiers and test cases may be stored on the cloud platform, so that the correspondence may be queried according to the test case identifier to obtain the corresponding test case, which is denoted as the target test case in the present disclosure.
Step 403, loading a target test set matched with the test set identifier according to the test set identifier.
In the embodiment of the present disclosure, a target test set matching the test set identifier may be loaded according to the test set identifier.
As an example, the corresponding relationship between the test set identifier and the test set may be stored in the cloud platform, so that in the present disclosure, the corresponding relationship may be queried according to the test set identifier in the task parameter to obtain the test set corresponding to the test set identifier, which is denoted as a target test set in the present disclosure.
Wherein the target test set may include a plurality of test data, which may include, but is not limited to, audio, video, text, pictures, and the like.
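As a minimal sketch of the two lookups above, assuming simple in-memory stand-ins for the cloud-platform correspondences (all names and fields are illustrative):

    # Hypothetical stand-ins for the identifier-to-object correspondences
    # stored on the cloud platform.
    TEST_CASES = {
        "case-asr-01": {
            # configuration parameters used later to construct the test scenario
            "config": {"test_type": "performance", "send_interval_ms": 20},
            # reference to the code logic that compares results against labels
            "check": "compare_recognition_to_label",
        },
    }
    TEST_SETS = {
        "set-audio-01": [
            {"path": "audio/sample_001.wav", "label": "call Zhang San"},
            {"path": "audio/sample_002.wav", "label": "search for popular dramas"},
        ],
    }

    def load_target_test_case(test_case_id: str) -> dict:
        """Load the test case matching the test case identifier."""
        return TEST_CASES[test_case_id]

    def load_target_test_set(test_set_id: str) -> list:
        """Load the test set matching the test set identifier."""
        return TEST_SETS[test_set_id]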
Step 404, based on the target test case, sending the test data in the target test set to at least one first container in which a service of the server is deployed.
In the embodiment of the present disclosure, the resources and services of the server may be deployed in at least one container (referred to as a first container in the present disclosure), where the resources may include a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), memory, and the like, and the services may include a transmission service, a speech recognition service (also called a speech recognition engine), a speech synthesis service, and the like.
In the embodiment of the present disclosure, the test data in the target test set may be sent, based on the target test case, to the at least one first container in which a service of the server is deployed.
Step 405, obtaining a test result of performing, with the test data, a target test matched with the task to be tested on the service in the at least one first container.
The target test is determined according to a test type of a task to be tested, wherein the test type may include, but is not limited to, a performance test, a stress test, a function test, an anomaly test, a stability test, and the like.
For example, if the test type is a performance test, the target test is a performance test; if the test type is a stress test, the target test is a stress test; and if the test type is a functional test, the target test is a functional test.
In the embodiment of the disclosure, the test data may be used to perform a target test matching the task to be tested on the service in the at least one first container to obtain a test result. Therefore, the simulation test of the service end can be realized by carrying out the target test on the service in the at least one first container, so that the performance, the pressure resistance and the like of the service end can be determined according to the test result.
According to the testing method of this embodiment, the task parameters associated with the task identifier are acquired according to the task identifier of the task to be tested; the target test case is loaded according to the test case identifier in the task parameters; the target test set is loaded according to the test set identifier in the task parameters; based on the target test case, the test data in the target test set is sent to at least one first container in which a service of the server is deployed; and a test result of performing, with the test data, a target test matched with the task to be tested on the service in the at least one first container is obtained. Since the test sets and test cases corresponding to different test tasks may differ, acquiring the target test set and target test case matched with the task to be tested and performing the target test accordingly on the service deployed in the first container make the test targeted and reliable, so that the performance, pressure resistance and the like of the server can be determined accurately and reliably from the test result.
In the technical scheme of the present disclosure, the processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the related user are all performed under the premise of obtaining the consent of the user, and all meet the regulations of the related laws and regulations, and do not violate the good custom of the public order.
In order to clearly illustrate how, in the above embodiments, the test data in the target test set is sent, based on the target test case, to the at least one first container in which a service of the server is deployed, the present disclosure further provides a testing method.
Fig. 5 is a schematic flow chart of a testing method provided in the second embodiment of the disclosure.
As shown in fig. 5, the test method may include the steps of:
step 501, a task to be tested is obtained, and a task parameter associated with a task identifier is obtained according to the task identifier of the task to be tested, wherein the task parameter includes a test case identifier and a test set identifier.
Step 502, loading a target test case matched with the test case identifier according to the test case identifier.
Step 503, loading the target test set matched with the test set identifier according to the test set identifier.
For the explanation of steps 501 to 503, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 504, according to the task type of the task to be tested, determining a target testing tool matched with the task type.
In the embodiment of the disclosure, different tasks to be tested may have different task types, and the matching target test tools may differ accordingly. For example, when the task to be tested is a speech processing test task (e.g. a speech recognition (ASR) test task or a speech synthesis test task), the target test tool may be a speech processing test tool (e.g. a speech recognition test tool or a speech synthesis test tool); when the task to be tested is a Redis (Remote Dictionary Server) test task, the target test tool may be a Redis test tool; when the task to be tested is a link test task (e.g. an LC (long connection) link test), the target test tool may be an LC link test tool; when the task to be tested is an interface test task, the target test tool may be an interface test tool; and when the task to be tested is an audio and video call test task (e.g. in a scenario where client A initiates a voice or video call to client B and the audio and video data of both clients are forwarded through the server), the target test tool may be an audio and video call test tool.
In the embodiment of the disclosure, the corresponding relationship between different task types and the testing tool may be configured in advance, so that in the disclosure, after the task type of the task to be tested is determined, the corresponding relationship may be queried according to the task type of the task to be tested, so as to determine the target testing tool matched with the task type of the task to be tested.
Still taking the above as an example, assume that the task type of the speech processing test task is type 1, which corresponds to the speech processing test tool, and that the task type of the Redis test task is type 2, which corresponds to the Redis test tool. If the task type of the task to be tested is type 1, it may be determined that the target test tool is the speech processing test tool.
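The pre-configured correspondence between task types and test tools could be as simple as a lookup table; the type keys and tool names below are assumptions for illustration:

    # Hypothetical task-type -> test-tool correspondence, configured in advance.
    TOOL_BY_TASK_TYPE = {
        "type-1": "speech-processing-test-tool",   # speech recognition / synthesis tasks
        "type-2": "redis-test-tool",               # Redis test tasks
        "type-3": "lc-link-test-tool",             # long-connection link test tasks
        "type-4": "interface-test-tool",           # interface test tasks
        "type-5": "av-call-test-tool",             # audio and video call test tasks
    }

    def determine_target_tool(task_type: str) -> str:
        """Query the correspondence to determine the matching target test tool."""
        return TOOL_BY_TASK_TYPE[task_type]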
Step 505, determining a second container from the candidate containers according to the running state of the candidate containers deploying the target test tool.
The running state may include an idle state and an occupied state.
In the embodiment of the present disclosure, the number of the candidate containers for deploying the target test tool may be at least one, and the second container may be determined from the candidate containers according to the operating states of the candidate containers for deploying the target test tool, where the operating state of the second container is an idle state.
In a possible implementation manner of the embodiment of the present disclosure, at least one third container may be determined from each candidate container according to an operation state of each candidate container deploying the target testing tool, where the operation state of the third container is an idle state, and then, the second container may be determined from each third container according to a testing type of a task to be tested.
The test types may include a function test, a pressure test, and a performance test, among others.
For example, in a stress test scenario, test data needs to be sent continuously at a high concurrency level (for example, the target test tool may need to simulate 100 or 200 clients); the second container then needs enough space to store the log data generated by the target test tool, so a third container with a larger resource amount can be selected as the second container.
For the performance test and the functional test, one container may be randomly selected from the third containers as the second container, or the third container with a larger resource amount may be selected as the second container.
In another possible implementation manner of the embodiment of the present disclosure, the task to be tested may be associated with at least one task that has already been tested. A third container whose running state is idle may be determined from the candidate containers deploying the target test tool, and then the container called by the already-tested task may be determined from the third containers and used as the second container.
For example, the test tasks created by the same person may be associated with the same project, and the test tasks under a project, being associated with one another, may be scheduled to the same container. Thus, after one test task in the project completes its test, a task to be tested that has not yet completed its test may call the container that was called by the completed task.
Therefore, the container in which the required test tool is deployed can be determined in different ways, improving the flexibility and applicability of the method.
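A sketch of such container selection, assuming the running state and resource amount of each candidate container can be queried directly (all names are illustrative):

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Container:
        container_id: str
        state: str       # "idle" or "occupied"
        resources: int   # available resource amount, in an illustrative unit

    def determine_second_container(candidates: List[Container], test_type: str,
                                   prior_container_id: Optional[str] = None) -> Container:
        """Pick the second container that will run the target test tool (sketch)."""
        # Third containers: candidates whose running state is idle.
        idle = [c for c in candidates if c.state == "idle"]
        if not idle:
            raise RuntimeError("no idle candidate container available")
        # If an associated, already-tested task used a container, reuse it.
        if prior_container_id is not None:
            for c in idle:
                if c.container_id == prior_container_id:
                    return c
        # A stress test needs room for the tool's log data: take the largest container.
        if test_type == "stress":
            return max(idle, key=lambda c: c.resources)
        # For functional and performance tests, any idle container will do.
        return idle[0]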
Step 506, a target testing tool in the second container is called to construct a testing scene matched with the testing type of the task to be tested according to the target testing case.
In the embodiment of the disclosure, a target test tool in the second container may be called to construct a test scenario matched with the test type of the task to be tested according to the target test case.
As an example, the target test case may include two parts, one of which is configuration parameters for constructing a test scenario, and the other of which is code logic for comparing differences between test results and label information on test data.
For example, when the task to be tested is a speech recognition test task, the test result may include the speech recognition result obtained by the service in the at least one first container performing speech recognition on the test data, and the label information may be the text information labeled on the test data. The code logic then compares the difference between the speech recognition result and the text information.
In the embodiment of the disclosure, the configuration parameters can be obtained from the target test case, and the target test tool in the second container can be called to construct, according to the configuration parameters, the test scenario matched with the test type of the task to be tested.
For example, the configuration parameters in the target test case corresponding to a stress test may differ from the configuration parameters in the target test case corresponding to a performance test.
For example, for a stress test, test data needs to be sent continuously without interruption, so a test scenario that continuously sends test data may be constructed according to the configuration parameters in the target test case; for a functional test or a performance test, a test scenario that sends test data over a short period may be constructed according to the configuration parameters in the target test case.
For another example, taking audio as the test data: in a speech recognition scenario where one wake-up is followed by continuous speech recognition, a scenario that continuously sends audio may be constructed according to the configuration parameters in the target test case.
In conclusion, the test scenario can be effectively constructed according to the configuration parameters in the target test case, so that functional, performance or stress tests of the server can be carried out effectively.
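A sketch of scenario construction from the configuration parameters in the target test case; the parameter names and scenario shapes below are assumptions:

    def build_test_scenario(target_test_case: dict, test_type: str) -> dict:
        """Construct a test scenario matching the test type from the case's
        configuration parameters (illustrative field names)."""
        config = target_test_case["config"]
        if test_type == "stress":
            # Stress tests send test data continuously, without interruption.
            return {"mode": "continuous",
                    "concurrency": config.get("concurrency", 100)}
        # Functional and performance tests send test data over a short period.
        return {"mode": "burst",
                "rounds": config.get("rounds", 1),
                "send_interval_ms": config.get("send_interval_ms", 20)}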
Step 507, in the test scenario, a target test tool in the second container is called to send the test data in the target test set to at least one first container.
In the embodiment of the present disclosure, in the test scenario, the target test tool in the second container may be called to send the test data in the target test set to the at least one first container. That is, the target test tool may simulate client operations and send the test data in the target test set to the first container in which a service of the server is deployed, so as to implement a simulation test of the server.
Here, a client refers to a software program, running on an electronic device, that provides services for users.
The electronic device may be any device with computing capability, for example, a personal computer, a mobile terminal, and the like, and the mobile terminal may be a hardware device with various operating systems, touch screens, and/or display screens, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
Step 508, obtaining a test result of performing, with the test data, a target test matched with the task to be tested on the service in the at least one first container.
For the explanation of step 508, reference may be made to the related description in any embodiment of the present disclosure, and details are not repeated herein.
The test method of the embodiment of the disclosure decouples the test cases and the test sets from the test tool: when dealing with increasingly complex and variable cases, data and scenarios, only the test cases and test sets need to be updated independently, and the test tool need not be modified or reconfigured, which reduces iteration difficulty and improves the applicability and extensibility of the method.
In order to clearly illustrate how the test result is obtained in any embodiment of the present disclosure, the present disclosure also provides a test method.
Fig. 6 is a schematic flow chart of a testing method provided in the third embodiment of the present disclosure.
As shown in fig. 6, the test method may include the steps of:
step 601, acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter includes a test case identifier and a test set identifier.
Step 602, loading a target test case matched with the test case identifier according to the test case identifier.
Step 603, loading the target test set matched with the test set identifier according to the test set identifier.
Step 604, based on the target test case, sending the test data in the target test set to at least one first container in which a service of the server is deployed.
For the explanation of steps 601 to 604, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 605, receiving a response result sent by the at least one first container, wherein the response result is generated by the service in the at least one first container in response to the test data.
In embodiments of the present disclosure, the service in the at least one first container may generate a response result in response to the test data.
As an example, when the task to be tested is a speech recognition test task, the test data may be audio, and the service in the at least one first container may perform speech recognition on the test data to obtain a response result (i.e., a speech recognition result).
As another example, when the task to be tested is a speech synthesis test task, the test data may be text, and the service in the at least one first container may perform speech synthesis on the test data to obtain a response result (i.e., a speech synthesis result).
In one possible implementation manner of the embodiment of the present disclosure, the response result sent by the at least one first container may be received by the second container. That is, after the service in the at least one first container generates the response result, the response result may be sent to the second container, so that the server may receive the response result sent by the at least one first container through the second container.
Step 606, in the process that the service in the at least one first container responds to the test data, performing operation monitoring on the at least one first container to obtain a monitoring result.
In this embodiment of the present disclosure, in the process of responding to the test data by the service in the at least one first container, the service end may perform operation monitoring on the at least one first container to obtain a monitoring result.
The monitoring result may include monitoring information such as the resources (e.g., CPU, GPU and memory) occupied by the service in the at least one first container in responding to the test data, and the functions (e.g., a call function, a voice call function, a search function) called by the service in responding to the test data.
Step 607, the response result and the monitoring result are used as the test result.
In the embodiment of the disclosure, the response result and the monitoring result can be used as the test result, so that the performance, the pressure resistance and the like of the service end can be verified according to the test result.
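As a sketch, assuming each monitoring sample carries CPU/memory readings and the functions called, the response result and the monitoring result might be combined into one test result as follows:

    def assemble_test_result(responses: list, monitor_samples: list) -> dict:
        """Combine the services' response results with the containers'
        operation-monitoring results into one test result (illustrative)."""
        return {
            "responses": responses,  # generated by the services in response to test data
            "monitoring": {
                "cpu_peak": max(s["cpu"] for s in monitor_samples),
                "mem_peak": max(s["mem"] for s in monitor_samples),
                "functions_called": sorted({fn for s in monitor_samples
                                            for fn in s.get("functions", [])}),
            },
        }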
According to the test method of the embodiment of the disclosure, the test result includes not only the response result generated by the service in the first container in response to the test data, but also the monitoring result obtained by monitoring the operation of the first container. This enriches the test result, so that the performance, pressure resistance and the like of the server can be determined from it more accurately and reliably.
In a possible implementation manner of the embodiment of the present disclosure, when the test type of the task to be tested is a performance test, the test result may be analyzed to determine the performance of the server. The above process is described in detail with reference to example four.
Fig. 7 is a schematic flowchart of a testing method provided in the fourth embodiment of the present disclosure.
As shown in fig. 7, the test method may include the steps of:
step 701, acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter includes a test case identifier and a test set identifier.
Step 702, loading a target test case matched with the test case identifier according to the test case identifier, and loading a target test set matched with the test set identifier according to the test set identifier.
Step 703, determining a target testing tool matched with the task type according to the task type of the task to be tested.
Step 704, determining a second container from the candidate containers according to the operation status of the candidate containers for deploying the target testing tool.
Step 705, a target testing tool in the second container is called to construct a testing scene matched with the testing type of the task to be tested according to the target testing case.
The test type here may be a first-stage performance test.
Step 706, in the test scenario, the target test tool in the second container is called to send the test data in the target test set to at least one first container.
For explanation of steps 701 to 706, reference may be made to relevant descriptions in any embodiment of the present disclosure, which are not described herein again.
Step 707, receiving the response result of the first stage sent by the at least one first container through the second container.
Wherein the response result of the first stage is generated by the first stage processing of the test data by the service in the at least one first container.
In an embodiment of the disclosure, a service in at least one first container may perform a first stage processing on test data to generate a first stage response result.
For example, with audio as the test data, the first stage may be a speech recognition stage; with text as the test data, the first stage may be a speech synthesis stage.
Step 708, during the first-stage processing of the test data by the service in the at least one first container, performing operation monitoring on the at least one first container to obtain a first-stage monitoring result.
In the embodiment of the present disclosure, in the process that the service in the at least one first container performs the first-stage processing on the test data, the service end may perform operation monitoring on the at least one first container to obtain a monitoring result of the first stage.
The monitoring result of the first stage may include monitoring information such as resources occupied by the service in the at least one first container when the test data is processed in the first stage, and functions (such as a call function, a voice call function, a search function, and the like) called by the service in the at least one first container when the test data is processed in the first stage.
Step 709, generating a test result according to the response result of the first stage and the monitoring result of the first stage.
In the embodiment of the present disclosure, a test result may be generated according to the response result and the monitoring result of the first stage.
As an example, when the service in the at least one first container is used only for the first stage processing of the test data, the response result of the first stage and the monitoring result of the first stage may be used as the test result.
When the service in the at least one first container not only performs the first-stage processing on the test data but also processes it in other stages, the response results of the other stages sent by the at least one first container and received through the second container may also be obtained, where a response result of another stage is generated by the service in the at least one first container processing the test data in that stage. In addition, while the service in the at least one first container processes the test data in the other stages, the at least one first container is monitored to obtain the monitoring results of those stages. The response result and monitoring result of the first stage, together with the response results and monitoring results of the other stages, can then be used as the test result.
Step 710, a target test tool in the second container is called to execute the target test case to determine a first difference between a response result of the first stage in the test result and an expected result of the first stage marked on the test data.
In embodiments of the present disclosure, a target test tool in the second container may be invoked to execute code logic in the target test case to determine a first difference between the response result of the first stage and an expected result of the first stage noted on the test data.
For example, taking the task to be tested as a speech recognition test task, the response result of the first stage may include the speech recognition result obtained by the service in the at least one first container performing speech recognition on the test data, and the expected result of the first stage may be the text information labeled on the test data (i.e. the audio).
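The patent does not fix how the first difference is computed; for speech recognition output, a character-level edit distance between the response result and the labeled text is one plausible choice, sketched below:

    def first_difference(recognized: str, expected: str) -> int:
        """Levenshtein distance between the first-stage response result and the
        expected result labeled on the test data (the metric is an assumption)."""
        m, n = len(recognized), len(expected)
        dp = list(range(n + 1))          # dp[j] = distance for prefixes (0, j)
        for i in range(1, m + 1):
            prev, dp[0] = dp[0], i       # prev holds the old diagonal value
            for j in range(1, n + 1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,               # delete from recognized
                            dp[j - 1] + 1,           # insert into recognized
                            prev + (recognized[i - 1] != expected[j - 1]))  # substitute
                prev = cur
        return dp[n]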
Step 711, determining a first test index corresponding to the performance test of the first stage according to the first difference.
In the embodiment of the present disclosure, the first test indicator is in a negative correlation with the first difference, that is, the smaller the first difference, the larger the first test indicator is, and conversely, the larger the first difference, the smaller the first test indicator is.
Step 712, analyzing the monitoring result of the first stage in the test result to determine the resource occupied by the service response test data in the at least one first container.
In the embodiment of the present disclosure, the monitoring result in the first stage may be analyzed to obtain the resource occupied by the service response test data in the at least one first container.
Step 713, determining a second test indicator corresponding to the performance test of the first stage according to the resources occupied by the service in the at least one first container.
In this embodiment of the disclosure, a second test index corresponding to the performance test of the first stage may be determined according to the resource occupied by the service in the at least one first container. The second test index and the occupied resource amount are in a negative correlation relationship, namely the smaller the occupied resource amount is, the larger the second test index is, and on the contrary, the larger the occupied resource amount is, the smaller the second test index is.
Step 714, determining the performance of the server at the first stage according to the first test index and the second test index.
In the embodiment of the present disclosure, the performance of the server at the first stage may be determined according to the first test index and the second test index.
For example, the larger the first test index, the better the performance of the server at the first stage, and the smaller the first test index, the worse that performance; likewise, the larger the second test index, the better the performance of the server at the first stage, and the smaller the second test index, the worse that performance.
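The patent only requires that the first and second test indexes be negatively correlated with the first difference and the occupied resources respectively; one simple functional form satisfying that requirement (an assumption, not the patent's formula) is:

    def negative_corr_index(value: float, scale: float = 1.0) -> float:
        """Map a 'smaller is better' quantity onto a score in (0, 1]; the score
        shrinks as the quantity grows. The exact form is an assumption."""
        return scale / (scale + value)

    # Illustrative values: a first difference of 3 and 45% resource occupancy.
    first_test_index = negative_corr_index(3.0)
    second_test_index = negative_corr_index(0.45)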
According to the test method of the embodiment of the disclosure, by decoupling test execution from index calculation, only the test cases and test sets need to be updated independently when dealing with increasingly complex and variable cases, data and scenarios; the test tool need not be modified or reconfigured, which improves the applicability and extensibility of the method.
In order to clearly illustrate how the performance of the server in the first stage is determined in any embodiment of the present disclosure, the present disclosure also proposes a test method.
Fig. 8 is a schematic flowchart of a testing method provided in the fifth embodiment of the present disclosure.
As shown in fig. 8, the test method may include the steps of:
step 801, acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter includes a test case identifier and a test set identifier.
Step 802, loading a target test case matched with the test case identifier according to the test case identifier, and loading a target test set matched with the test set identifier according to the test set identifier.
Step 803, determining a target test tool matched with the task type according to the task type of the task to be tested.
Step 804, determining a second container from the candidate containers according to the operating state of the candidate containers for deploying the target test tool.
Step 805, calling a target test tool in the second container to construct a test scenario matched with the test type of the task to be tested according to the target test case.
Step 806, in the test scenario, invoking the target test tool in the second container to send the test data in the target test set to the at least one first container.
Step 807, receiving, through the second container, the response result of the first stage sent by the at least one first container.
Wherein the response result of the first stage is generated by the first stage processing of the test data by the service in the at least one first container.
Step 808, during the first-stage processing of the test data by the service in the at least one first container, performing operation monitoring on the at least one first container to obtain a monitoring result of the first stage.
Step 809, generating a test result according to the response result of the first stage and the monitoring result of the first stage.
Step 810, invoking the target test tool in the second container to execute the target test case to determine a first difference between the response result of the first stage and the expected result of the first stage marked on the test data.
Step 811, according to the first difference, a first test index corresponding to the performance test of the first stage is determined.
Step 812, analyzing the monitoring result of the first stage to determine the resource occupied by the service response test data in the at least one first container.
Step 813, determining a second test indicator corresponding to the performance test of the first stage according to the resource occupied by the service in the at least one first container.
For the explanation of steps 801 to 813, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 814, determining the first-packet response time, the total execution duration and the hard delay of the test data.
In this embodiment of the present disclosure, the first packet response time is a duration between a first time and a second time, where the first time is a time when the target test tool in the second container sends a first data packet in the test data, and the second time is a time when the second container receives the first data packet in the response result of the first stage. The size of each data packet may be fixed, and may include one frame of data, or may also include multiple frames of data, which is not limited in this disclosure.
That is, the first packet response time is the time interval between the second container sending the first packet and receiving the first packet.
In the embodiment of the present disclosure, the total execution duration is a duration between the first time and a third time, where the third time is a time when the last data packet in the response result of the first stage is received by the second container.
That is, the total execution duration is the time interval between the second container sending the first packet and receiving the last packet in the first stage response result.
In an embodiment of the disclosure, the hard delay is a time period between a fourth time and a third time, wherein the fourth time is a time when the target test tool in the second container sends a last data packet in the test data.
That is, the hard delay is the time interval between the second container sending the last packet to the receipt of the last packet in the first stage's response result.
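Under the timestamp definitions above (first and fourth times on the sending side, second and third times on the receiving side), the three indicators reduce to simple differences; a sketch:

    def timing_indicators(t_send_first: float, t_send_last: float,
                          t_recv_first: float, t_recv_last: float) -> dict:
        """Timing indicators per the definitions above (timestamps in seconds)."""
        return {
            # first time -> second time: first test packet sent until the first
            # packet of the first-stage response result is received
            "first_packet_response_time": t_recv_first - t_send_first,
            # first time -> third time: first test packet sent until the last
            # response packet is received
            "total_execution_duration": t_recv_last - t_send_first,
            # fourth time -> third time: last test packet sent until the last
            # response packet is received
            "hard_delay": t_recv_last - t_send_last,
        }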
Step 815, determining a third test index corresponding to the performance test of the first stage according to at least one of the first-packet response time, the total execution duration and the hard delay of the test data.
In the embodiment of the present disclosure, the third test index corresponding to the performance test may be determined according to at least one of the first-packet response time, the total execution duration, and the hard delay of the test data.
The third test index is negatively correlated with the first-packet response time: the shorter the first-packet response time, the larger the third test index; conversely, the longer the first-packet response time, the smaller the third test index.
The third test index and the total execution duration are in a negative correlation relationship, that is, the shorter the total execution duration is, the larger the third test index is, and conversely, the longer the total execution duration is, the smaller the third test index is.
The third test index and the hard delay are in a negative correlation relationship, namely the shorter the hard delay is, the larger the third test index is, and conversely, the longer the hard delay is, the smaller the third test index is.
Step 816, determining the performance of the server at the first stage according to the first test index, the second test index and the third test index.
In the embodiment of the disclosure, the performance of the server at the first stage may be determined according to the first test index, the second test index and the third test index.
For example, the larger the first test index is, the better the performance of the server in the first stage is, whereas the smaller the first test index is, the worse the performance of the server in the first stage is; for another example, the larger the second test index is, the better the performance of the server in the first stage is, otherwise, the smaller the second test index is, the worse the performance of the server in the first stage is; for another example, the larger the third test index is, the better the performance of the server in the first stage is, whereas the smaller the third test index is, the worse the performance of the server in the first stage is.
According to the testing method, the performance of the server is determined according to the plurality of testing indexes, and the accuracy and reliability of the determination result can be improved.
In a possible implementation manner of the embodiment of the present disclosure, when the test type of the task to be tested is a performance test, the performance of the server may be determined according to the response result and the monitoring result. The above process is described in detail with reference to example six.
Fig. 9 is a schematic flowchart of a testing method provided in the sixth embodiment of the present disclosure.
As shown in fig. 9, the test method may include the steps of:
step 901, acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter includes a test case identifier and a test set identifier.
Step 902, loading a target test case matched with the test case identifier according to the test case identifier, and loading a target test set matched with the test set identifier according to the test set identifier.
Step 903, determining a target test tool matched with the task type according to the task type of the task to be tested.
Step 904, determining a second container from the candidate containers according to the operating status of the candidate containers for deploying the target test tool.
Step 905, calling a target test tool in the second container to construct a test scene matched with the test type of the task to be tested according to the target test case.
The test type may be a second stage performance test, which may also be referred to as a functional test.
Step 906, in the test scenario, invoking the target test tool in the second container to send the test data in the target test set to at least one first container.
For explanation of steps 901 to 906, reference may be made to relevant descriptions in any embodiment of the present disclosure, and details are not described herein.
Step 907, receiving the response result of the second stage sent by the at least one first container through the second container.
Wherein the response result of the second stage is generated by the second stage processing of the test data by the service in the at least one first container.
In the disclosed embodiment, the service in the at least one first container may perform a second stage process on the test data to generate a second stage response result.
For example, taking the test data as audio, the first stage in the above embodiment may be a speech recognition stage, and the second stage may be a stage of further processing based on the speech recognition result, such as a telephone dialing stage or a search stage.
For example, assuming the test data (audio) is "Call Zhang San": in the first stage, the audio may be recognized by the service in the first container based on a speech recognition technology to obtain a speech recognition result, and the response result of the first stage may be that speech recognition result; in the second stage, a call function may be invoked according to the speech recognition result to dial Zhang San, and the response result of the second stage may be, for example, an indication that the call to Zhang San is being placed or has been completed.
For another example, assuming the test data (audio) is "Search for popular dramas": in the first stage, the audio may be recognized by the service in the first container based on a speech recognition technology to obtain a speech recognition result, and the response result of the first stage may be that speech recognition result; in the second stage, a search function may be invoked according to the speech recognition result to search for popular dramas, and the response result of the second stage may be the search result or an indication that popular dramas have been searched.
Step 908, during the second-stage processing of the test data by the service in the at least one first container, performing operation monitoring on the at least one first container to obtain a second-stage monitoring result.
In the embodiment of the present disclosure, in the process that the service in the at least one first container performs the second-stage processing on the test data, the service end may perform operation monitoring on the at least one first container to obtain a monitoring result of the second stage.
Step 909, invoking the target test tool in the second container to execute the target test case to determine a second difference between the response result of the second stage and the expected result of the second stage marked on the test data.
In embodiments of the present disclosure, the target test tool in the second container may be invoked to execute code logic in the target test case to determine a second difference between the response results of the second stage and the expected results of the second stage noted on the test data.
For example, if the expected result of the second stage is a call to "Zhang San" while the response result of the second stage indicates a call to a different contact, the call targets differ and the second difference is correspondingly large.
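As a minimal sketch of determining the second difference, the second-stage response result could be compared with the annotated expected result using a normalized string dissimilarity. The disclosure leaves the concrete comparison to the target test case, so the measure below is an assumption:

```python
# Sketch: compare the second-stage response result with the expected result
# annotated on the test data. Normalized dissimilarity is an assumption.
from difflib import SequenceMatcher

def second_difference(response: str, expected: str) -> float:
    """0.0 means identical, 1.0 means completely different."""
    return 1.0 - SequenceMatcher(None, response, expected).ratio()

print(second_difference("calling Zhang San", "calling Zhang San"))  # 0.0
print(second_difference("calling Li Si", "calling Zhang San"))      # > 0
```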
Step 910, analyzing the monitoring result of the second stage to determine a first function called by the service in the at least one first container in response to the test data.
In the embodiment of the present disclosure, the monitoring result of the second stage may be analyzed to obtain a first function called by the service in the at least one first container in response to the test data.
Step 911, a third difference between the first function and the second function of the second stage marked on the test data is determined.
In the embodiment of the present disclosure, the test data may be marked with a second function to be called in the second stage. For example, assuming the test data (audio) is "Call Zhang San", the second function of the second stage marked on the test data is the "call function".
In an embodiment of the present disclosure, a third difference between the first function and the second function of the second stage noted on the test data may be determined.
And step 912, determining the performance of the server at the second stage according to the second difference and/or the third difference.
In the embodiment of the present disclosure, the performance of the server in the second stage may be determined according to the second difference and/or the third difference.
For example, the smaller the second difference, the better the performance of the server in the second stage, and the larger the second difference, the worse it is; likewise, the smaller the third difference, the better the performance of the server in the second stage, and the larger the third difference, the worse it is.
As an example, the performance test of the second stage may be referred to as a functional test. If the second difference is smaller than a set first difference threshold and/or the third difference is smaller than a set second difference threshold, it is determined that the server passes the functional test; if the second difference is greater than or equal to the first difference threshold and/or the third difference is greater than or equal to the second difference threshold, it is determined that the server fails the functional test. A minimal sketch of this verdict follows.
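The sketch below uses the conjunctive form of the check; the threshold values and names are hypothetical configuration values, not fixed by the disclosure:

```python
# Sketch of the functional-test verdict described above. Thresholds and
# their names are hypothetical configuration values.

def functional_test_passed(second_diff: float, third_diff: float,
                           first_diff_threshold: float = 0.2,
                           second_diff_threshold: float = 0.2) -> bool:
    # Pass only if both differences stay below their thresholds; the
    # disclosure also allows an and/or combination of the two checks.
    return (second_diff < first_diff_threshold
            and third_diff < second_diff_threshold)

print(functional_test_passed(0.05, 0.0))  # True  -> passes the functional test
print(functional_test_passed(0.5, 0.0))   # False -> fails the functional test
```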
According to the testing method of the embodiment of the present disclosure, the performance of the server is determined from multiple kinds of data, which improves the accuracy and reliability of the determination result.
In a possible implementation manner of the embodiment of the present disclosure, when the test type is a pressure test, whether the server passes the pressure test may be determined according to the monitoring result. The above process is described in detail with reference to the seventh embodiment.
Fig. 10 is a schematic flowchart of a testing method provided in the seventh embodiment of the present disclosure.

As shown in fig. 10, the test method may include the following steps:

Step 1001, acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter includes a test case identifier and a test set identifier.
Step 1002, loading a target test case matched with the test case identifier according to the test case identifier.

Step 1003, loading a target test set matched with the test set identifier according to the test set identifier.
Step 1004, based on the target test case, sending the test data in the target test set to at least one first container in which a service of the server is deployed.
Step 1005, obtaining a test result obtained by performing a target test matched with the task to be tested on the service in the at least one first container by using the test data.
The test result comprises a response result and a monitoring result, wherein the response result is generated by the service in the at least one first container in response to the test data, and the monitoring result is obtained by monitoring the operation of the at least one first container in the process that the service in the at least one first container responds to the test data.
For the explanation of steps 1001 to 1005, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 1006, analyzing the monitoring result to determine whether the service in the at least one first container has an abnormal operation in the process of responding to the test data.
In the embodiment of the present disclosure, the monitoring result may be analyzed to determine whether an operation exception occurs in the service in the at least one first container in the process of responding to the test data, for example, determine whether an exception such as a deadlock or a crash occurs in the service in the at least one first container.
Deadlock can be determined from the resources occupied by the service; for example, if the amount of resources occupied by the service is 0, it can be determined that the service is deadlocked. A crash can be determined from an exception crash code.
Step 1007, responding to the service in the at least one first container running abnormally, determining that the service end fails the pressure test.
In the embodiment of the present disclosure, when the service in the at least one first container runs abnormally, it may be determined that the server fails the pressure test, that is, the pressure resistance of the server is low and does not meet expectations.
And step 1008, responding to the condition that the service in the at least one first container does not have the abnormal operation, and determining that the service end passes the pressure test.
In the embodiment of the present disclosure, when the service in the at least one first container runs without abnormality, it may be determined that the server passes the pressure test, that is, the pressure resistance of the server is high and meets expectations.
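For illustration, the pressure-test verdict could be implemented by scanning the monitoring result for the two abnormality signals mentioned above. The record layout and field names in this sketch are hypothetical:

```python
# Sketch of the pressure (stress) test verdict: scan the monitoring records
# for deadlock (occupied resources drop to 0) or crash (an exception crash
# code appears). The record layout is a hypothetical stand-in.

def stress_test_passed(monitor_records: list[dict]) -> bool:
    for rec in monitor_records:
        deadlocked = rec.get("occupied_resources", 1) == 0
        crashed = rec.get("crash_code") is not None
        if deadlocked or crashed:
            return False  # abnormal operation -> server fails the test
    return True           # no abnormality -> server passes the test

records = [{"occupied_resources": 512, "crash_code": None},
           {"occupied_resources": 0, "crash_code": None}]  # deadlock case
print(stress_test_passed(records))  # False
```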
In any embodiment of the present disclosure, the test result may be stored in correspondence with the task identifier. Persisting the test result in this way facilitates subsequent data analysis, so that the server can be optimized according to the analysis result to improve its performance, pressure resistance, and the like.
The testing method of the embodiment of the present disclosure can realize not only performance testing and functional testing of the server but also pressure testing of the server, which improves the flexibility and applicability of the method.
In any embodiment of the present disclosure, the automated testing of the server provided by the present disclosure may include: test project management, automated construction of the test environment, unified scheduling of test tools, cloud storage and loading of test sets, test case hosting, test task execution, monitoring of test environment indexes, persistent storage of test results, calculation of test indexes, and the like.
The overall execution flow may be as shown in fig. 11 and includes two parts. The first part is building the test environment. Container resources here refer to machine resources: because the server-side test is chained, services with different functions are deployed on different machines, and the services and machines are interconnected. Therefore, before the simulated test of the server is carried out, a machine can be virtualized into at least one container when the test environment is built, and different service resources are deployed in the containers.
Extracting the test version means: after applying for the container resources, since the services in a container may have different versions, the latest version of the service can be obtained and deployed in the container.
Resource packaging means: since the latest version of the acquired service is code provided by developers, the code needs to be packaged into a resource package and deployed in the container.
Test environment deployment means: the container can be controlled to decompress the resource package and start the service in the container. A hedged sketch of these three steps follows.
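The sketch below only illustrates the shape of the flow in the first part of fig. 11; every path, command, and helper is a hypothetical stand-in, since the disclosure does not fix a container runtime, a version registry, or a package format:

```python
# Sketch of test environment construction (first part of fig. 11).
# All paths, commands and helpers are hypothetical stand-ins.
import os
import subprocess
import tempfile

def fetch_latest_version(service: str) -> str:
    return "1.0.0"  # stand-in for querying the version registry

def build_test_environment(service: str, container_id: str) -> str:
    # 1. Extract the test version: obtain the latest version of the service.
    version = fetch_latest_version(service)
    # 2. Resource packaging: bundle the service code into a resource package.
    src = tempfile.mkdtemp(prefix=service)  # stand-in for the code directory
    package = os.path.join(tempfile.gettempdir(), f"{service}-{version}.tar.gz")
    subprocess.run(["tar", "-czf", package, "-C", src, "."], check=True)
    # 3. Test environment deployment: the container decompresses the package
    #    and the service in it is started (stand-in command shown).
    print(f"container {container_id}: tar -xzf {package} && start {service}")
    return package

build_test_environment("asr-service", "container-01")
```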
The second part is: testing the server.
The log information refers to the log of the test tool, which records information such as the operation behavior of the test tool and the data packets it receives. That is, if an abnormality occurs during the simulated test of the server, the fault may lie with the server or with the test tool. Whether the test tool is abnormal can be determined from its log information: if the test tool is not abnormal, it is determined that the server is abnormal; if the test tool is abnormal, whether the server is also abnormal needs further investigation.
The test environment container monitoring means that all containers on the server side are monitored to obtain a monitoring result.
It should be noted that fig. 11 only exemplifies the test types of the test tasks as the function test, the performance test and the stress test, but the disclosure is not limited thereto, and in practical applications, the test types may also include an exception test, a stability test and the like.
The execution flow of the scheduling module (share-agent) may be as shown in fig. 12. When the test task is a voice recognition test task, a voice recognition test tool can be adopted to test the server; when the test task is a redis test task, a redis test tool can be adopted to test the service end; when the test task is a link test (such as lc link test) task, an lc link test tool can be used to test the service end; when the test task is an audio/video link test task, an audio/video link test tool (such as a common (comm) test tool) can be used for testing the server.
As shown in fig. 12, the scheduling module mainly includes the following functions:
Parameter checking: checking the task type, the test type, the task identifier, and the query interface;

Container screening (also known as instance screening): determining, according to the task type, a test tool matching the task type; acquiring an idle container in which the test tool is deployed; checking the container state (idle or occupied); and distributing the container request. A minimal sketch of this flow follows.
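In the sketch below, the tool names and the container registry layout are hypothetical; only the flow (parameter checking, then container screening) comes from fig. 12:

```python
# Sketch of the scheduling module (share-agent) flow in fig. 12.
# Tool names and the container registry layout are hypothetical.

TOOL_BY_TASK_TYPE = {
    "speech_recognition": "asr-test-tool",
    "redis": "redis-test-tool",
    "lc_link": "lc-link-test-tool",
    "audio_video_link": "comm-test-tool",
}

def schedule(task: dict, containers: list[dict]) -> dict:
    # Parameter checking: task type, test type and task identifier must be set.
    for key in ("task_type", "test_type", "task_id"):
        if key not in task:
            raise ValueError(f"missing parameter: {key}")
    tool = TOOL_BY_TASK_TYPE[task["task_type"]]
    # Container screening: pick an idle container that deploys the tool.
    for c in containers:
        if c["tool"] == tool and c["state"] == "idle":
            c["state"] = "occupied"  # distribute the container request
            return c
    raise RuntimeError(f"no idle container deploys {tool}")

pool = [{"id": "c1", "tool": "asr-test-tool", "state": "idle"}]
task = {"task_type": "speech_recognition", "test_type": "performance",
        "task_id": "t-001"}
print(schedule(task, pool))  # -> container c1, now marked occupied
```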
Taking the speech recognition test tool shown in fig. 12 as an example, the execution flow of the test tool may be as shown in fig. 13. ClientImpl: the tool interface implementation class, which defines the flow and methods of each test type; AsrBase: the speech recognition base class; AsrFunction: the speech recognition functional test subclass, which inherits AsrBase; AsrPerform: the speech recognition performance test subclass, which inherits AsrBase; AsrStress: the speech recognition stress test subclass, which inherits AsrBase; AsrCore: the speech recognition process control class in streaming mode; ApiCore: the speech recognition process control class in interface mode.
The AsrBase in fig. 13 is configured to load the task parameters according to the task identifier (obtained from the platform), the parameters including the test set identifier, the case identifier, the concurrency number (the number of clients the test tool needs to simulate), and the configured concurrency upper limit corresponding to a container; to load the test set according to the test set identifier (obtained from the redis cache); and to load the test case according to the case identifier (obtained from the case hosting platform).
The AsrCore is used to construct a data sending queue and a data receiving queue, to record timestamps (dotting time), and to exchange data with the server. That is, in the speech recognition scenario the client sends streaming data to the server: it sends voice data and receives the speech recognition result returned by the server. In the test scenario, therefore, streaming data can be sent and received through the constructed sending and receiving queues. A skeleton of these classes is sketched below.
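The skeleton below only mirrors the class relationships in fig. 13; the method bodies and loading sources (platform, redis cache, case hosting platform) are stubbed out and the parameter values are hypothetical:

```python
# Skeleton of the test-tool classes in fig. 13, as a minimal sketch.
# Method bodies are placeholders; loading sources are stubbed out.
import queue

class AsrBase:
    """Speech recognition base class: loads parameters, test set, test case."""
    def __init__(self, task_id: str):
        self.params = self.load_task_params(task_id)  # stub: from the platform
        self.test_set = {"audio": b""}                # stub: from the redis cache
        self.test_case = object()                     # stub: from case hosting

    def load_task_params(self, task_id: str) -> dict:
        return {"test_set_id": "s1", "case_id": "c1",
                "concurrency": 4, "concurrency_limit": 16}

class AsrFunction(AsrBase):  # functional test subclass
    pass

class AsrPerform(AsrBase):   # performance test subclass
    pass

class AsrStress(AsrBase):    # stress test subclass
    pass

class AsrCore:
    """Streaming-mode process control: send/receive queues plus timestamps."""
    def __init__(self):
        self.send_queue: "queue.Queue[bytes]" = queue.Queue()
        self.recv_queue: "queue.Queue[bytes]" = queue.Queue()
        self.timestamps: list[float] = []  # dotting time for the timing metrics

print(AsrPerform("task-1").params["concurrency"])  # -> 4
```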
As an example, the overall architecture of the automated testing platform may be as shown in fig. 14, where the resource overhead indicator refers to the resources occupied by the service in each container, obtained by monitoring each container; for example, the resource overhead indicator may be the second test index.
The role accuracy refers to the accuracy of recognizing which role each part of the audio in the test data belongs to. For example, if the audio contains a dialog between a doctor and a patient, recognizing the test data can determine which parts of the audio belong to the doctor and which belong to the patient, and the role recognition accuracy can be determined by comparing the recognition result with the annotation information on the test data.
The confidence indicator performs a confidence judgment on the test data (such as audio): it judges whether the test data is a human-machine interaction dialog or noise data (noise data requires no response), and the confidence is determined from the judgment result and the annotation information on the test data. For example, if the judgment result is that the test data is a human-machine interaction dialog while the annotation says it is noise data, the confidence is low; if both the judgment and the annotation say it is a human-machine interaction dialog, the confidence is high.
The spoken language indicator is obtained by calculating how standard the pronunciation in the test data (such as audio) is, yielding a first calculated value that indicates the standard degree of pronunciation, and then determining the indicator from this first value and a second calculated value, marked on the test data, that also indicates the standard degree of pronunciation. For example, the spoken language indicator may be determined from the difference between the first and second calculated values, the indicator being negatively correlated with the difference.
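A minimal sketch of that negative correlation follows; the linear form and the clamping to [0, 1] are assumptions, since the disclosure only fixes the direction of the relationship:

```python
# Sketch of the spoken language indicator: negatively correlated with the
# gap between the computed pronunciation score and the annotated one.
# The linear form below is an assumption.

def spoken_indicator(first_value: float, second_value: float) -> float:
    """first_value: computed pronunciation score; second_value: annotated."""
    diff = abs(first_value - second_value)
    return max(0.0, 1.0 - diff)  # larger gap -> smaller indicator

print(spoken_indicator(0.92, 0.95))  # small gap -> high indicator
```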
Other characteristic indexes refer to indexes other than those mentioned above, such as time-consumption indexes, which are denoted as the third test index in the present disclosure.
Based on the architecture shown in fig. 14 and taking the test data as audio, an automated test framework may be as shown in fig. 15. Service index collection refers to indexes returned by the server, such as the role indexes; container resource monitoring and processing indexes are actively acquired indexes, such as the resource overhead index determined from the monitoring result; the characteristic indexes may include link-related indexes, such as QPS (queries per second).
It should be noted that fig. 15 only exemplifies the test types of the test tasks as the function test, the performance test and the stress test, but the present disclosure is not limited thereto, and the test types may also include an exception test, a stability test, and the like in an actual application.
Ws in fig. 15 is short for websocket (a protocol for full-duplex communication over a single TCP (Transmission Control Protocol) connection), and qnet refers to a weak network.
In summary, the test method provided by the present disclosure has the following advantages:
1. The test automation level can be upgraded from level L2 (partial automation) to level L3 (limited automation), and underlying services are provided for realizing level L4 (full automation). For example, for server-side testing of speech recognition, speech synthesis, natural language processing, security, and the like, the automation level upgrade can be implemented.
2. Multi-service support: the platform design enables quick access for different services.
3. Lowering the test threshold: unified agent scheduling, unified version management, and platform presentation of the agent test functions realize low-threshold service testing. The test process is standardized, the test threshold is greatly lowered, operating costs are reduced, and usability is improved.
4. Fast data loading: cloud storage and fast reading of data are realized through a two-level cache mechanism of redis plus cloud storage (see the sketch after this list).
5. Test result persistence: test results are stored in the database, which facilitates data analysis. The test cycle is clear, the test flow and projects are controlled, test results are persisted, and data analysis is easy.
6. Modular design: in the original test scheme, the test tool integrated the test cases, the test sets, and the index calculation, so the coupling between modules was high and iteration was difficult. In the scheme provided by the present disclosure, the test tool is only responsible for executing tests, the test sets and test cases are managed by the platform, and test index calculation is handled by the platform's index calculation module, which greatly reduces the coupling.
7. The test process is modular, services are easy to access, the test tool is lightweight, and the test engineering is cloud-based.
8. Dynamic scheduling of test resources saves resource costs (see the sketch after this list). For example, when the concurrency number is small, the server can be tested through one second container; when the concurrency number is large, the server can be tested through a plurality of second containers working in concert. For another example, when the service in the first container completes its test, part of the resources can be released dynamically: the service may release resources if it receives no test data within a period of time, and a second container may release its resources if they remain unused within a period of time.
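As hedged sketches of points 4 and 8 above (client objects, field names, and limits are hypothetical stand-ins, not a specific SDK or scheduler):

```python
# Point 4: two-level cache loading of a test set. Try the redis cache first,
# fall back to cloud storage and backfill redis. In-memory stand-ins keep
# the sketch self-contained.

class DictRedis:                 # stand-in for a redis client
    def __init__(self): self.store = {}
    def get(self, key): return self.store.get(key)
    def set(self, key, value, ex=None): self.store[key] = value

class DictCloud:                 # stand-in for a cloud-storage client
    def download(self, key): return b"test-set-bytes"

class TwoLevelTestSetLoader:
    def __init__(self, redis_client, cloud_client, ttl_seconds=3600):
        self.redis, self.cloud, self.ttl = redis_client, cloud_client, ttl_seconds

    def load(self, test_set_id: str) -> bytes:
        data = self.redis.get(test_set_id)           # level 1: redis cache
        if data is None:
            data = self.cloud.download(test_set_id)  # level 2: cloud storage
            self.redis.set(test_set_id, data, ex=self.ttl)  # backfill redis
        return data

loader = TwoLevelTestSetLoader(DictRedis(), DictCloud())
print(loader.load("s1"))  # first load hits the cloud; later loads hit redis
```

```python
# Point 8: the number of second containers scales with the requested
# concurrency, given each container's configured concurrency upper limit.
import math

def second_containers_needed(concurrency: int, per_container_limit: int) -> int:
    return max(1, math.ceil(concurrency / per_container_limit))

print(second_containers_needed(10, 16))   # small load -> 1 container
print(second_containers_needed(100, 16))  # large load -> 7 containers
```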
Corresponding to the test method provided in the embodiment of fig. 4 to 10, the present disclosure also provides a test apparatus, and since the test apparatus provided in the embodiment of the present disclosure corresponds to the test method provided in the embodiment of fig. 4 to 10, the implementation of the test method is also applicable to the test apparatus provided in the embodiment of the present disclosure, and will not be described in detail in the embodiment of the present disclosure.
Fig. 16 is a schematic structural diagram of a testing apparatus according to an eighth embodiment of the present disclosure.
As shown in fig. 16, the test apparatus 1600 may include: a first obtaining module 1601, a loading module 1602, a sending module 1603, and a second obtaining module 1604.
The first obtaining module 1601 is configured to obtain a task to be tested, and obtain a task parameter associated with a task identifier according to the task identifier of the task to be tested, where the task parameter includes a test case identifier and a test set identifier.
And a loading module 1602, configured to load the target test case matching the test case identifier according to the test case identifier, and load the target test set matching the test set identifier according to the test set identifier.
The sending module 1603 is configured to send, based on the target test case, the test data in the target test set to at least one first container in which a service of the server is deployed.
The second obtaining module 1604 is configured to obtain a test result obtained by performing a target test matched with the task to be tested on the service in the at least one first container by using the test data.
In a possible implementation manner of the embodiment of the present disclosure, the sending module 1603 may include:
and the first determining unit is used for determining a target testing tool matched with the task type according to the task type of the task to be tested.
And the second determining unit is used for determining a second container from the candidate containers according to the running state of the candidate containers for deploying the target testing tool.
And the construction unit is used for calling a target test tool in the second container to construct a test scene matched with the test type of the task to be tested according to the target test case.
And the sending unit is used for calling the target testing tool in the second container to send the testing data in the target testing set to at least one first container in the testing scene.
In a possible implementation manner of the embodiment of the present disclosure, the construction unit is specifically configured to: acquiring configuration parameters from a target test case; and calling a target testing tool in the second container to construct a testing scene matched with the testing type of the task to be tested according to the configuration parameters.
In a possible implementation manner of the embodiment of the present disclosure, the second determining unit is specifically configured to: determining a third container from each candidate container according to the running state of the candidate container for deploying the target testing tool, wherein the running state of the third container is an idle state; the second container is determined from the third containers according to the type of test.
In a possible implementation manner of the embodiment of the present disclosure, the task to be tested is associated with at least one tested task, and the second determining unit is specifically configured to: determining a third container from each candidate container according to the running state of the candidate container for deploying the target test tool, wherein the running state of the third container is an idle state; and determining the second container called by the tested task from the third containers.
In a possible implementation manner of the embodiment of the present disclosure, the second obtaining module 1604 is specifically configured to: receiving, by the second container, a response result sent by the at least one first container, wherein the response result is generated by the service in the at least one first container in response to the test data; in the process that the service in the at least one first container responds to the test data, monitoring the operation of the at least one first container to obtain a monitoring result; and taking the response result and the monitoring result as a test result.
In a possible implementation manner of the embodiment of the present disclosure, the service in the at least one first container is used to perform a first-stage processing on the test data, and the test type is a first-stage performance test, where the test apparatus 1600 may further include:
and the first execution module is used for calling the target test tool in the second container to execute the target test case so as to determine a first difference between the response result of the first stage and the expected result of the first stage marked on the test data.
And the first determining module is used for determining a first test index corresponding to the performance test of the first stage according to the first difference.
And the first analysis module is used for analyzing the monitoring result in the first stage so as to determine resources occupied by the service response test data in at least one first container.
And the second determining module is used for determining a second test index corresponding to the performance test in the first stage according to the resources occupied by the service in the at least one first container.
And the third determining module is used for determining the performance of the server at the first stage according to the first test index and the second test index.
In a possible implementation manner of the embodiment of the present disclosure, the third determining module is specifically configured to: determine the first-packet response time, the total execution duration, and the hard delay of the test data, where the first-packet response time is the duration from a first time to a second time, the total execution duration is the duration from the first time to a third time, and the hard delay is the duration from a fourth time to the third time; the first time is when the target test tool in the second container sends the first data packet of the test data, the second time is when the second container receives the first data packet of the first-stage response result, the third time is when the second container receives the last data packet of the first-stage response result, and the fourth time is when the target test tool in the second container sends the last data packet of the test data; determine, according to at least one of the first-packet response time, the total execution duration, and the hard delay, a third test index corresponding to the first-stage performance test; and determine the performance of the server in the first stage according to the first test index, the second test index, and the third test index.
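For illustration, the three timing metrics follow directly from the four times defined above; the function name and the epoch-seconds representation are assumptions:

```python
# Sketch of the three timing metrics used by the third determining module,
# computed from the four times defined above (all as epoch seconds).

def timing_metrics(first_time: float, second_time: float,
                   third_time: float, fourth_time: float) -> dict:
    return {
        # from sending the first test packet to receiving the first packet
        # of the first-stage response result
        "first_packet_response_time": second_time - first_time,
        # from sending the first test packet to receiving the last packet
        # of the first-stage response result
        "total_execution_duration": third_time - first_time,
        # from sending the last test packet to receiving the last packet
        # of the first-stage response result
        "hard_delay": third_time - fourth_time,
    }

print(timing_metrics(0.0, 0.12, 1.80, 1.75))
```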
In a possible implementation manner of the embodiment of the present disclosure, the service in the at least one first container is used to perform the second-stage processing on the test data, and the test type is a performance test of the second stage, and the test apparatus 1600 may further include:
and the second execution module is used for calling the target test tool in the second container to execute the target test case so as to determine a second difference between the response result of the second stage and the expected result of the second stage marked on the test data.
And the second analysis module is used for analyzing the monitoring result of the second stage so as to determine the first function called by the service in the at least one first container in response to the test data.
A fourth determining module for determining a third difference between the first function and a second function of a second stage noted on the test data.
And the fifth determining module is used for determining the performance of the server at the second stage according to the second difference and/or the third difference.
In a possible implementation manner of the embodiment of the present disclosure, the test type is a pressure test, and the test apparatus 1600 may further include:
and the third analysis module is used for analyzing the monitoring result so as to determine whether the service in the at least one first container has abnormal operation in the process of responding to the test data.
And the sixth determining module is used for determining that the service end fails the pressure test in response to the abnormal operation of the service in the at least one first container.
And the seventh determining module is used for responding to the condition that the service in the at least one first container does not have the abnormal operation, and determining that the service end passes the pressure test.
In a possible implementation manner of the embodiment of the present disclosure, the test apparatus 1600 may further include:
and the storage module is used for correspondingly storing the test result and the task identifier.
According to the testing device of the embodiment of the present disclosure, the task parameters associated with the task identifier are acquired according to the task identifier of the task to be tested; the target test case is loaded according to the test case identifier in the task parameters; the target test set is loaded according to the test set identifier in the task parameters; based on the target test case, the test data in the target test set is sent to at least one first container in which a service of the server is deployed; and a test result is obtained by performing, with the test data, the target test matched with the task to be tested on the service in the at least one first container. Since the test sets and test cases corresponding to different test tasks may differ, the target test set and the target test case matched with the task to be tested are obtained when the server undergoes the target test, and the target test is performed accordingly on the service deployed in the first container. This makes the test targeted and reliable, so that the performance, pressure resistance, and the like of the server can be determined accurately and reliably from the test result.
To implement the above embodiments, the present disclosure also provides an electronic device, which may include at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the testing method according to any of the above embodiments of the disclosure.
In order to achieve the above embodiments, the present disclosure also provides a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the testing method proposed in any one of the above embodiments of the present disclosure.
In order to implement the above embodiments, the present disclosure also provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the testing method proposed by any of the above embodiments of the present disclosure.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 17 shows a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 17, the electronic apparatus 1700 includes a computing unit 1701 that can perform various appropriate actions and processes in accordance with a computer program stored in a ROM (Read-Only Memory) 1702 or a computer program loaded from a storage unit 1708 into a RAM (Random Access Memory) 1703. In the RAM 1703, various programs and data necessary for the operation of the electronic apparatus 1700 can also be stored. The computing unit 1701, the ROM 1702, and the RAM 1703 are connected to each other through a bus 1704. An I/O (Input/Output) interface 1705 is also connected to the bus 1704.
Various components in the electronic device 1700 are connected to the I/O interface 1705, including: an input unit 1706 such as a keyboard, a mouse, and the like; an output unit 1707 such as various types of displays, speakers, and the like; a storage unit 1708 such as a magnetic disk, optical disk, or the like; and a communication unit 1709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1709 allows the electronic device 1700 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 1701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing Unit 1701 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphic Processing Units), various dedicated AI (Artificial Intelligence) computing chips, various computing Units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable Processor, controller, microcontroller, and the like. The computing unit 1701 executes various methods and processes described above, such as the test method described above. For example, in some embodiments, the testing methods described above may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1708. In some embodiments, part or all of a computer program may be loaded and/or installed onto the electronic device 1700 via the ROM 1702 and/or the communication unit 1709. When the computer program is loaded into RAM 1703 and executed by computing unit 1701, one or more steps of the testing method described above may be performed. Alternatively, in other embodiments, the computing unit 1701 may be configured to perform the above-described test method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be realized in digital electronic circuitry, integrated circuitry, FPGAs (Field Programmable Gate arrays), ASICs (Application-Specific Integrated circuits), ASSPs (Application Specific Standard products), SOCs (System On Chip, system On a Chip), CPLDs (Complex Programmable Logic devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Electrically Programmable Read-Only-Memory) or flash Memory, an optical fiber, a CD-ROM (Compact Disc Read-Only-Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a Display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network), WAN (Wide Area Network), internet, and blockchain Network.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server may be a cloud Server, which is also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in a conventional physical host and a VPS (Virtual Private Server). The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is a subject for studying a computer to simulate some human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, planning, etc.), and includes both hardware and software technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; the artificial intelligence software technology mainly comprises a computer vision technology, a voice recognition technology, a natural language processing technology, machine learning/deep learning, a big data processing technology, a knowledge map technology and the like.
Cloud computing (cloud computing) refers to a technology architecture that accesses a flexibly extensible shared physical or virtual resource pool through a network, where resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed in an on-demand, self-service manner. Through the cloud computing technology, high-efficiency and strong data processing capacity can be provided for technical application and model training of artificial intelligence, block chains and the like.
According to the technical scheme of the embodiment of the present disclosure, the task parameters associated with the task identifier are acquired according to the task identifier of the task to be tested; the target test case is loaded according to the test case identifier in the task parameters; the target test set is loaded according to the test set identifier in the task parameters; based on the target test case, the test data in the target test set is sent to at least one first container in which a service of the server is deployed; and a test result is obtained by performing, with the test data, the target test matched with the task to be tested on the service in the at least one first container. Since the test sets and test cases corresponding to different test tasks may differ, the target test set and the target test case matched with the task to be tested are obtained when the server undergoes the target test, and the target test is performed accordingly on the service deployed in the first container. This makes the test targeted and reliable, so that the performance, pressure resistance, and the like of the server can be determined accurately and reliably from the test result.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions proposed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (25)

1. A method of testing, comprising:
acquiring a task to be tested, and acquiring a task parameter associated with a task identifier according to the task identifier of the task to be tested, wherein the task parameter comprises a test case identifier and a test set identifier;
loading a target test case matched with the test case identifier according to the test case identifier;
loading a target test set matched with the test set identification according to the test set identification;
sending, based on the target test case, the test data in the target test set to at least one first container in which a service of the server is deployed;
and obtaining a test result obtained by performing target test matched with the task to be tested on the service in the at least one first container by adopting the test data.
2. The method of claim 1, wherein the sending, based on the target test case, the test data in the target test set to the at least one first container in which a service of the server is deployed comprises:
determining a target testing tool matched with the task type according to the task type of the task to be tested;
determining a second container from each of the candidate containers according to the operating status of the candidate container in which the target test tool is deployed;
calling a target test tool in the second container to construct a test scene matched with the test type of the task to be tested according to the target test case;
and under the test scene, calling a target test tool in the second container to send the test data in the target test set to the at least one first container.
3. The method of claim 2, wherein the invoking of the target testing tool in the second container constructs a testing scenario matching the testing type of the task to be tested according to the target testing case, comprising:
acquiring configuration parameters from the target test case;
and calling a target testing tool in the second container to construct a testing scene matched with the testing type of the task to be tested according to the configuration parameters.
4. The method of claim 2, wherein said determining a second container from each of said candidate containers based on an operational status of the candidate container in which said target test tool is deployed comprises:
determining a third container from each candidate container according to the running state of the candidate container for deploying the target testing tool, wherein the running state of the third container is an idle state;
determining the second container from each of the third containers according to the test type.
5. The method of claim 2, wherein the task to be tested is associated with at least one tested task, and wherein the determining a second container from each of the candidate containers according to an operating state of the candidate containers in which the target test tool is deployed comprises:
determining a third container from each candidate container according to the running state of the candidate container for deploying the target testing tool, wherein the running state of the third container is an idle state;
and determining a second container called by the tested task from each third container.
6. The method of claim 2, wherein the obtaining a test result obtained by performing, with the test data, a target test matched with the task to be tested on the service in the at least one first container comprises:
receiving, by the second container, a response result sent by the at least one first container, wherein the response result is generated by a service in the at least one first container in response to the test data;
in the process that the service in the at least one first container responds to the test data, performing operation monitoring on the at least one first container to obtain a monitoring result;
and taking the response result and the monitoring result as the test result.
7. The method of claim 6, wherein the service in the at least one first container is used for a first phase of processing the test data, the test type being a performance test of the first phase, the method further comprising:
invoking a target test tool in the second container to execute the target test case to determine a first difference between the response result of the first stage and an expected result of the first stage marked on the test data;
determining a first test index corresponding to the performance test of the first stage according to the first difference;
analyzing the monitoring result of the first stage to determine the resources occupied by the service in the at least one first container responding to the test data;
determining a second test index corresponding to the performance test of the first stage according to the resources occupied by the service in the at least one first container;
and determining the performance of the server at the first stage according to the first test index and the second test index.
8. The method of claim 7, wherein the determining the performance of the server at the first stage according to the first test metric and the second test metric comprises:
determining the first-packet response time, the total execution duration, and the hard delay of the test data, wherein the first-packet response time is the duration from a first time to a second time, the total execution duration is the duration from the first time to a third time, and the hard delay is the duration from a fourth time to the third time; the first time is the time when the target test tool in the second container sends the first data packet of the test data, the second time is the time when the second container receives the first data packet of the response result of the first stage, the third time is the time when the second container receives the last data packet of the response result of the first stage, and the fourth time is the time when the target test tool in the second container sends the last data packet of the test data;

determining a third test index corresponding to the performance test of the first stage according to at least one of the first-packet response time of the test data, the total execution duration, and the hard delay;
and determining the performance of the server at the first stage according to the first test index, the second test index and the third test index.
9. The method of claim 6, wherein the service in the at least one first container is used for a second stage of processing the test data, the test type being a performance test of the second stage, the method further comprising:
invoking a target testing tool in the second container to execute the target test case to determine a second difference between the response result of the second stage and the expected result of the second stage marked on the test data;
analyzing the monitoring result of the second stage to determine a first function called by the service in the at least one first container in response to the test data;
determining a third difference between the first function and a second function of the second stage noted on the test data;
and determining the performance of the server at the second stage according to the second difference and/or the third difference.
10. The method of claim 6, wherein the test type is a stress test, the method further comprising:
analyzing the monitoring result to determine whether the service in the at least one first container has abnormal operation in the process of responding to the test data;
responding to the service in the at least one first container to generate an abnormal operation, and determining that the service end fails the pressure test;
and in response to the service in the at least one first container not having an abnormal operation, determining that the service end passes the pressure test.
11. The method according to any one of claims 1-10, wherein the method further comprises:
and correspondingly storing the test result and the task identifier.
12. A test apparatus, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a task to be tested and acquiring task parameters related to a task identifier according to the task identifier of the task to be tested, and the task parameters comprise a test case identifier and a test set identifier;
the loading module is used for loading a target test case matched with the test case identifier according to the test case identifier and loading a target test set matched with the test set identifier according to the test set identifier;
a sending module, configured to send, based on the target test case, the test data in the target test set to at least one first container in which a service of the server is deployed;
and the second acquisition module is used for acquiring a test result obtained by performing target test matched with the task to be tested on the service in the at least one first container by adopting the test data.
13. The apparatus of claim 12, wherein the means for transmitting comprises:
the first determining unit is used for determining a target testing tool matched with the task type according to the task type of the task to be tested;
a second determination unit configured to determine a second container from each of the candidate containers in accordance with an operation state of the candidate container in which the target test tool is deployed;
the construction unit is used for calling a target test tool in the second container to construct a test scene matched with the test type of the task to be tested according to the target test case;
and the sending unit is used for calling a target test tool in the second container to send the test data in the target test set to the at least one first container in the test scene.
14. The apparatus according to claim 13, wherein the building unit is specifically configured to:
acquiring configuration parameters from the target test case;
and calling a target testing tool in the second container to construct a testing scene matched with the testing type of the task to be tested according to the configuration parameters.
15. The apparatus according to claim 13, wherein the second determining unit is specifically configured to:
determining a third container from each candidate container according to the running state of the candidate container for deploying the target testing tool, wherein the running state of the third container is an idle state;
determining the second container from each of the third containers according to the test type.
16. The apparatus of claim 13, wherein the task to be tested is associated with at least one tested task, and the second determining unit is specifically configured to:
determine a third container from the candidate containers according to the running state of each candidate container in which the target test tool is deployed, wherein the running state of the third container is an idle state;
and determine, from each third container, the second container invoked by the at least one tested task.
17. The apparatus according to claim 13, wherein the second acquisition module is specifically configured to:
receive, through the second container, a response result sent by the at least one first container, wherein the response result is generated by the service in the at least one first container in response to the test data;
monitor, while the service in the at least one first container responds to the test data, the operation of the at least one first container to obtain a monitoring result;
and take the response result and the monitoring result as the test result.
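Claim 17 pairs two streams into one test result: the responses relayed through the second container and the monitoring samples taken while the first containers work. A small sketch of that pairing; the sample shape is an assumption.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class TestResult:
    responses: List[Any] = field(default_factory=list)    # response result
    monitoring: List[dict] = field(default_factory=list)  # monitoring result

def record(result: TestResult, response: Any, sample: dict) -> None:
    result.responses.append(response)   # sent back by a first container
    result.monitoring.append(sample)    # e.g. {"cpu": 0.42, "mem_mb": 512}
```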
18. The apparatus of claim 17, wherein the service in the at least one first container is configured to perform a first stage of processing on the test data, the test type being a performance test of the first stage, the apparatus further comprising:
a first execution module, configured to invoke the target test tool in the second container to execute the target test case, so as to determine a first difference between the response result of the first stage and an expected result of the first stage marked on the test data;
a first determining module, configured to determine, according to the first difference, a first test index corresponding to the performance test of the first stage;
a first analysis module, configured to analyze the monitoring result of the first stage to determine resources occupied by the service in the at least one first container in response to the test data;
a second determining module, configured to determine, according to the resources occupied by the service in the at least one first container, a second test index corresponding to the performance test of the first stage;
and a third determining module, configured to determine the performance of the server at the first stage according to the first test index and the second test index.
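One way claim 18's two first-stage indices could be computed. The accuracy rule, the resource-budget rule, and the unweighted average combining them are illustrative assumptions; the patent fixes no formulas.

```python
from typing import List, Sequence

def first_test_index(responses: Sequence[str],
                     expected: Sequence[str]) -> float:
    # First difference -> first test index: here, plain accuracy.
    hits = sum(r == e for r, e in zip(responses, expected))
    return hits / max(len(expected), 1)

def second_test_index(samples: List[dict], cpu_budget: float = 0.8) -> float:
    # Resource occupancy -> second test index: share of samples within budget.
    ok = sum(s["cpu"] <= cpu_budget for s in samples)
    return ok / max(len(samples), 1)

def first_stage_performance(responses, expected, samples) -> float:
    # Combine both indices; an unweighted mean is an assumption.
    return (0.5 * first_test_index(responses, expected)
            + 0.5 * second_test_index(samples))
```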
19. The apparatus of claim 18, wherein the third determining module is specifically configured to:
determine a first packet response time, a total execution duration and a hard delay of the test data; wherein the first packet response time is the duration between a first time and a second time, the total execution duration is the duration between the first time and a third time, and the hard delay is the duration between a fourth time and the third time; the first time is the time at which the target test tool in the second container sends the first data packet of the test data, the second time is the time at which the second container receives the first data packet of the response result of the first stage, the third time is the time at which the second container receives the last data packet of the response result of the first stage, and the fourth time is the time at which the target test tool in the second container sends the last data packet of the test data;
determine a third test index corresponding to the performance test of the first stage according to at least one of the first packet response time, the total execution duration and the hard delay;
and determine the performance of the server at the first stage according to the first test index, the second test index and the third test index.
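Claim 19's three timing metrics follow directly from its four timestamps, so they translate into code almost verbatim; only the variable names t1-t4 and the example numbers are additions.

```python
from dataclasses import dataclass

@dataclass
class StageTimings:
    t1: float  # target test tool in second container sends first test packet
    t2: float  # second container receives first packet of the response
    t3: float  # second container receives last packet of the response
    t4: float  # target test tool in second container sends last test packet

    @property
    def first_packet_response_time(self) -> float:
        return self.t2 - self.t1

    @property
    def total_execution_duration(self) -> float:
        return self.t3 - self.t1

    @property
    def hard_delay(self) -> float:
        return self.t3 - self.t4

# Example: a 2.0 s streaming request whose reply starts after 0.3 s and
# finishes 0.4 s after the last request packet was sent.
t = StageTimings(t1=0.0, t2=0.3, t3=2.4, t4=2.0)
assert abs(t.first_packet_response_time - 0.3) < 1e-9
assert abs(t.total_execution_duration - 2.4) < 1e-9
assert abs(t.hard_delay - 0.4) < 1e-9
```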
20. The apparatus of claim 17, wherein the service in the at least one first container is configured to perform a second stage of processing on the test data, and the test type is a performance test of the second stage, the apparatus further comprising:
a second execution module, configured to invoke the target test tool in the second container to execute the target test case, so as to determine a second difference between the response result of the second stage and an expected result of the second stage marked on the test data;
a second analysis module, configured to analyze the monitoring result of the second stage to determine a first function called by the service in the at least one first container in response to the test data;
a fourth determining module, configured to determine a third difference between the first function and a second function of the second stage marked on the test data;
and a fifth determining module, configured to determine, according to the second difference and/or the third difference, the performance of the server at the second stage.
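A sketch of claim 20's function check, assuming the second-stage monitoring result can be reduced to the set of functions the service actually called; using the symmetric set difference for the "third difference" is an illustrative choice, not the patent's definition.

```python
from typing import Sequence, Set

def third_difference(called: Set[str], expected: Set[str]) -> Set[str]:
    # Functions expected but never called, plus functions called unexpectedly.
    return called ^ expected

def second_stage_passed(second_difference: Sequence[str],
                        called: Set[str], expected: Set[str]) -> bool:
    return not second_difference and not third_difference(called, expected)
```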
21. The apparatus of claim 17, wherein the test type is a pressure test, the apparatus further comprising:
a third analysis module, configured to analyze the monitoring result to determine whether the service in the at least one first container has abnormal operation while responding to the test data;
a sixth determining module, configured to determine, in response to the service in the at least one first container having abnormal operation, that the server fails the pressure test;
and a seventh determining module, configured to determine, in response to the service in the at least one first container not having abnormal operation, that the server passes the pressure test.
22. The apparatus of any one of claims 12-21, wherein the apparatus further comprises:
and a storage module, configured to store the test result in correspondence with the task identifier.
23. An electronic device, wherein the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-11.
CN202210999579.1A 2022-08-19 2022-08-19 Test method, test device, electronic equipment and storage medium Pending CN115357493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210999579.1A CN115357493A (en) 2022-08-19 2022-08-19 Test method, test device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210999579.1A CN115357493A (en) 2022-08-19 2022-08-19 Test method, test device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115357493A true CN115357493A (en) 2022-11-18

Family

ID=84001912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210999579.1A Pending CN115357493A (en) 2022-08-19 2022-08-19 Test method, test device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115357493A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117389813A (en) * 2023-11-07 2024-01-12 中科驭数(北京)科技有限公司 RDMA test method, RDMA test device, electronic equipment and computer storage medium


Similar Documents

Publication Publication Date Title
EP3859533A2 (en) Method and apparatus for testing map service, electronic device, storage medium and computer program product
CN111309343B (en) Development deployment method and device
US10949218B2 (en) Generating an execution script for configuration of a system
CN109977012B (en) Joint debugging test method, device, equipment and computer readable storage medium of system
CN111966465B (en) Method, system, equipment and medium for modifying host configuration parameters in real time
EP3920500A1 (en) Method and apparatus for verifying operation state of application
CN112199355B (en) Data migration method and device, electronic equipment and storage medium
CN114564374A (en) Operator performance evaluation method and device, electronic equipment and storage medium
CN114328132A (en) Method, device, equipment and medium for monitoring state of external data source
CN115357493A (en) Test method, test device, electronic equipment and storage medium
CN112416747A (en) Test case execution method, device, equipment and medium
CN114116487B (en) Pressure testing method and device, electronic equipment and storage medium
CN113535560B (en) Test execution method, device, storage medium and computing equipment
CN115391204A (en) Test method and device for automatic driving service, electronic equipment and storage medium
CN115328891A (en) Data migration method and device, storage medium and electronic equipment
CN115050396A (en) Test method and device, electronic device and medium
CN114756301A (en) Log processing method, device and system
CN114185641A (en) Virtual machine cold migration method and device, electronic equipment and storage medium
CN113079046A (en) Data access method and device, electronic equipment and medium
CN112579402A (en) Method and device for positioning faults of application system
CN113961405B (en) State switching instruction verification method and device, electronic equipment and storage medium
CN112416744A (en) Test control system, method and equipment
CN113568797B (en) Testing method and device of intelligent interaction system, electronic equipment and medium
CN113836291B (en) Data processing method, device, equipment and storage medium
CN114566148B (en) Cluster voice recognition service, detection method and device thereof and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination