CN114185765A - Test data processing method and device, electronic equipment and storage medium - Google Patents

Test data processing method and device, electronic equipment and storage medium

Info

Publication number
CN114185765A
CN114185765A
Authority
CN
China
Prior art keywords
data
target
test
parameter
test data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111153287.8A
Other languages
Chinese (zh)
Inventor
黄云鹏
Current Assignee
58tongcheng Information Technology Co ltd
Original Assignee
58tongcheng Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by 58tongcheng Information Technology Co ltd
Priority to CN202111153287.8A
Publication of CN114185765A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; synchronisation
    • G06F 13/4204 Bus transfer protocol on a parallel bus
    • G06F 13/4221 Bus transfer protocol on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; error correction; monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a test data processing method and apparatus, an electronic device, and a storage medium, relating to the field of computer technology. The method comprises: acquiring target test data of a system under test, the target test data comprising a plurality of interface parameters of a target interface of the system under test and a test value for each interface parameter; performing feature extraction on the target test data to obtain feature data, the feature data reflecting whether the target test data contains an interface parameter that requires parameter verification and whose test value fails the verification; and inputting the feature data into a test result prediction model to obtain an expected test result, the expected test result indicating whether the target test data is a normal case. The invention improves the efficiency of test data generation to a certain extent.

Description

Test data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a test data processing method and apparatus, an electronic device, and a storage medium.
Background
Most software products today are composed of a number of different systems. Systems, or modules within a system, must exchange large amounts of data through interfaces to realize a product's functions. Interface testing, which verifies the ability of interfaces to exchange, transfer, and control data, is therefore essential for validating a software product. Interface testing requires a large amount of test data. Test data comprises the data to be input into the system under test and marking information that indicates the data's use case type. The use case types include a normal case and an abnormal case: testing the system under test with the test data of a normal case yields a normal actual test result, and testing it with the test data of an abnormal case yields an abnormal actual test result. A tester inputs the test data into the system under test, checks the actual test result against the result type indicated by the marking information of the test data, and judges from the check whether the system under test behaves normally.
A complete test data generation process currently proceeds as follows: test data for input into the system under test is written manually according to the test requirements, the test result of the test data is predicted manually, and the test data is then marked with the manually predicted test result. Because this method relies chiefly on manual work, its test data generation efficiency is low.
Disclosure of Invention
In view of this, the present application provides a test data processing method and apparatus, an electronic device, and a storage medium, which improve the efficiency of test data generation to a certain extent.
According to a first aspect of the present application, there is provided a test data processing method, the method comprising:
acquiring target test data of a system under test, wherein the target test data comprises: a plurality of interface parameters of a target interface of the system under test and a test value for each interface parameter;
performing feature extraction on the target test data to obtain feature data, wherein the feature data reflects whether the target test data contains an interface parameter that requires parameter verification and whose test value fails the verification;
and inputting the feature data into a test result prediction model to obtain an expected test result, wherein the expected test result indicates whether the target test data is a normal case.
Optionally, the performing feature extraction on the target test data to obtain feature data comprises:
for any target parameter verification among multiple parameter verifications, generating first data corresponding to the target parameter verification when it is determined that no target interface parameter requiring the target parameter verification exists in the target test data;
when it is determined that a target interface parameter requiring the target parameter verification does exist in the target test data, matching the target test value of the target interface parameter against the verification condition of the target parameter verification;
when the target test value satisfies the verification condition, generating the first data corresponding to the target parameter verification;
when the target test value does not satisfy the verification condition, generating second data corresponding to the target parameter verification;
and generating the feature data, wherein the feature data comprises the data generated for each parameter verification.
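The first-data/second-data scheme above amounts to building one binary feature per parameter verification. A minimal Python sketch, assuming an illustrative dictionary shape for test data and checks (the names and data shapes here are not from the patent):

```python
def extract_features(test_data, checks):
    """Build one feature per parameter verification: 0 = "first data" (no
    parameter needs the check, or every value that needs it passes),
    1 = "second data" (some value that needs the check fails it).

    test_data: {param_name: {"value": ..., "constraints": {...}}}
    checks: list of (constraint_key, predicate(value, setting)) pairs.
    """
    features = []
    for key, passes in checks:
        # Parameters whose constraint information includes this check's item
        targets = [p for p in test_data.values() if key in p["constraints"]]
        if not targets:
            features.append(0)  # first data: no parameter needs this check
        elif all(passes(p["value"], p["constraints"][key]) for p in targets):
            features.append(0)  # first data: every target test value passes
        else:
            features.append(1)  # second data: some target test value fails
    return features

checks = [
    ("not_null", lambda v, _: v not in (None, "")),
    ("length_range", lambda v, r: r[0] <= len(str(v)) <= r[1]),
]
data = {
    "name": {"value": "", "constraints": {"not_null": True}},
    "city": {"value": "abc", "constraints": {"length_range": (1, 50)}},
}
print(extract_features(data, checks))  # [1, 0]
```

The resulting vector is what would be fed to the test result prediction model.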
Optionally, before the performing feature extraction on the target test data to obtain the feature data, the method further comprises:
acquiring test information of the system under test, wherein the test information comprises constraint information for each interface parameter, and the constraint information comprises at least one of the following: the parameter type, the parameter length range, whether the parameter is a mandatory parameter, whether the parameter value is allowed to be empty, whether the parameter value must be unique, and whether the parameter requires a strong validity check; the verification condition of a parameter verification is generated based on whether the constraint information is satisfied;
the determining that no target interface parameter requiring the target parameter verification exists in the target test data comprises:
when the constraint information of each interface parameter does not include the constraint information corresponding to the target parameter verification, determining that no target interface parameter requiring the target parameter verification exists in the target test data.
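Under this implementation, the existence judgment reduces to a membership test over constraint information. A hypothetical sketch (data shape is illustrative, not from the patent):

```python
def has_target_parameter(test_data, target_constraint_key):
    # A target interface parameter exists iff some parameter's constraint
    # information includes the constraint item corresponding to the
    # target parameter verification.
    # test_data: {param_name: {"constraints": {...}, ...}}
    return any(target_constraint_key in p["constraints"]
               for p in test_data.values())

sample = {"name": {"constraints": {"not_null": True}}}
print(has_target_parameter(sample, "not_null"))  # True
print(has_target_parameter(sample, "unique"))    # False
```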
Optionally, before the obtaining of the target test data of the system under test, the method further comprises:
acquiring test information of the system under test, wherein the test information comprises: each interface parameter, a set of test values for each interface parameter, and a target coefficient, the target coefficient being less than or equal to the number of interface parameters;
performing a Cartesian product operation on the test value sets of the interface parameters to obtain multiple groups of initial test data, wherein any group of initial test data comprises one test value for each interface parameter;
for any group of initial test data, permuting and combining the test values of the interface parameters in the group to obtain at least one group of pairing data corresponding to the initial test data;
and screening the multiple groups of initial test data with a pairing algorithm, based on the groups of pairing data corresponding to each group of initial test data, to obtain the target test data.
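The Cartesian product generation and pairing-based screening can be sketched in Python. The greedy t-wise screen below is one common pairing strategy; the patent does not fix the exact algorithm, so this is an assumption:

```python
import itertools

def generate_initial_data(value_sets):
    """Cartesian product of the per-parameter test value sets; each group
    of initial test data holds one test value per interface parameter."""
    names = list(value_sets)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(value_sets[n] for n in names))]

def pairwise_screen(groups, t=2):
    """Greedy t-wise screening (t plays the role of the target coefficient):
    keep a group only if it covers at least one not-yet-covered t-sized
    combination of (parameter, value) pairs."""
    covered, kept = set(), []
    for g in groups:
        combos = set(itertools.combinations(sorted(g.items()), t))
        if combos - covered:       # contributes new pairing data
            kept.append(g)
            covered |= combos
    return kept

value_sets = {"a": [1, 2], "b": ["x", "y"], "c": [True, False]}
initial = generate_initial_data(value_sets)  # 2 * 2 * 2 = 8 groups
target = pairwise_screen(initial)            # fewer groups, same pair coverage
print(len(initial), len(target))
```

The screened set still covers every (parameter, value) pair that the full product covers, which is the point of pairwise screening.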
Optionally, after the screening of the multiple groups of initial test data with the pairing algorithm, based on the groups of pairing data corresponding to each group of initial test data, to obtain the target test data, the method further comprises:
testing the system under test with the target test data to obtain a current code coverage;
when the current code coverage is lower than a target code coverage, updating the target coefficient so that the updated target coefficient is larger than the target coefficient before the update;
and repeating the steps from the permutation and combination of the test values in the initial test data through the testing of the system under test with the target test data to obtain the current code coverage, until the current code coverage is greater than or equal to the target code coverage, and determining the test data of the system under test at that point as the target test data.
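The coverage feedback loop described above can be sketched as follows. The coverage measurement is simulated here, since the real loop depends on regenerating pairing data and running the system under test; every function name is a stand-in:

```python
def run_and_measure_coverage(t):
    # Stand-in for: regenerate pairing data with target coefficient t,
    # screen it, test the system under test, and read the code coverage.
    # Simulated so that a larger t yields higher coverage.
    return min(1.0, 0.4 + 0.2 * t)

def refine_target_coefficient(t, target_coverage):
    """Raise the target coefficient until coverage reaches the target."""
    coverage = run_and_measure_coverage(t)
    while coverage < target_coverage:
        t += 1  # update the target coefficient upward
        coverage = run_and_measure_coverage(t)
    return t, coverage

t, cov = refine_target_coefficient(1, 0.9)
print(t, cov)
```

With the simulated coverage curve, the loop stops at t = 3; against a real system under test the stopping point depends on the measured coverage.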
Optionally, before the feature data is input into the test result prediction model to obtain the expected test result, the method comprises:
obtaining multiple groups of test sample data, wherein the test sample data comprises: a plurality of interface parameters of an interface, test values of the interface parameters, and label data, the label data indicating whether the test sample data is a normal case or an abnormal case;
performing feature extraction on the test sample data to obtain sample feature data, wherein the sample feature data reflects whether the test sample data contains an interface parameter that requires parameter verification and whose test value fails the verification;
and training a machine learning model with the sample feature data to obtain the test result prediction model.
Optionally, the machine learning algorithm used to train the machine learning model comprises: a random forest algorithm, a support vector machine algorithm, or a convolutional neural network algorithm.
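The patent leaves the model choice open among random forest, SVM, and CNN. As a dependency-free illustration of training a predictor on labeled binary feature vectors, here is a trivial rule learner; it is a stand-in for those models, not the patent's actual training procedure:

```python
def train_predictor(samples):
    """samples: list of (feature_vector, is_normal). Learns, per feature
    position, whether a set bit (a failed verification) ever co-occurs
    with a normal label; predicts "abnormal case" when any purely-abnormal
    bit is set. A stand-in for the random forest / SVM / CNN models named
    in the text."""
    n = len(samples[0][0])
    abnormal_bit = [True] * n
    for feats, is_normal in samples:
        for i, f in enumerate(feats):
            if f == 1 and is_normal:
                abnormal_bit[i] = False
    def predict(feats):
        # True -> expected test result is "normal case"
        return not any(f == 1 and abnormal_bit[i]
                       for i, f in enumerate(feats))
    return predict

predict = train_predictor([
    ([0, 0], True),   # no failed verification -> normal case
    ([1, 0], False),  # failed verification at position 0 -> abnormal case
    ([0, 1], False),  # failed verification at position 1 -> abnormal case
])
print(predict([0, 0]), predict([1, 1]))  # True False
```

In practice one would substitute, e.g., a random forest classifier trained on the same (sample feature data, label data) pairs.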
According to a second aspect of the present application, there is provided a test data generation apparatus, the apparatus comprising:
an obtaining module, configured to obtain target test data of a system under test, wherein the target test data comprises: a plurality of interface parameters of a target interface of the system under test and a test value for each interface parameter;
an extraction module, configured to perform feature extraction on the target test data to obtain feature data, wherein the feature data reflects whether the target test data contains an interface parameter that requires parameter verification and whose test value fails the verification;
and a model prediction module, configured to input the feature data into a test result prediction model to obtain an expected test result, wherein the expected test result indicates whether the target test data is a normal case.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the test data processing method according to the first aspect.
In a fourth aspect, the present application provides an electronic device comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the test data processing method according to the first aspect when executing the program.
Compared with the prior art, the present application has the following advantages:
In the test data processing method and apparatus, electronic device, and storage medium of the present application, feature extraction is performed on the acquired target test data to obtain feature data, the feature data reflecting whether the target test data contains an interface parameter that requires parameter verification and whose test value fails the verification. The feature data is input into the test result prediction model to obtain an expected test result, which indicates whether the target test data is a normal case. In this technical solution, the expected test result corresponding to the target test data is determined by the test result prediction model; compared with predicting the test result manually, this shortens the time needed to predict the expected test result corresponding to the target test data and improves prediction efficiency. The complete test data generation process is thereby simplified, and test data generation efficiency is improved.
Drawings
FIG. 1 is a flowchart of a test data processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of another test data processing method according to an embodiment of the present application;
FIG. 3 is a system diagram of a test platform according to an embodiment of the present application;
FIG. 4 is a block diagram of a test data generation apparatus according to an embodiment of the present application;
FIG. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without inventive effort fall within the protection scope of the present application.
The test data processing method provided by the embodiments of the present application can be applied to an electronic device. The electronic device may be a personal computer, a server, a cloud server cluster, or the like. Alternatively, an embodiment of the present application provides an implementation environment for the test data processing method, which comprises a terminal and a server connected through a wired or wireless network.
The terminal may be held by a tester and may be equipped with a test platform, through which it transmits data to the server, receives operations on the test platform triggered by the tester, displays data transmitted by the server, and so on. The server provides the services of the test platform and implements the test data processing method provided by the embodiments of the present application. By way of example, the terminal may be a mobile phone, a computer, a wearable device, or the like. The server may be a single server, a service cluster comprising multiple servers, or a cloud server cluster.
Referring to fig. 1, a flowchart of a test data processing method according to an embodiment of the present application is shown. The method may be applied to the electronic device or the server in the above implementation environment; the following embodiments take its application to the server as an example. As shown in fig. 1, the test data processing method may comprise:
step 101, obtaining target test data of a tested system. The target test data includes: the method comprises the steps of obtaining a plurality of interface parameters of a target interface of a tested system and testing values of the interface parameters.
In the embodiment of the application, the target test data of the system under test may be generated by the server, or the target test data may also be uploaded to the server by the staff. The server may obtain one or more sets of target test data. Each set of target test data may include: the method comprises the steps of obtaining a plurality of interface parameters of a target interface of a system to be tested and test values of the interface parameters.
The target interface of the system under test may refer to all interfaces of the system under test. Alternatively, the target interface of the system under test may be one or more interfaces selected in advance. The target interface may be determined based on actual test requirements of the system under test, e.g., the target interface may be a first invoked interface of a plurality of consecutively invoked interfaces of the system under test. It should be noted that, when the number of target interfaces is multiple, the target test data further includes: and the interface identifier of the target interface and the corresponding relation between the interface identifier and the interface parameter.
Optionally, the interface parameter may be an interface request parameter. The target test data may further include: target interface information such as a Uniform Resource Locator (URL) address of the target interface, a name of the target interface, a system to which the target interface belongs, and a request method of the target interface.
Step 102: perform feature extraction on the target test data to obtain feature data. The feature data reflects whether the target test data contains an interface parameter that requires parameter verification and whose test value fails the verification.
In this embodiment, the check items for performing parameter verification on the interface parameters may be one or more. When there are multiple check items, that is, when multiple parameter verifications can be performed on the interface parameters, the feature data may include data corresponding to each parameter verification, reflecting for each verification whether the target test data contains an interface parameter that requires that verification and whose test value fails it.
Optionally, an interface parameter may have constraint information limiting its value, and the verification condition of a parameter verification is generated based on whether the constraint information is satisfied.
For example, the constraint information of an interface parameter may include at least one of: the parameter type, the parameter length range, whether the parameter is a mandatory parameter, whether the parameter value is allowed to be empty, whether the parameter value must be unique, and whether the parameter requires a strong validity check. The verification conditions corresponding to these constraint items may include:
  • the condition corresponding to the parameter type: whether the parameter value conforms to the parameter type;
  • the condition corresponding to the parameter length range: whether the parameter value conforms to the parameter length range;
  • the condition corresponding to the parameter value being allowed to be empty: whether the parameter value is empty;
  • the condition corresponding to the parameter value not being allowed to be empty: whether the parameter value is not empty;
  • the condition corresponding to the parameter being a mandatory parameter: whether the parameter is a mandatory parameter;
  • the condition corresponding to the parameter not being a mandatory parameter: whether the parameter is a non-mandatory parameter;
  • the condition corresponding to the parameter value being unique: whether the parameter value is a unique value;
  • the condition corresponding to the parameter value not being unique: whether the parameter value is not a unique value;
  • the condition corresponding to the parameter requiring a strong validity check: whether the parameter value passes the strong validity check;
  • the condition corresponding to the parameter not requiring a strong validity check: whether the parameter value passes the ordinary validity check.
For example, the verification conditions of the 10 parameter verifications in turn comprise: whether the parameter value conforms to the parameter type specified in the constraint information; whether the parameter value conforms to the parameter length range specified in the constraint information; whether the parameter value is empty as allowed by the constraint information; whether the parameter value is non-empty as specified in the constraint information; whether the parameter value is the unique value specified in the constraint information; whether the parameter value is a non-unique value as specified in the constraint information; whether the parameter value passes the strong validity check specified in the constraint information; whether the parameter value passes the validity check specified in the constraint information; whether the parameter is the mandatory parameter specified in the constraint information; and whether the parameter is a non-mandatory parameter as specified in the constraint information.
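Verification conditions of this kind can be encoded as predicates over a parameter value and its constraint setting. A hedged sketch covering a subset of the checks; the keys and data shapes are invented for illustration:

```python
# Each entry maps a constraint item to a predicate(value, setting).
PARAM_CHECKS = {
    "param_type":   lambda v, t: isinstance(v, t),             # t: expected type
    "length_range": lambda v, r: r[0] <= len(str(v)) <= r[1],  # r: (min, max)
    "not_null":     lambda v, _: v not in (None, ""),
    "unique":       lambda v, existing: v not in existing,     # existing values
}

def failed_checks(value, constraints):
    """Names of the verifications the value fails, given its constraints."""
    return [name for name, setting in constraints.items()
            if name in PARAM_CHECKS and not PARAM_CHECKS[name](value, setting)]

print(failed_checks("Beijing", {"param_type": str, "length_range": (1, 50)}))
print(failed_checks("", {"not_null": True}))
```

A failed check at a position would correspond to emitting the "second data" for that parameter verification.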
In an optional implementation, for any target parameter verification among the multiple parameter verifications, the process by which the server performs feature extraction on the target test data to obtain the feature data may comprise:
Step 1021A: judge whether a target interface parameter requiring the target parameter verification exists in the target test data. If not, perform step 1022A; if so, perform step 1023A.
In this embodiment, the server can judge in various ways whether a target interface parameter requiring the target parameter verification exists in the target test data; two optional implementations are described below as examples.
In a first optional implementation, since the verification condition of a parameter verification is generated based on whether the constraint information of the interface parameters is satisfied, the server may judge whether the constraint information of each interface parameter in the target test data includes the constraint information corresponding to the target parameter verification, and thereby determine whether a target interface parameter requiring the target parameter verification exists in the target test data.
When the constraint information of every interface parameter in the target test data excludes the constraint information corresponding to the target parameter verification, no interface parameter's value is limited by that constraint information, and the target test data contains no target interface parameter requiring the target parameter verification. When the constraint information of some interface parameter includes the constraint information corresponding to the target parameter verification, that parameter's value is limited by the constraint information, so a target interface parameter requiring the target parameter verification exists in the target test data, and it can subsequently be judged whether the parameter's value satisfies the constraint information.
Illustratively, the target test data includes a first interface parameter and a second interface parameter. The constraint information of the first interface parameter comprises: the parameter type, the parameter length range, and that the parameter value is not allowed to be empty. The constraint information of the second interface parameter comprises: the parameter type and the parameter length range. If the target parameter verification is whether the parameter value is non-empty, the corresponding target constraint information is that the parameter value is not allowed to be empty; the constraint information of the first interface parameter includes this target constraint information, so the server determines that the target test data contains a target interface parameter requiring the verification. If the target parameter verification is whether the parameter value is a unique value, the corresponding target constraint information is that the parameter value must be unique; neither interface parameter's constraint information includes it, so the server determines that no target interface parameter requiring the target parameter verification exists in the target test data.
In this embodiment, before step 102 the method further comprises: acquiring test information of the system under test. The test information includes the constraint information of each interface parameter in the target test data, the constraint information comprising at least one of: the type of the interface parameter, the length of the interface parameter, whether the value of the interface parameter is allowed to be empty, whether the value of the interface parameter must be unique, and whether the interface parameter requires a strong validity check. The test information may be sent to the server in advance by a developer through the terminal; for example, the developer may enter it in the test information input interface of the test platform installed on the terminal, and the terminal, after receiving it, sends it to the server.
In a second optional implementation, the server may store an interface verification correspondence between types of interface parameters and the verification identifiers of the parameter verifications they require. By querying the stored correspondence, the server obtains the target types of interface parameter corresponding to the verification identifier of the target parameter verification, and judges whether any interface parameter of a target type exists among the interface parameters in the target test data. If none exists, the server determines that no target interface parameter requiring the target parameter verification exists in the target test data; if one exists, it determines that a target interface parameter requiring the target parameter verification does exist.
The type of an interface parameter may refer to the interface type to which the parameter belongs, or may indicate the parameter type of the interface parameter, and so on. The correspondence between parameter types and verification identifiers in the interface verification correspondence may be one-to-one, one-to-many, or many-to-one. For example, the interface verification correspondence stored by the server comprises: a first verification identifier corresponding to the first, second, and third types of interface parameter; and a second verification identifier corresponding to the second, fourth, and fifth types. Assume that the target test data includes a first interface parameter and a second interface parameter, both of the first type, and that the verification identifier of the target parameter verification is the second verification identifier.
The server obtains from the interface verification correspondence the second, fourth, and fifth types corresponding to the second verification identifier indicating the target parameter verification. Seeing that both interface parameters in the target test data are of the first type, the server determines that no interface parameter of the second, fourth, or fifth type exists among the interface parameters in the target test data, and hence that no target interface parameter requiring the target parameter verification exists.
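The lookup in this example amounts to an intersection between the parameter types present in the test data and the types mapped to the verification identifier. A sketch with invented identifier and type names:

```python
# Sketch of the stored interface verification correspondence from the
# example above (identifier and type names are illustrative).
CHECK_TO_TYPES = {
    "check_id_1": {"type_1", "type_2", "type_3"},
    "check_id_2": {"type_2", "type_4", "type_5"},
}

def needs_target_check(param_types, check_id):
    """True iff some interface parameter's type requires the check."""
    return bool(set(param_types) & CHECK_TO_TYPES.get(check_id, set()))

# Both parameters in the target test data are of type_1:
print(needs_target_check({"type_1"}, "check_id_2"))  # False
print(needs_target_check({"type_1"}, "check_id_1"))  # True
```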
Step 1022A: generate the first data corresponding to the target parameter verification.
In this embodiment, when no target interface parameter requiring the target parameter verification exists in the target test data, first data corresponding to the target parameter verification is generated. The first data indicates that no interface parameter requiring the parameter verification exists in the target test data. Optionally, the first data may be a number, a letter, or the like; for example, the first data is 0.
Step 1023A, matching the target test value of the target interface parameter against the verification condition of the target parameter verification.
In the embodiment of the application, the server can check whether the target test value of the target interface parameter meets the verification condition. In one example, assume the check condition of the target parameter check is whether the parameter value satisfies the parameter length range specified in the constraint information, and the length range of the value of the target interface parameter in its constraint information is [1, 50] characters. If the character length of the target test value is within [1, 50], the target test value is successfully matched against the verification condition, and it is determined that the target test value meets the verification condition. If the character length of the target test value is not within [1, 50], the match fails, and it is determined that the target test value does not meet the verification condition.
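The length check in this example reduces to a small predicate; the bounds come from the constraint information, and this helper is illustrative rather than part of the described method:

```python
def matches_length_check(value, min_len=1, max_len=50):
    """Return True when the character length of the test value lies within
    the parameter length range [min_len, max_len] from the constraint info."""
    return min_len <= len(value) <= max_len
```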
Step 1024A, when the target test value meets the verification condition, generating first data corresponding to target parameter verification; and when the target test value does not meet the verification condition, generating second data corresponding to the target parameter verification.
In the embodiment of the application, when the target test value meets the verification condition, first data corresponding to the target parameter verification is generated. In this case the first data indicates that the target test data has interface parameters which need parameter verification and whose test values pass the verification. When the target test value does not meet the verification condition, second data corresponding to the target parameter verification is generated. The second data indicates that the target test data has interface parameters which need parameter verification and whose test values do not pass the verification.
Step 1025A, generating feature data, where the feature data includes the data corresponding to each generated parameter check.
In the embodiment of the application, the server generates the feature data from the data (the first data or the second data) generated for each parameter check. For example, assume the first data is 0 and the second data is 1. The feature data of the target test data includes: isTypeErr = 0, isLongErr = 0, isEmpty = 0, isNotEmpty = 0, isTheOne = 0, isNotTheOne = 0, isErrCheck = 0, isNotErrCheck = 0, isNull = 1, isNotNull = 0. Here, isTypeErr is the parameter check whose check condition is whether the parameter value satisfies the parameter type specified in the constraint information. isLongErr is the parameter check whose check condition is whether the parameter value satisfies the parameter length range specified in the constraint information. isEmpty is the parameter check whose check condition is whether the parameter value satisfies the may-be-empty requirement specified in the constraint information. isNotEmpty is the parameter check whose check condition is whether the parameter value satisfies the non-empty requirement specified in the constraint information. isTheOne is the parameter check whose check condition is whether the parameter value is the unique value specified in the constraint information. isNotTheOne is the parameter check whose check condition is whether the parameter value is a value other than the unique value specified in the constraint information. isErrCheck is the parameter check whose check condition is whether the parameter value is required to pass the strong validity check specified in the constraint information. isNotErrCheck is the parameter check whose check condition is whether the parameter value is exempt from the validity check specified in the constraint information. isNull is the parameter check whose check condition is whether the parameter is a required (must-fill) parameter specified in the constraint information. isNotNull is the parameter check whose check condition is whether the parameter is an optional (non-required) parameter specified in the constraint information.
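Assembling the feature data from the per-check results might look like the following sketch, where 0 is the first data and 1 the second data; the check names mirror the example above, but the helper itself is an assumption:

```python
# Names of the ten parameter checks listed above.
CHECKS = ["isTypeErr", "isLongErr", "isEmpty", "isNotEmpty", "isTheOne",
          "isNotTheOne", "isErrCheck", "isNotErrCheck", "isNull", "isNotNull"]

def make_feature_data(failed_checks):
    """Map each parameter check to its generated data: 1 (second data) when
    the check failed, 0 (first data) otherwise."""
    return {name: (1 if name in failed_checks else 0) for name in CHECKS}

# Reproduces the example feature data: only the isNull check produced second data.
features = make_feature_data({"isNull"})
```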
Step 103, inputting the feature data into the test result prediction model to obtain an expected test result, where the expected test result indicates whether the target test data is a normal case.
In the embodiment of the application, the test result prediction model is used for outputting, according to the input feature data of test data, the type of the expected test result obtained when any system under test is tested with that test data. The types of expected test results include normal and abnormal: if the type is normal, the expected test result indicates that the target test data is a normal case; if the type is abnormal, it indicates that the target test data is an abnormal case. The test result prediction model is obtained by training on multiple sets of test sample data, where each set may include a plurality of interface parameters of an interface, test values of the interface parameters, and label data. The label data indicates whether the test sample data is a normal case or an abnormal case. Optionally, each set of test sample data may include interface parameters of one or more interfaces.
Optionally, before step 103, the method further includes: and the server executes the training process of the test result prediction model. In an alternative implementation, the training process of the test result prediction model may include the following steps S1 to S3.
In step S1, multiple sets of test sample data are acquired. The test sample data includes: a plurality of interface parameters of an interface, test values of the interface parameters, and label data, where the label data indicates whether the test sample data is a normal case or an abnormal case.
In the embodiment of the present application, the test sample data may be collected test sample data of any system under test. The multiple sets of test sample data may be test sample data of the same system under test. Alternatively, the multiple sets of test sample data may be test sample data of different systems under test.
In step S2, feature extraction is performed on the test sample data to obtain sample feature data. The sample feature data reflects whether the test sample data contains interface parameters that require parameter verification and whose test values fail the verification.
For the explanation and implementation of this step, refer to the feature extraction performed on the target test data in step 102 to obtain the feature data; details are not repeated in this embodiment of the present application.
In step S3, the machine learning model is trained using the sample feature data to obtain a test result prediction model.
Optionally, the machine learning algorithm used to train the machine learning model includes: a random forest algorithm, a support vector machine algorithm, or a convolutional neural network algorithm. For example, the process by which the server trains the machine learning model with the sample feature data to obtain the test result prediction model may include: inputting the plurality of interface parameters of the interface in each of the multiple sets of sample feature data into the machine learning model to obtain an expected test result corresponding to each set of sample feature data; inputting the expected test result for each set of sample feature data and the test result indicated by the label data corresponding to that set into a target loss function to determine a loss value, where the target loss function is the loss function of the test result prediction model. When the loss value does not meet the preset requirement, the prediction accuracy of the current machine learning model does not meet the preset accuracy requirement, and the parameters of the machine learning model are adjusted by the optimizer to obtain an adjusted machine learning model. The process of inputting the interface parameters in the test sample data into the adjusted machine learning model and determining the loss value is repeated until the loss value meets the preset requirement. When the loss value meets the preset requirement, the prediction accuracy of the current machine learning model has reached the preset accuracy requirement, and the current machine learning model is taken as the test result prediction model.
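The predict, compute-loss, adjust-parameters loop described above can be illustrated with a minimal pure-Python logistic-regression sketch; the application does not fix a concrete model or optimizer, so the learning rate, loss function, and stopping rule here are assumptions:

```python
import math

def train_predictor(samples, labels, lr=0.5, epochs=200, tol=0.05):
    """Sketch of the training loop: predict, compute the loss value, adjust
    the model parameters, repeat until the loss meets a preset requirement."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        total_loss = 0.0
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))            # predicted probability
            total_loss += -(y * math.log(p + 1e-9)
                            + (1 - y) * math.log(1 - p + 1e-9))
            g = p - y                                 # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
        if total_loss / len(samples) < tol:           # preset requirement met
            break
    return w, b

def predict(w, b, x):
    """1 = abnormal case expected, 0 = normal case expected."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Trained on feature vectors in which any failing parameter check labels the sample abnormal, this sketch learns that a single second-data flag is enough to predict an abnormal case.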
In this embodiment of the present application, in another optional implementation manner, the training process of the test result prediction model may include:
Acquiring multiple sets of preset feature data, where each set of preset feature data includes: data corresponding to each of the multiple parameter checks, and second label data corresponding to that set of preset feature data; and training the machine learning model with the multiple sets of preset feature data to obtain the test result prediction model. The data corresponding to each parameter check is the first data or the second data. The second label data indicates whether the test data corresponding to the preset feature data is a normal case or an abnormal case. Optionally, the machine learning algorithm used to train the machine learning model includes: a random forest algorithm, a support vector machine algorithm, or a convolutional neural network algorithm.
For example, the process by which the server trains the machine learning model with the multiple sets of preset feature data to obtain the test result prediction model may include: inputting the data corresponding to the parameter checks in each set of preset feature data into the machine learning model to obtain an expected test result for that set; inputting the expected test result for each set and the preset test result indicated by its second label data into a second target loss function to determine a loss value. When the loss value does not meet the preset requirement, the prediction accuracy of the current machine learning model does not meet the preset accuracy requirement, and the parameters of the machine learning model are adjusted by the optimizer to obtain an adjusted machine learning model. The process of inputting the data corresponding to each parameter check in the preset feature data into the adjusted machine learning model and determining the loss value is repeated until the loss value meets the preset requirement. When the loss value meets the preset requirement, the prediction accuracy of the current machine learning model has reached the preset accuracy requirement, and the current machine learning model is taken as the test result prediction model.
In the embodiment of the application, in the latter implementation of training the test result prediction model, the multiple sets of preset feature data are constructed directly as the training data, so the step of extracting features from test sample data to obtain sample feature data is not needed, which simplifies the training process compared with the former implementation. Moreover, because the data corresponding to each parameter check has a simpler structure than the test values of the interface parameters, constructing and storing the preset feature data is simpler than constructing test sample data, and the training data is generated more efficiently.
In summary, in the test data processing method provided in the embodiment of the present application, feature extraction is performed on the obtained target test data to obtain feature data, where the feature data reflects whether the target test data contains interface parameters that require parameter verification and whose test values fail the verification. The feature data is input into the test result prediction model to obtain an expected test result indicating whether the target test data is a normal case. Because the expected test result corresponding to the target test data is determined by the test result prediction model, the prediction time is shortened and the prediction efficiency is improved compared with manually predicting the test result of the test data.
Please refer to fig. 2, which shows a flowchart of another test data processing method provided in the embodiment of the present application. The test data processing method may be applied to the electronic device or the server in the above implementation environment, and the following embodiment describes an example in which the method is applied to the server. As shown in fig. 2, the test data processing method may include:
Step 201, obtaining test information of the system under test, where the test information includes: interface parameters of an interface, a test value set for each interface parameter, and a target coefficient, the target coefficient being less than or equal to the number of interface parameters.
In the embodiment of the application, the test information may be entered on the terminal by a developer and sent to the server by the terminal. Optionally, the test value set of each interface parameter may include a normal test value set used for forming normal cases and an abnormal test value set used for forming abnormal cases, so that the final test data generated from the test value sets can cover both normal and abnormal cases, better meeting the test requirements of the system under test.
Step 202, performing cartesian product operation on the test value sets of each interface parameter to obtain multiple sets of initial test data, where any set of initial test data includes the test values of each interface parameter.
In the embodiment of the application, the server performs a Cartesian product operation on the test value sets of the interface parameters to obtain a Cartesian product, and constructs one set of initial test data for each element of the Cartesian product. Illustratively, the test information includes a first interface parameter, a second interface parameter, and a third interface parameter of the interface. The test value set A of the first interface parameter is {a1, a2}. The test value set B of the second interface parameter is {b1, b2}. The test value set C of the third interface parameter is {c1, c2}.
Then the Cartesian product A × B × C of the test value sets of the first, second, and third interface parameters is {(a1, b1, c1), (a1, b1, c2), (a1, b2, c1), (a1, b2, c2), (a2, b1, c1), (a2, b1, c2), (a2, b2, c1), (a2, b2, c2)}. Eight sets of initial test data are constructed, one for each of the eight elements of the Cartesian product A × B × C.
Referring to table 1, table 1 shows the values of the first, second, and third interface parameters in the eight sets of initial test data (from the first set to the eighth set of initial test data). As shown in table 1, the first set of initial test data includes: first interface parameter a1; second interface parameter b1; third interface parameter c1. The second set of initial test data includes: first interface parameter a1; second interface parameter b1; third interface parameter c2.
TABLE 1
First interface parameter Second interface parameter Third interface parameter
First set of initial test data a1 b1 c1
Second set of initial test data a1 b1 c2
Third set of initial test data a1 b2 c1
Fourth set of initial test data a1 b2 c2
Fifth set of initial test data a2 b1 c1
Sixth set of initial test data a2 b1 c2
Seventh set of initial test data a2 b2 c1
Eighth set of initial test data a2 b2 c2
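The enumeration in table 1 is exactly the Cartesian product of the three test value sets, which can be reproduced with Python's standard library:

```python
from itertools import product

# Test value sets of the three interface parameters from the example above.
A = ["a1", "a2"]
B = ["b1", "b2"]
C = ["c1", "c2"]

# A x B x C: eight sets of initial test data, in the same order as table 1.
initial_test_data = list(product(A, B, C))
```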
Step 203, for any set of initial test data, combining the test values in the initial test data in groups of the target coefficient to obtain at least one set of paired data corresponding to the initial test data.
In the embodiment of the present application, the value of the target coefficient may lie in [2, i], where i is the number of interface parameters. In one example, assuming the target coefficient is 2, for any set of initial test data the server combines the test values in the set two by two to obtain multiple sets of paired data. For example, take the set of initial test data corresponding to the first element (a1, b1, c1) in the example shown in step 202. Combining its test values two by two yields three sets of paired data: first set of paired data: a1, b1; second set of paired data: a1, c1; third set of paired data: b1, c1.
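For a target coefficient of 2, the pairing step is a plain 2-combination of the test values; tagging each value with its parameter position (as one way to realize the "same arrangement position" comparison used later) is an implementation assumption:

```python
from itertools import combinations

def paired_data(test_values, k=2):
    """Combine the test values of one set of initial test data in groups of
    the target coefficient k, tagging each value with its parameter position."""
    return list(combinations(enumerate(test_values), k))

# The three sets of paired data for (a1, b1, c1) from the example above.
pairs = paired_data(("a1", "b1", "c1"))
```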
Step 204, screening the multiple sets of initial test data with a pairing algorithm, based on the sets of paired data corresponding to each set of initial test data, to obtain the target test data.
In the embodiment of the application, the pairing (pairwise) algorithm is an algorithm based on mathematical statistics that optimizes the traditional orthogonal analysis method. The process by which the server screens the multiple sets of initial test data with the pairing algorithm, based on the sets of paired data corresponding to the initial test data, to obtain the target test data includes: executing at least one screening pass until no repeated paired data remains among the sets of paired data corresponding to the sets of initial test data, where the screening pass includes:
For any set of target paired data among the at least one set of paired data of the target initial test data in the multiple sets of initial test data, comparing the target paired data with the corresponding paired data in the other sets of initial test data, where the other sets of initial test data are the initial test data other than the target initial test data, and two sets of paired data correspond when, before the test values were combined, those test values occupied the same arrangement positions in their respective initial test data.
When every set of paired data of the target initial test data has an identical set of paired data elsewhere, the target initial test data is deleted from the multiple sets of initial test data; when the number of sets in the remaining initial test data is greater than 0, any one set of the remaining initial test data is taken as the next target initial test data. Alternatively, when the number of sets in the remaining initial test data equals 0, it is determined that no repeated paired data remains among the sets of paired data corresponding to the initial test data. Here, the remaining initial test data refers to the initial test data, among the multiple sets, that is not the target initial test data.
For example, the server takes the eighth set of initial test data as the target initial test data, so the corresponding remaining initial test data are the first through seventh sets of initial test data. The three sets of paired data of the target initial test data are: first set of paired data: a2, b2; second set of paired data: a2, c2; third set of paired data: b2, c2. The first set of paired data a2, b2 is compared with the first set of paired data in each of the first through seventh sets of initial test data, and the first set of paired data a2, b2 of the seventh set is determined to be the same as the first set of paired data a2, b2 of the target initial test data.
The second set of paired data a2, c2 is compared with the second set of paired data in each of the first through seventh sets of initial test data, and the second set of paired data a2, c2 of the sixth set is determined to be identical to the second set of paired data a2, c2 of the target initial test data.
The third set of paired data b2, c2 is compared with the third set of paired data in each of the first through seventh sets of initial test data, and the third set of paired data b2, c2 of the fourth set is determined to be identical to the third set of paired data b2, c2 of the target initial test data.
Since every one of the three sets of paired data of the target initial test data has an identical set elsewhere, the eighth set of initial test data is deleted, and the number of sets in the remaining initial test data is 7. As 7 is greater than 0, the seventh set among the remaining initial test data (the first through seventh sets) is taken as the target initial test data, and the corresponding remaining initial test data become the first through sixth sets. If every set of paired data of the seventh set of initial test data has a duplicate in the remaining initial test data, the seventh set is deleted; if any set of its paired data has no duplicate in the remaining initial test data, the seventh set is kept as target test data. Each set of the remaining initial test data is taken in turn as the target initial test data, and the step of judging whether all of its sets of paired data have duplicates in the remaining initial test data is repeated until the number of sets in the remaining initial test data equals 0, completing the screening of the multiple sets of initial test data. The target test data obtained are: the seventh, sixth, fifth, third, second, and first sets of initial test data.
Step 205, testing the system under test with the target test data to obtain the current code coverage.
In the embodiment of the application, code coverage is a measure used in software testing to describe the proportion of the source code of the system under test that is exercised by the test, relative to all of its source code.
Optionally, the step in which the server tests the system under test with the target test data to obtain the current code coverage may include: the server inserts a flag bit for each line of source code of the system under test. While testing the system under test with the target test data, the server monitors the state of each line's flag bit in real time: when a line of source code is executed, its flag bit is in the executed state; when a line is not executed, its flag bit is in the unexecuted state. After detecting that the test is finished, the server counts the states of the flag bits of all lines and determines the current code coverage from the statistics.
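A toy version of the flag-bit idea follows; real source-code instrumentation is assumed infrastructure, so the marking here is done by hand for illustration:

```python
class CoverageTracker:
    """One flag bit per line of source code of the system under test."""
    def __init__(self, num_lines):
        self.executed = [False] * num_lines

    def mark(self, line):
        self.executed[line] = True      # flag bit enters the executed state

    def coverage(self):
        """Current code coverage: executed lines over all instrumented lines."""
        return sum(self.executed) / len(self.executed)

# Four instrumented lines; line 3 belongs to a branch not exercised below.
tracker = CoverageTracker(num_lines=4)

def system_under_test(x):
    tracker.mark(0)
    if x > 0:
        tracker.mark(1)
        return "positive"
    tracker.mark(2)
    return "non-positive"

system_under_test(1)                    # exercises lines 0 and 1 only
```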
Of course, the server may also obtain the current code coverage in other ways after testing the system under test with the target test data. For example, the server may run code coverage statistics software during the test and, after the test is finished, obtain the current code coverage counted by that software.
Step 206, judging whether the current code coverage is less than the target code coverage. If yes, performing step 207; if not, performing step 208.
In the embodiment of the application, the target code coverage may be sent to the server by the developer through the terminal. The target code coverage is the minimum code coverage required of the system under test by the test requirements. The server compares the current code coverage with the target code coverage: when the current code coverage is less than the target code coverage, the test with the target test data does not meet the minimum code coverage requirement of the system under test; when the current code coverage is greater than or equal to the target code coverage, the requirement is met.
Step 207, updating the target coefficient so that the updated target coefficient is larger than the target coefficient before updating, and returning to step 203.
In this embodiment of the application, the larger the value of the target coefficient, the larger the amount of target test data generated in steps 201 to 204, and accordingly the higher the code coverage of the system under test when that target test data is used for testing. In practice, when the value of the target coefficient equals the number of interface parameters, steps 201 to 204 generate the full set of target test data, and the code coverage of the system under test is 1 when this full set is used for testing.
Based on this, when the current code coverage rate is smaller than the target code coverage rate, which indicates that the lowest code coverage rate requirement of the tested system is not met when the target test data is used for testing, the target coefficient can be updated, so that the updated target coefficient is larger than the target coefficient before updating. For example, when the current code coverage rate is smaller than the target code coverage rate, the server adds 1 to the target coefficient to obtain an updated target coefficient.
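Steps 205 through 213 form a feedback loop that can be sketched as follows; generate_test_data and measure_coverage stand in for steps 201 to 205 and are placeholders, not APIs from this application:

```python
def tune_target_coefficient(num_params, target_coverage,
                            generate_test_data, measure_coverage):
    """Raise the target coefficient until the current code coverage reaches
    the target code coverage (or the coefficient equals the parameter count)."""
    k = 2                                    # initial target coefficient
    while True:
        data = generate_test_data(k)         # steps 201-204
        if measure_coverage(data) >= target_coverage or k >= num_params:
            return k, data                   # step 208: final target test data
        k += 1                               # step 207: updated coefficient is larger
```

For instance, with stub functions whose measured coverage is 0.6 at k = 2 and 0.95 at k = 3, the loop stops at k = 3 for a target coverage of 0.9.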
And step 208, determining the test data of the tested system as target test data.
In the embodiment of the application, when the current code coverage is greater than or equal to the target code coverage, indicating that the test with the target test data meets the minimum code coverage requirement of the system under test, the target test data is taken as the final test data of the system under test. In this way, while the target coverage is guaranteed, the amount of target test data generated by the pairing algorithm is small, which improves the generation quality of the target test data. Compared with manually writing test data in the related art, the generation efficiency is also improved. Moreover, when the target test data is used for tests such as interface smoke tests and interface regression tests on the system under test, the relatively small data volume shortens the test time and improves test efficiency.
And step 209, acquiring target test data of the tested system.
Step 210, performing feature extraction on the target test data to obtain feature data, where the feature data reflects whether the target test data contains interface parameters that require parameter verification and whose test values fail the verification.
For the explanation and implementation of step 210, refer to those of step 102, which is not limited in this application.
Step 211, inputting the feature data into the test result prediction model to obtain an expected test result, where the expected test result indicates whether the target test data is a normal case.
For the explanation and implementation of step 211, refer to those of step 103, which is not limited in this application.
In the embodiment of the application, feature extraction is performed on the obtained target test data to obtain feature data, where the feature data reflects whether the target test data contains interface parameters that require parameter verification and whose test values fail the verification. The feature data is input into the test result prediction model to obtain an expected test result indicating whether the target test data is a normal case. Because the expected test result corresponding to the target test data is determined by the test result prediction model, the prediction time of the expected test result is shortened and the prediction efficiency is improved compared with manually predicting the test result of the test data. In addition, the overall generation process of the test data is simplified, and the generation efficiency of the test data is improved.
Referring to fig. 3, a system architecture diagram of a test platform according to an embodiment of the present application is shown. In the embodiment of the present application, the test data processing method is further described by taking its application to the test platform shown in fig. 3 as an example. As shown in fig. 3, the test platform 300 includes: an access layer 301, a site layer 302, a data processing layer 303, a service layer 304, a data storage layer 305, and a deployment environment 306.
The access layer 301 includes: a platform page 3011, a coverage collection agent module 3012, and a traffic collection agent module 3013. The platform pages 3011 include an interface information management page, a user management page, a data chart display page, and a data export page. A developer can enter data such as the test information and the target coverage on the interface information management page displayed by the terminal, so that the terminal receives the data and sends it to the server, allowing the server to perform the test data processing method based on that data. The coverage collection agent module 3012 is used to determine code coverage during testing; for example, it obtains the current code coverage when the system under test is tested with the target test data.
The site layer 302 includes: a data access module 3021 and an access data storage module 3022. The data access module 3021 may provide data statistics, request routing, external interfaces, and collection and transmission of proxy data to the access data storage module. The access data storage module 3022 may include a MySQL database for storing the proxy data, where the proxy data may include data collected by the coverage collection agent module 3012 and the traffic collection agent module 3013.
The data processing layer 303 includes: a data processing module 3031 and a training sample data storage module 3032. The data processing module 3031 may provide data collection, data processing, data feature extraction, and sample construction functions. The training sample data storage module 3032 may include a MySQL database for storing sample data.
The service layer 304 includes: an agent data collection and statistics module 3041, an execution module 3042, a use case input generation module 3043, a coverage feedback control module 3044, a data post-processing module 3045, a prediction model module 3046, and a data storage control module 3047. The agent data collection and statistics module 3041 is configured to collect the coverage rate reported by the coverage collection agent module 3012 and the traffic reported by the traffic collection agent module 3013. The use case input generation module 3043 is configured to generate the target test data. The execution module 3042 is configured to provide the target test data to the outside. The coverage feedback control module 3044 is configured to transmit the code coverage to the use case input generation module 3043. The data post-processing module 3045 is configured to perform feature extraction on the target test data to obtain the feature data. The prediction model module 3046 is configured to output a prediction evaluation result according to the input feature data. The data storage control module 3047 is configured to transmit the target test data and the corresponding prediction evaluation result to a database of the data storage layer, so that the database stores them.
The data storage layer 305 includes: a first example database 3051, a second example database 3052, and an interface document database 3053. The first example database 3051 may be a MySQL database for storing the target test data and/or the corresponding prediction evaluation results in MySQL format. The second example database 3052 may include a tabular (Excel) database, a Structured Query Language (SQL) database, and other databases. The tabular database stores the target test data and/or the corresponding prediction evaluation results in tabular format; the SQL database stores them in SQL format; the other databases store them in formats other than the MySQL, tabular, and SQL formats. The interface document database 3053 may be a MySQL database for storing interface documents, i.e., the test information, in MySQL format.
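As a rough illustration of the data storage layer, the sketch below writes target test data and their predicted results both in tabular form and into a SQL store. CSV and an in-memory SQLite database are used here as stand-ins for the Excel and MySQL stores named above; the table name and record fields are hypothetical.

```python
import csv
import io
import sqlite3

# hypothetical target test data with predicted evaluation results
records = [
    {"interface": "/user/login", "params": '{"name": "a", "pwd": ""}',  "expected": "abnormal"},
    {"interface": "/user/login", "params": '{"name": "a", "pwd": "x1"}', "expected": "normal"},
]

# tabular export (CSV here, standing in for the Excel format of database 3052)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["interface", "params", "expected"])
writer.writeheader()
writer.writerows(records)

# SQL export (in-memory SQLite, standing in for the MySQL store of database 3051)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE use_case (interface TEXT, params TEXT, expected TEXT)")
conn.executemany("INSERT INTO use_case VALUES (:interface, :params, :expected)", records)
rows = conn.execute("SELECT COUNT(*) FROM use_case").fetchone()[0]
```

Each store receives the same use cases, matching the layer's design of keeping the data in several formats at once.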
The deployment environment 306 may be the Community Enterprise Operating System (CentOS), a Linux distribution. CentOS can host the Spring Boot framework, the Flask framework, and the model runtime environment. Spring Boot is an open source framework based on Spring. Flask is a lightweight web application framework written in Python.
The platform page 3011 may obtain the test information of the system under test. The test information is transmitted through the data access module 3021 and the data processing module 3031 to the use case input generation module 3043 and the interface document database 3053, respectively. The interface document database 3053 stores the received test information of the system under test.
After obtaining the test information of the system under test, the use case input generation module 3043 may execute the processes of steps 202 to 204 to obtain the target test data. The use case input generation module 3043 transmits the target test data to the execution module 3042, and the execution module 3042 transmits the target test data to the system under test so as to test the system under test with the target test data.
The coverage collection agent module 3012 is configured to determine the current code coverage of the system under test after the system under test has been tested with the target test data, and to transmit the current code coverage through the data access module 3021 and the data processing module 3031 to the agent data collection and statistics module 3041, which forwards the current code coverage to the use case input generation module 3043 through the coverage feedback control module 3044.
The use case input generation module 3043 executes the aforementioned steps 206 to 208 to determine whether the current code coverage rate is less than the target code coverage rate. When the current code coverage rate is less than the target code coverage rate, the target coefficient is updated so that the updated target coefficient is larger than the target coefficient before updating, and steps 203 to 204 are executed again to obtain new target test data. The use case input generation module 3043 repeatedly transmits the target test data to the execution module 3042 and executes the processes of steps 206 to 208, until the current code coverage rate is greater than or equal to the target code coverage rate.
When the current code coverage rate is greater than or equal to the target code coverage rate, the use case input generation module 3043 determines the current test data as the final target test data of the system under test and transmits it to the data post-processing module 3045. The data post-processing module 3045 performs feature extraction on the target test data to obtain the feature data and transmits the feature data to the prediction model module 3046. The prediction model module 3046 inputs the feature data into the test result prediction model to obtain the expected test result. The data storage control module 3047 sends the target test data of the system under test and/or the corresponding expected test result to the first example database 3051 and the second example database 3052.
In summary, in the test data processing method provided by the embodiment of the present invention, feature data is obtained by performing feature extraction on the acquired target test data, where the feature data reflects whether the target test data contains an interface parameter that requires parameter verification but whose test value fails that verification. The feature data is input into the test result prediction model to obtain an expected test result, which indicates whether the target test data is a normal use case. In this technical scheme, the expected test result corresponding to the target test data is determined by the test result prediction model; compared with manually predicting the test result of the test data, this shortens the time needed to predict the expected test result corresponding to the target test data and improves the prediction efficiency of the test data. It also simplifies the overall generation process of the test data and improves the generation efficiency of the test data.
Referring to fig. 4, a block diagram of a test data processing apparatus according to an embodiment of the invention is shown. As shown in fig. 4, the test data processing apparatus 400 includes:
an obtaining module 401, configured to obtain target test data of a system under test, where the target test data includes: a plurality of interface parameters of a target interface of the system under test and a test value of each interface parameter.
The extraction module 402 is configured to perform feature extraction on the target test data to obtain feature data, where the feature data is used to reflect whether an interface parameter that needs to be subjected to parameter verification and whose test value does not pass the parameter verification exists in the target test data.
And the model prediction module 403 is configured to input the feature data into the test result prediction model to obtain an expected test result, where the expected test result is used to indicate whether the target test data is a normal case.
Optionally, the extracting module 402 is further configured to:
aiming at any one target parameter check in the multiple parameter checks, when determining that the target test data does not have a target interface parameter needing the target parameter check, generating first data corresponding to the target parameter check;
when determining that target interface parameters needing target parameter verification exist in the target test data, matching the target test values of the target interface parameters with the verification conditions of the target parameter verification;
when the target test value meets the verification condition, generating first data corresponding to target parameter verification;
when the target test value does not meet the check condition, generating second data corresponding to the target parameter check;
and generating characteristic data, wherein the characteristic data comprises data corresponding to each generated parameter check.
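The extraction rule above can be sketched as a small Python routine that turns target test data into a feature vector, emitting 1 ("first data") when a parameter check is inapplicable or passes and 0 ("second data") when it fails. The check names, predicates, and parameters below are illustrative assumptions, not from the patent.

```python
# hypothetical parameter checks: parameter -> list of (check name, predicate)
PARAM_CHECKS = {
    "age":  [("type_is_int", lambda v: isinstance(v, int)),
             ("non_negative", lambda v: isinstance(v, int) and v >= 0)],
    "name": [("not_empty", lambda v: isinstance(v, str) and len(v) > 0)],
}

def extract_features(test_data):
    """test_data: dict mapping interface parameter -> test value."""
    features = []
    for param, checks in PARAM_CHECKS.items():
        for _name, predicate in checks:
            if param not in test_data:          # no parameter needs this check
                features.append(1)              # "first data"
            elif predicate(test_data[param]):   # value satisfies the condition
                features.append(1)              # "first data"
            else:                               # value fails the check
                features.append(0)              # "second data"
    return features

print(extract_features({"age": -3, "name": "bob"}))  # [1, 0, 1]
```

The resulting vector has one entry per known parameter check, which is the form the prediction model consumes.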
Optionally, the obtaining module 401 is further configured to obtain test information of the system under test, where the test information includes constraint information of each interface parameter, and the constraint information includes at least one of the following information: the type of the parameter, the length range of the parameter, whether the parameter is a mandatory parameter, whether the value of the parameter is allowed to be empty, whether the value of the parameter is unique, and whether the parameter needs strong validity verification; a check condition for parameter verification is generated based on whether the constraint information is satisfied;
the extracting module 402 is further configured to: and when the constraint information of each interface parameter does not include the constraint information corresponding to the target parameter verification, determining that no target interface parameter needing the target parameter verification exists in the target test data.
Optionally, the obtaining module 401 is further configured to obtain test information of the system under test, where the test information includes: each interface parameter, the test value set of each interface parameter and a target coefficient, wherein the target coefficient is less than or equal to the number of the interface parameters;
the device still includes: and the operation module is used for carrying out Cartesian product operation on the test value set of each interface parameter to obtain a plurality of groups of initial test data, wherein any group of initial test data comprises the test value of each interface parameter.
And the combination module is used for, for any group of initial test data, permuting and combining the test values in the initial test data in groups of the target coefficient, to obtain at least one group of pairing data corresponding to the initial test data.
And the screening module is used for screening the multiple groups of initial test data by adopting a pairing algorithm, based on the multiple groups of pairing data corresponding to each group of initial test data, to obtain the target test data.
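Under the assumption that the pairing algorithm is a pairwise-style covering selection (the patent does not spell out its internals), the Cartesian product and screening steps might be sketched as follows; the parameter names and value sets are hypothetical.

```python
from itertools import combinations, product

# hypothetical test value sets for three interface parameters
value_sets = {"os": ["ios", "android"], "lang": ["zh", "en"], "vip": [0, 1]}
params = list(value_sets)

# step 1: Cartesian product of the value sets -> groups of initial test data
initial = [dict(zip(params, vals)) for vals in product(*value_sets.values())]

# step 2: pairing data for one group with target coefficient k = 2:
# every k-way combination of its (parameter, value) assignments
def pairs(case, k=2):
    return set(combinations(sorted(case.items()), k))

# step 3: greedy pairing screen -- repeatedly keep the group that covers the
# most still-uncovered pairs until every pair appears at least once
def pairwise_screen(cases, k=2):
    needed = set().union(*(pairs(c, k) for c in cases))
    selected, covered = [], set()
    while covered != needed:
        best = max(cases, key=lambda c: len(pairs(c, k) - covered))
        selected.append(best)
        covered |= pairs(best, k)
    return selected

target = pairwise_screen(initial)
# all 2-way parameter-value interactions are covered by a subset of the cases
```

Raising the target coefficient k tightens the coverage criterion (3-way, 4-way interactions), which is consistent with the update step described below.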
Optionally, the apparatus further comprises:
and the test module is used for testing the system to be tested by adopting the target test data to obtain the coverage rate of the current code.
And the updating module is used for updating the target coefficient when the current code coverage rate is smaller than the target code coverage rate, so that the updated target coefficient is larger than the target coefficient before updating.
And the execution module is used for repeatedly executing the steps from permuting and combining the test values in the initial test data to testing the system under test with the target test data to obtain the current code coverage rate, until the current code coverage rate is greater than or equal to the target code coverage rate, and then determining the test data of the system under test as the target test data.
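The coverage feedback loop described above can be sketched as follows. Both `generate` and `run_tests` are stand-ins for the use case input generation module and the execution/coverage path; the coverage model inside them is invented purely for illustration.

```python
def generate(k):
    # stand-in: a larger target coefficient k -> more parameter
    # interactions to cover -> more generated cases
    return [f"case_{i}" for i in range(k * 3)]

def run_tests(cases):
    # stand-in for executing the cases and asking the coverage agent;
    # here coverage simply grows with the case count, capped at 1.0
    return min(1.0, 0.2 * len(cases) / 3)

def generate_until_covered(target_coverage, k=2, max_params=6):
    cases = generate(k)
    while run_tests(cases) < target_coverage and k < max_params:
        k += 1                  # enlarge the target coefficient
        cases = generate(k)     # redo the permutation/combination step
    return k, cases

k, cases = generate_until_covered(0.8)
```

With the toy coverage model above, the loop stops at k = 4, once the measured coverage reaches the 0.8 goal.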
Optionally, the obtaining module 401 is further configured to obtain multiple groups of test sample data, where the test sample data includes: a plurality of interface parameters of an interface, a test value of each interface parameter, and label data, where the label data is used to indicate that the test sample data is a normal case or an abnormal case.
The extracting module 402 is further configured to perform feature extraction on the test sample data to obtain sample feature data, where the sample feature data is used to reflect whether there is an interface parameter that needs to be subjected to parameter verification and the test value does not pass the parameter verification in the test sample data.
The apparatus further includes: a training module, used for training a machine learning model with the sample feature data to obtain the test result prediction model.
Optionally, the machine learning algorithm for training the machine learning model includes: a random forest algorithm, a support vector machine algorithm, or a convolutional neural network algorithm.
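A minimal sketch of training the test result prediction model with one of the named algorithms (a random forest, via scikit-learn here) on sample feature data. The feature vectors and labels are synthetic: a case is labeled normal (1) only if every parameter check passed.

```python
from sklearn.ensemble import RandomForestClassifier

# synthetic sample feature data: 1 = check passed/not applicable, 0 = failed
X = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1], [1, 1, 1]]
# label data: 1 = normal case, 0 = abnormal case
y = [1, 0, 0, 0, 0, 1]

# train the test result prediction model
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# expected test results for new target test data's feature vectors
print(model.predict([[1, 1, 1], [0, 1, 0]]))
```

A support vector machine or a small neural network could replace the forest here with the same fit/predict shape.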
In summary, the test data processing apparatus provided by the embodiment of the present invention obtains feature data by performing feature extraction on the acquired target test data, where the feature data reflects whether the target test data contains an interface parameter that requires parameter verification but whose test value fails that verification. The feature data is input into the test result prediction model to obtain an expected test result, which indicates whether the target test data is a normal use case. In this technical scheme, the expected test result corresponding to the target test data is determined by the test result prediction model; compared with manually predicting the test result of the test data, this shortens the time needed to predict the expected test result and improves the prediction efficiency of the test data. It also simplifies the overall generation process of the test data and improves the generation efficiency of the test data.
The test data processing apparatus provided by the embodiment of the application has the functional modules corresponding to the test data processing method, can execute the test data processing method provided by any embodiment of the application, and achieves the same beneficial effects.
In another embodiment provided by the present application, there is also provided an electronic device, which may include: a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the processes of the above test data processing method embodiments and can achieve the same technical effects, which are not repeated here to avoid repetition.
For example, as shown in fig. 5, the electronic device may specifically include: a processor 501, a storage device 502, a display screen 503 with touch functionality, an input device 504, an output device 505, and a communication device 506. The number of the processors 501 in the electronic device may be one or more, and one processor 501 is taken as an example in fig. 5. The processor 501, the storage means 502, the display 503, the input means 504, the output means 505 and the communication means 506 of the electronic device may be connected by a bus or other means.
In yet another embodiment provided by the present application, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute the test data processing method described in any of the above embodiments.
In a further embodiment provided by the present application, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the test data processing method of any of the above embodiments.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that many more modifications and variations can be made without departing from the spirit of the invention and the scope of the appended claims.
Those of ordinary skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of test data processing, the method comprising:
acquiring target test data of a tested system, wherein the target test data comprises: a plurality of interface parameters of a target interface of the system to be tested and a test value of each interface parameter;
extracting features of the target test data to obtain feature data, wherein the feature data are used for reflecting whether the target test data have interface parameters which need parameter verification and the test values do not pass the parameter verification;
and inputting the characteristic data into a test result prediction model to obtain an expected test result, wherein the expected test result is used for indicating whether the target test data is a normal case or not.
2. The method of claim 1, wherein the performing feature extraction on the target test data to obtain feature data comprises:
aiming at any one target parameter check in multiple parameter checks, when determining that no target interface parameter needing the target parameter check exists in the target test data, generating first data corresponding to the target parameter check;
when determining that the target interface parameter needing the target parameter verification exists in the target test data, matching the target test value of the target interface parameter with the verification condition of the target parameter verification;
when the target test value meets the verification condition, generating the first data corresponding to the target parameter verification;
when the target test value does not meet the verification condition, generating second data corresponding to the target parameter verification;
generating characteristic data, wherein the characteristic data comprises data corresponding to each generated parameter check.
3. The method of claim 2, wherein before said extracting features from said target test data to obtain feature data, said method further comprises:
acquiring test information of the tested system, wherein the test information comprises constraint information of each interface parameter, and the constraint information comprises at least one of the following information: the type of the parameter, the length range of the parameter, whether the parameter is a mandatory parameter, whether the value of the parameter is allowed to be empty, whether the value of the parameter is unique, and whether the parameter needs strong validity verification; and the checking condition of the parameter checking is generated based on whether the constraint information is met;
the determining that the target interface parameter which needs to be checked does not exist in the target test data includes:
and when the constraint information of each interface parameter does not include the constraint information corresponding to the target parameter verification, determining that no target interface parameter needing the target parameter verification exists in the target test data.
4. The method of claim 1, wherein prior to said obtaining target test data for a system under test, the method further comprises:
acquiring test information of the tested system, wherein the test information comprises: each interface parameter, a test value set of each interface parameter and a target coefficient, wherein the target coefficient is less than or equal to the number of the interface parameters;
carrying out Cartesian product operation on the test value set of each interface parameter to obtain a plurality of groups of initial test data, wherein any group of initial test data comprises the test value of each interface parameter;
aiming at any group of initial test data, performing permutation and combination on the test values in the initial test data in groups of the target coefficient to obtain at least one group of pairing data corresponding to the initial test data;
and screening the multiple groups of initial test data by adopting a pairing algorithm based on multiple groups of pairing data corresponding to the initial test data to obtain the target test data.
5. The method of claim 4, wherein after the screening of the plurality of groups of initial test data by using the pairing algorithm based on the plurality of groups of pairing data corresponding to each group of initial test data to obtain the target test data, the method further comprises:
testing the system to be tested by adopting the target test data to obtain the current code coverage rate;
when the current code coverage rate is smaller than the target code coverage rate, updating the target coefficient to enable the updated target coefficient to be larger than the target coefficient before updating;
and repeatedly executing the steps from the permutation and combination of the test values in the initial test data to the testing of the tested system with the target test data to obtain the current code coverage rate, until the current code coverage rate is greater than or equal to the target code coverage rate, and determining the test data of the tested system as the target test data.
6. The method of claim 1, wherein prior to said inputting said feature data into a test result prediction model to obtain an expected test result, said method comprises:
obtaining a plurality of groups of test sample data, wherein the test sample data comprises: a plurality of interface parameters of an interface, a test value of each interface parameter, and label data, wherein the label data is used for indicating that the test sample data is a normal case or an abnormal case;
extracting the characteristics of the test sample data to obtain sample characteristic data, wherein the sample characteristic data is used for reflecting whether the test sample data has interface parameters which need parameter verification and the test value does not pass the parameter verification;
and training a machine learning model by adopting the sample characteristic data to obtain the test result prediction model.
7. The method of claim 6, wherein a machine learning algorithm used to train the machine learning model comprises: a random forest algorithm, a support vector machine algorithm, or a convolutional neural network algorithm.
8. A test data processing apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain target test data of a system under test, where the target test data includes: a plurality of interface parameters of a target interface of the system to be tested and a test value of each interface parameter;
the extraction module is used for extracting the characteristics of the target test data to obtain characteristic data, wherein the characteristic data is used for reflecting whether the target test data has interface parameters which need to be subjected to parameter verification and the test values do not pass the parameter verification;
and the model prediction module is used for inputting the characteristic data into a test result prediction model to obtain an expected test result, and the expected test result is used for indicating whether the target test data is a normal case or not.
9. An electronic device, comprising: processor, memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the test data processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the test data processing method according to one of claims 1 to 7.
CN202111153287.8A 2021-09-29 2021-09-29 Test data processing method and device, electronic equipment and storage medium Pending CN114185765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111153287.8A CN114185765A (en) 2021-09-29 2021-09-29 Test data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114185765A true CN114185765A (en) 2022-03-15


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116303071A (en) * 2023-03-29 2023-06-23 上海聚水潭网络科技有限公司 Interface testing method and device, electronic equipment and storage medium
CN117471288A (en) * 2023-12-20 2024-01-30 俐玛光电科技(北京)有限公司 Test data analysis method and device, electronic equipment and storage medium
CN117471288B (en) * 2023-12-20 2024-03-15 俐玛光电科技(北京)有限公司 Automatic analysis method and device for test data, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US8799869B2 (en) System for ensuring comprehensiveness of requirements testing of software applications
KR101752251B1 (en) Method and device for identificating a file
US20180232204A1 (en) Intelligent data extraction
CN114185765A (en) Test data processing method and device, electronic equipment and storage medium
Gong et al. Evolutionary generation of test data for many paths coverage based on grouping
CN107862425B (en) Wind control data acquisition method, device and system and readable storage medium
CN109726108B (en) Front-end code testing method, device, system and medium based on analog data
CN106681921A (en) Method and device for achieving data parameterization
CN111754044B (en) Employee behavior auditing method, device, equipment and readable storage medium
JP6419081B2 (en) Transform generation system
CN110046155B (en) Method, device and equipment for updating feature database and determining data features
Bastazini et al. Untangling the tangled bank: a novel method for partitioning the effects of phylogenies and traits on ecological networks
CN112801773A (en) Enterprise risk early warning method, device, equipment and storage medium
CN109614327B (en) Method and apparatus for outputting information
CN115237724A (en) Data monitoring method, device, equipment and storage medium based on artificial intelligence
CN109684198B (en) Method, device, medium and electronic equipment for acquiring data to be tested
CN117370356A (en) Method and related device for mapping metadata by data standard
RU2532714C2 (en) Method of acquiring data when evaluating network resources and apparatus therefor
CN112541688A (en) Service data checking method and device, electronic equipment and computer storage medium
CN109597702B (en) Root cause analysis method, device, equipment and storage medium for message bus abnormity
CN115809796A (en) Project intelligent dispatching method and system based on user portrait
CN113780666B (en) Missing value prediction method and device and readable storage medium
CN115481026A (en) Test case generation method and device, computer equipment and storage medium
CN113077185B (en) Workload evaluation method, workload evaluation device, computer equipment and storage medium
Lakra et al. Improving software maintainability prediction using hyperparameter tuning of baseline machine learning algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination