CN110413510B - Data processing method, device and equipment

Data processing method, device and equipment

Info

Publication number
CN110413510B
CN110413510B (application CN201910576170.7A)
Authority
CN
China
Prior art keywords
terminal
fluency
target
parameters
application program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910576170.7A
Other languages
Chinese (zh)
Other versions
CN110413510A (en)
Inventor
马良
卢强
罗文柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910576170.7A priority Critical patent/CN110413510B/en
Publication of CN110413510A publication Critical patent/CN110413510A/en
Application granted granted Critical
Publication of CN110413510B publication Critical patent/CN110413510B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a data processing method, device and equipment. The method includes predicting the fluency of an application program running on each first terminal based on the performance parameters of each first terminal and a fluency prediction model, to obtain a predicted fluency parameter for each first terminal, where the fluency prediction model is obtained by performing data fitting on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals; and generating a fluency parameter data set of the full set of terminals. With the technical solution provided by the application, comprehensive automated test data can be obtained, test cost and error probability are reduced, and test efficiency is improved.

Description

Data processing method, device and equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a data processing method, apparatus, and device.
Background
With the rapid development of terminal applications, application programs on terminals are updated ever more frequently. When an application program needs to implement a new function, a function test of the new function is usually required.
In the prior art, a tester typically performs such a function test manually on a large number of terminal models purchased from the market. This way of function testing has at least the following drawbacks: the test cost is high, because a large number of terminal models must be purchased; manual testing carries a high risk of error; and because the test is time-consuming, the requirement for immediate release of the application program cannot be met.
Disclosure of Invention
The invention provides a data processing method, device and equipment that can obtain comprehensive automated test data, reduce test cost and error probability, and improve test efficiency.
In one aspect, the present invention provides a data processing method, including:
acquiring performance parameters of each first terminal in a first number of first terminals;
predicting the fluency of an application program running on each first terminal based on the performance parameters of each first terminal and a fluency prediction model, to obtain predicted fluency parameters of each first terminal; the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals; the actually measured fluency parameters are obtained by performing a fluency performance test on each second terminal based on a test case corresponding to the attribute information of the application program;
generating a fluency parameter data set of the full set of terminals based on the attribute information of the application program, the performance parameters and predicted fluency parameters of each first terminal, and the performance parameters and actually measured fluency parameters of each second terminal;
wherein the full set of terminals includes the first number of first terminals and the second number of second terminals.
In another aspect, the present invention further provides a data processing method, including:
receiving a trigger operation request for a target application program sent by a target terminal, wherein the trigger operation request includes performance parameters of the target terminal and attribute information of the target application program;
inputting the performance parameters of the target terminal and the attribute information of the target application program into a fluency prediction model to predict the fluency of the target application program running on the target terminal, and obtaining a target fluency parameter of the target terminal;
sending the target fluency parameter to the target terminal, so that the target terminal controls the running state of the target application program based on the target fluency parameter;
wherein the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals, and the actually measured fluency parameters are obtained by testing each second terminal based on a test case corresponding to the attribute information of the application program.
In another aspect, the present invention also provides a data processing apparatus, including:
the parameter acquisition module is used for acquiring the performance parameters of each first terminal in the first number of first terminals;
the parameter prediction module is used for predicting the fluency of the application program running on each first terminal based on the performance parameters of each first terminal and the fluency prediction model, to obtain predicted fluency parameters of each first terminal; the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals; the actually measured fluency parameters are obtained by testing each second terminal based on a test case corresponding to the attribute information of the application program;
the data set generation module is used for generating a fluency parameter data set of the full set of terminals based on the attribute information of the application program, the performance parameters and predicted fluency parameters of each first terminal, and the performance parameters and actually measured fluency parameters of each second terminal;
wherein the full set of terminals includes the first number of first terminals and the second number of second terminals.
In another aspect, the present invention also provides a data processing apparatus, including:
the request receiving module is used for receiving a trigger operation request for a target application program sent by a target terminal, wherein the trigger operation request includes performance parameters of the target terminal and attribute information of the target application program;
the parameter determining module is used for inputting the performance parameters of the target terminal and the attribute information of the target application program into a fluency prediction model to predict the fluency of the target application program running on the target terminal, and obtaining a target fluency parameter of the target terminal;
the parameter sending module is used for sending the target fluency parameter to the target terminal, so that the target terminal controls the running state of the target application program based on the target fluency parameter;
wherein the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals, and the actually measured fluency parameters are obtained by testing each second terminal based on a test case corresponding to the attribute information of the application program.
In another aspect, the present invention also provides a data processing apparatus, the apparatus comprising a processor and a memory, the memory storing at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by the processor to implement a data processing method as described in any of the above.
The data processing method, device and equipment provided by the application have at least the following technical effects:
According to the embodiments of the invention, a fluency prediction model is built from the actually measured fluency parameters and performance parameters of the second terminals, the fluency of each first terminal is predicted with the model to obtain predicted fluency parameters, and a fluency parameter data set of the full set of terminals is then generated. Comprehensive automated test data can therefore be obtained without purchasing a large number of terminal devices to be tested, which reduces test cost and error probability, improves test efficiency, and makes it possible to meet the requirement for immediate release of the application program. In addition, because the fluency prediction model is built from actually measured fluency parameters, it reflects the real fluency of terminals running the application program. Furthermore, the target fluency parameter corresponding to a target terminal can be queried from the fluency parameter data set, which reduces the amount of prediction computation, allows the fluency of the target terminal to be predicted quickly, and improves user experience.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a data processing method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart for establishing a fluency prediction model according to an embodiment of the present invention.
Fig. 3 is a schematic partial flow chart of another data processing method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an application scenario provided in an embodiment of the present invention.
Fig. 5 is a flowchart of another data processing method according to an embodiment of the present invention.
Fig. 6 is a block diagram of a data processing apparatus according to an embodiment of the present invention.
Fig. 7 is a block diagram of another data processing apparatus according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of the hardware structure of a device for implementing the method provided by the embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
A specific embodiment of a data processing method according to the present invention is described below. Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present invention. This specification provides the method steps as described in the embodiment or the flowchart, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiment is merely one of many possible execution orders and does not represent the only order of execution. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (for example, in a parallel-processor or multithreaded environment). As shown in fig. 1, the method may include:
S101: the performance parameters of each first terminal in the first number of first terminals are obtained.
In the embodiment of the invention, the first terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a digital assistant, a smart wearable device, or another type of device. In a specific embodiment, the first terminal may be a terminal device on which a certain application program (e.g., an application with a new function to be tested, or a target application program) is already installed or is about to be installed. The number of first terminals is the first number, which is preferably plural, and the performance parameters of the first terminals differ from one another. For example, if the first terminals are smart phones, the first number may be, but is not limited to, hundreds or even thousands.
In an embodiment of the present invention, the performance parameters of the first terminal may include one or more of the number of CPUs (Central Processing Unit), CPU frequency, GPU (Graphics Processing Unit) frequency, number of GPUs, memory size (running memory, i.e., RAM), terminal model, screen size, and screen resolution.
In the embodiment of the invention, when a new function of an application program needs to be tested, the performance parameters of the first terminals may be requested from each first terminal, or may be acquired from a server that stores the performance parameters of all first terminals.
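The following sketch illustrates, under assumed field names, how a first terminal's performance parameters might be represented and fetched from a parameter server; it is illustrative only and not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class PerformanceParams:
    terminal_model: str
    cpu_count: int
    cpu_freq_ghz: float
    gpu_count: int
    gpu_freq_mhz: float
    memory_gb: float
    screen_size_inch: float
    screen_resolution: str

def fetch_performance_params(terminal_ids, param_server):
    """Pull the stored performance parameters of each first terminal from a
    parameter server (modelled here as a plain dict keyed by terminal id)."""
    return {tid: PerformanceParams(**param_server[tid]) for tid in terminal_ids}

# Two hypothetical first terminals already registered on the parameter server.
param_server = {
    "phone-A": dict(terminal_model="A1", cpu_count=8, cpu_freq_ghz=2.4,
                    gpu_count=1, gpu_freq_mhz=700.0, memory_gb=8.0,
                    screen_size_inch=6.1, screen_resolution="2340x1080"),
    "phone-B": dict(terminal_model="B2", cpu_count=4, cpu_freq_ghz=1.8,
                    gpu_count=1, gpu_freq_mhz=550.0, memory_gb=4.0,
                    screen_size_inch=5.5, screen_resolution="1920x1080"),
}
first_terminal_params = fetch_performance_params(["phone-A", "phone-B"], param_server)
```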
S103: and predicting the fluency of the application program running on each first terminal based on the performance parameters and the fluency prediction model of each first terminal to obtain predicted fluency parameters of each first terminal.
In the embodiment of the invention, the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals. The second terminal may also be a smart phone, tablet computer, notebook computer, desktop computer, digital assistant, smart wearable device, or the like. The second terminals are preferably terminal devices whose performance parameters differ from those of the first terminals. Specifically, the second terminals may be a second number of existing terminal devices; the second number is preferably plural, and the performance parameters of the second terminals differ from one another. For example, if the second terminals are smart phones, the second number may be, but is not limited to, 50 to 300.
In an embodiment, the number of first terminals and the number of second terminals may be related. Illustratively, the second terminals are existing physical devices on which actual measurements are performed, whereas the first terminals are devices whose fluency is predicted and which may not be physically available. If, when testing an application program, the performance parameters of all user terminals on the market on which the application is installed can be obtained, then removing the performance parameters of all second terminals from this full set leaves the performance parameters of the first terminals. Of course, the first number and the second number may also be set according to a specific rule; for example, the second number may be set to 1/2 to 1/100 of the first number.
In the embodiment of the invention, the actually measured fluency parameter is obtained by performing a fluency performance test on each second terminal based on the test case corresponding to the attribute information of the application program. The actually measured fluency parameter is data that characterizes the fluency with which the second terminal runs the application program.
In the embodiment of the invention, the application program may be an application with a new function to be tested, or a target application program. By way of example, the application may be a WeChat mini program, WeChat, Weibo, QQ, a browser, a game, a shopping application, a video application, or another common program. The attribute information of the application program may include program type information (e.g., large game, mini game, short video, long video, chat, shopping, etc.), program identification information, and the like.
In an embodiment of the invention, the test case is associated with the attribute information of the application program. For example, if the application is of the mini-game type, the test case may be test program code or a test script corresponding to a game of that type. In particular, the test program code or test script may be fixed, i.e., all mini-game applications may share the same fixed test program code or test script. Of course, in other embodiments, the test program code or test script may be adapted based on the attribute information of other applications.
In some embodiments, the fluency prediction model is obtained by performing data fitting processing on the performance parameter of each second terminal in the second number of second terminals and its actually measured fluency parameter. The fluency prediction model contains a mapping among the performance parameters of a terminal, the fluency parameters of the terminal, and the attribute information of the application program. By inputting the performance parameters of a first terminal into the fluency prediction model, the fluency of the application program running on that first terminal is predicted, yielding a predicted fluency parameter that characterizes that fluency, as sketched below.
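A minimal sketch of the prediction step, assuming the trained model exposes a scikit-learn style predict() and that the feature mapping below is the one used during fitting; the feature choice is an assumption, not specified by the patent.

```python
def to_features(p):
    """Map a performance-parameter record to a numeric feature vector."""
    return [p.cpu_count, p.cpu_freq_ghz, p.gpu_count,
            p.gpu_freq_mhz, p.memory_gb, p.screen_size_inch]

def predict_fluency(model, first_terminal_params):
    """Return {terminal_id: predicted fluency parameter} for each first terminal."""
    ids = list(first_terminal_params)
    X = [to_features(first_terminal_params[tid]) for tid in ids]
    y_pred = model.predict(X)  # e.g. the maximum sprite count at the preset frame rate
    return {tid: float(y) for tid, y in zip(ids, y_pred)}
```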
S105: and generating a fluent parameter data set of the full quantity of terminals based on the attribute information of the application program, the performance parameters and the predicted fluent parameters of each first terminal, and the performance parameters and the actually measured fluent parameters of each second terminal.
In the embodiment of the invention, the full-volume terminal can comprise a first number of first terminals and a second number of second terminals. The full-volume terminal refers to all types of terminal devices on the market or all terminal devices on which the specific application program is installed. The fluency parameter data set is used for determining a target fluency parameter corresponding to the performance parameter of the target terminal and the attribute information of the target application program, so that the target terminal controls the running state of the target application program based on the target fluency parameter.
And generating a fluent parameter data set of the full quantity of terminals by combining the corresponding attribute information of the application program through the performance parameters and the actually measured fluent parameters of the second terminal and the performance parameters and the predicted fluent parameters of the second terminal. The generated fluent parameter dataset includes measured data and predicted data for all terminals. The fluency parameter data set can establish a database according to field types or data attributes, and also can establish an index storage table for the fluency parameter data set so as to facilitate searching. The fluent parameter dataset may be stored in at least one background server, in a local disk of the relevant node, or in the cloud.
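A sketch of assembling such a data set in memory: measured rows for the second terminals, predicted rows for the first terminals, keyed for fast lookup. The key layout and field names are assumptions made for illustration.

```python
def build_fluency_dataset(app_info, measured, predicted, perf_params):
    """Assemble one record per terminal of the full set.

    measured / predicted : {terminal_id: fluency parameter} for second / first terminals
    perf_params          : {terminal_id: dict of performance parameters}
    """
    dataset = {}
    for tid, fluency in {**measured, **predicted}.items():
        key = (app_info["app_type"], perf_params[tid]["terminal_model"])
        dataset[key] = {
            "fluency_param": fluency,
            "source": "measured" if tid in measured else "predicted",
            "performance": perf_params[tid],
        }
    return dataset
```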
According to the embodiment of the invention, the fluency prediction model is built from the actually measured fluency parameters and performance parameters of the second terminals, the fluency of each first terminal is predicted with the model to obtain predicted fluency parameters, and a fluency parameter data set of the full set of terminals is then generated. Comprehensive automated test data can therefore be obtained without purchasing a large number of terminal devices to be tested, which reduces test cost and error probability, improves test efficiency, and makes it possible to meet the requirement for immediate release of the application program. In addition, because the fluency prediction model is built from actually measured fluency parameters, it reflects the real fluency of terminals running the application program. Furthermore, the target fluency parameter corresponding to a target terminal can be queried from the fluency parameter data set, which reduces the amount of prediction computation, allows the fluency result for the target terminal to be fed back quickly, and improves user experience.
In an embodiment, the data processing method may further include establishing the fluency prediction model. Fig. 2 is a schematic flow chart for establishing a fluency prediction model according to an embodiment of the present invention. As shown in fig. 2, establishing the fluency prediction model may be implemented through the following steps:
S201: Acquiring the performance parameters of each second terminal in the second number of second terminals.
In the embodiment of the present invention, the second terminals may be a second number of existing terminal devices; the second number is preferably plural, and the performance parameters of the second terminals differ from one another, i.e., the second number of second terminals corresponds to a second number of sets of performance parameters.
In practical applications, the second terminal may be a smart phone, and the second number may be, but is not limited to, 50 to 300, for example 100, 150, or 200.
S203: and acquiring a test case corresponding to the attribute information of the application program.
The test cases are associated with the attribute information of the application programs to match the tests of different types of application programs on the terminal. Test cases are test program code or test scripts written by a tester and matching test events of an application (to be tested). For example, the test event may include at least one of an application scenario of the application program, an operating environment of the application program, and relevant configuration information (such as a memory ratio, a storage location, etc.) of the application program, and a corresponding test case is generated by parsing the test event and converting the test event.
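A small illustrative mapping from program type to a fixed test script; the script names are hypothetical and only show how attribute information could select a test case.

```python
TEST_CASES = {
    "mini_game": "sprite_rendering_test.js",
    "short_video": "video_playback_test.js",
    "chat": "message_scroll_test.js",
}

def select_test_case(app_info):
    """Pick the fixed test script that matches the application's program type."""
    return TEST_CASES.get(app_info["app_type"], "generic_rendering_test.js")
```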
In some embodiments, if the application is a mini game and the fluency of the application running on the terminal is to be tested, the test case may be test program code or a test script matching the mini game. Because running the application essentially amounts to drawing the application's pictures on the terminal display, the fluency with which the application runs on the terminal can be reflected by testing how smoothly the application's pictures are rendered on the terminal page. In one embodiment, if the application type is the same, the test program code or test script may be fixed; that is, all mini-game applications may share the same fixed test program code or test script, such as a test script for mini-game picture rendering. Using a fixed test case to perform the fluency performance test on applications of the same type reduces the time spent writing and developing test cases.
In the embodiment of the invention, the test case corresponding to the attribute information of the application program can be obtained from a local disk, and the test case can also be obtained from other servers or cloud.
S205: and testing the fluency of the application program running on each second terminal by using the test case to obtain the actually measured fluency parameters of each second terminal.
In the embodiment of the invention, after the communication connection between the second terminals and the testing equipment is established, the testing case is operated on each second terminal to perform the fluency performance test, so that the actually measured fluency parameter of each second terminal is obtained. The measured fluency parameter is data used for representing fluency of the second terminal running the application program.
In a specific embodiment, the test case may include information that gradually increases the number of objects and/or the object complexity level of the objects drawn on the terminal interface.
The object may be a sprite (e.g., a cartoon animal such as a bear or a puppy), a person, a plant, an animal, an article, a picture, a photograph, or the like. The object complexity level describes how complex an object is to draw or render. By way of example, the complexity level of a three-dimensional object is higher than that of a two-dimensional object; the more vertices or lines the model corresponding to an object has, the more complex it is to draw or render, and the higher its object complexity level.
Correspondingly, the testing the fluency of the application program running on each second terminal by using the test case to obtain the actually measured fluency parameter of each second terminal includes:
S2051: and monitoring the actual frame rate in the process of drawing the objects by each second terminal and the corresponding object quantity and/or object complexity level of the drawn objects.
The Frame rate (FPS) is the number of display frames per Second. In the process of drawing objects, as the number of objects and/or the complexity level of the objects of each second terminal are gradually increased, when the objects are drawn in the current frame, all the objects drawn in the history frame move continuously, so that as the time for drawing the objects is longer and longer, the number of objects and/or the complexity level of the objects of the drawn objects is higher and higher, and the current picture frame rate is changed along with the drawing time. By monitoring the current actual frame rate of the drawn objects and the corresponding number of drawn objects and/or object complexity level, relevant data can be obtained, and thus a linear relationship between the actual frame rate of each second terminal and the number of objects (and/or object complexity level) is established.
In practical application, in the case that the drawn object is a sprite, if the test case includes information that gradually increases the number of sprites of the drawn sprite on the terminal interface. At this time, a linear relationship between the actual frame rate of each second terminal and the number of the fairings is then established by acquiring the current actual frame rate of the drawn fairings and the corresponding number of the fairings drawn. If the test case includes information on the number of the fairings and the complexity level of the fairings drawn gradually increase on the terminal interface. At this time, the current actual frame rate of the drawing of the eidolon and the corresponding number of the drawn eidolon and the corresponding complexity level of the eidolon are obtained, then the number of the eidolon and the complexity level of the eidolon are subjected to weighting processing (for example, weighted summation and the like) after conversion to obtain test data, and then a linear relation between the actual frame rate of each second terminal and the test data is established.
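A minimal sketch, under assumed interfaces, of this monitoring step: the harness samples (sprite_count, fps) pairs while the test case keeps adding sprites, and a linear relation fps ≈ slope * count + intercept is fitted with numpy. The sample values are hypothetical.

```python
import numpy as np

def fit_fps_curve(samples):
    """samples: (sprite_count, measured_fps) pairs collected while the test case
    gradually increases the number of sprites drawn on the terminal interface."""
    counts = np.array([c for c, _ in samples], dtype=float)
    fps = np.array([f for _, f in samples], dtype=float)
    slope, intercept = np.polyfit(counts, fps, 1)  # fps ~ slope * count + intercept
    return slope, intercept

# Hypothetical samples from one second terminal.
samples = [(1000, 60.0), (3000, 60.0), (5000, 59.5), (6000, 55.0), (7000, 50.2)]
slope, intercept = fit_fps_curve(samples)
```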
S2053: and determining the maximum object number and/or the highest object complexity level of the drawn objects when the actual frame rate of each second terminal is equal to a preset frame rate.
In the embodiment of the invention, after the established linear relation between the actual frame rate of each second terminal and the maximum object number and/or the maximum object complexity level, the maximum object number and/or the maximum object complexity level under the preset frame rate can be determined according to the linear relation.
In some embodiments, the preset frame rate may be the frame rate at which the terminal draws objects without stuttering.
For example, if the application is a game, the preset frame rate is 60 frames per second; if the application is a video application, the preset frame rate is 30 frames per second. In that case, the largest number of objects drawn at an actual frame rate of 60 or 30 FPS is the maximum number of objects, and/or the most complex object drawn at an actual frame rate of 60 or 30 FPS defines the highest object complexity level.
By way of example, for a game application, if the FPS stays at 60 when 5,000 sprites are drawn but drops to 55 when 5,100 sprites are drawn, the maximum number of objects at 60 FPS is determined to be 5,000 sprites. Of course, the maximum number of objects at 60 FPS may also be calculated from the linear relationship between the frame rate and the number of drawn sprites.
It should be noted that the preset frame rate is not limited to 60 or 30 FPS. As terminal displays or devices improve, the preset frame rate can be adjusted adaptively, for example to a value greater than 60 or 30 FPS.
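Continuing the previous sketch (slope and intercept from the fitted line), the maximum object count at the preset frame rate can be estimated by inverting that relation. The preset values follow the game/video example above; everything else is an illustrative assumption.

```python
PRESET_FPS = {"game": 60.0, "video": 30.0}

def max_objects_at(preset_fps, slope, intercept):
    """Solve preset_fps = slope * count + intercept for the object count."""
    if slope >= 0:  # frame rate never dropped within the sampled range
        return float("inf")
    return int((preset_fps - intercept) / slope)

max_sprites = max_objects_at(PRESET_FPS["game"], slope, intercept)  # slope/intercept from the previous sketch
```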
S2055: and taking the maximum object number and/or the highest object complexity level as the measured fluency parameter of each second terminal.
In some specific embodiments, the maximum number of objects or the highest object complexity level is used as the measured fluency parameter of each second terminal; or the maximum object number or the highest object complexity level is weighted after conversion to obtain test data, and then the test data is used as the actual measurement fluency parameter of each second terminal.
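A sketch of the optional weighting step: both quantities are normalised to comparable scales and combined by a weighted sum. The weights and scales are assumptions, not values from the patent.

```python
def measured_fluency(max_objects, max_complexity_level,
                     w_count=0.7, w_complexity=0.3,
                     count_scale=10_000, level_scale=10):
    """Normalise both quantities to comparable scales and take a weighted sum."""
    return (w_count * max_objects / count_scale
            + w_complexity * max_complexity_level / level_scale)
```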
S207: and adopting a machine learning algorithm to perform data fitting processing on the performance parameters of each second terminal and the corresponding actually measured fluent parameters of each second terminal to obtain the fluent prediction model.
In an embodiment, the machine learning algorithm may include a decision tree algorithm, a neural network algorithm, or the like. The decision tree algorithm is a method of approximating discrete function values. The decision tree algorithm may include a gradient-lifted decision tree (Gradient Boosting Decision Tree, GBDT) algorithm, a random forest algorithm, or the like.
In an embodiment of the present invention, the machine learning algorithm is a GBDT algorithm. The GBDT algorithm is an iterative decision tree algorithm consisting of a number of decision trees, the conclusions of all of which are accumulated as the final prediction result. The GBDT algorithm adopts a plurality of learners to learn the training set respectively, the final model is the combination of the learners, and the two most main methods are a Bagging algorithm and a Boosting algorithm. Of course, in other embodiments, the machine learning algorithm may also include other learning algorithms.
In the embodiment of the present invention, the performing, by using a machine learning algorithm, a data fitting process on the performance parameter of each second terminal and the corresponding actually measured fluent parameter of each second terminal to obtain the fluent prediction model may include:
s2071: and sequencing a second number of second terminals based on the performance parameters of each second terminal.
The performance parameters of the second terminal may include one or more of a number of CPUs, a CPU frequency, a GPU frequency, a number of GPUs, a memory size, a terminal model, a screen size, and a screen resolution.
By way of example, the performance parameters of the second terminal include the number of CPUs (e.g., single core, dual core, quad core, etc.), CPU frequency (e.g., 1.4Ghz, 1.7Ghz, 2Ghz, etc.), memory size (e.g., 2G, 4G, 8G, etc.). Each performance score is carried out on each second terminal according to the performance parameters of the second terminal, for example, a four-core CPU is marked as 4 points, a two-core CPU is marked as 2 points, a single-core CPU is marked as 0 point and the like; and then sorting each second terminal according to the score sum of each performance to obtain a sorted second terminal set.
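An illustrative scoring rule matching the example above (quad-core = 4 points, dual-core = 2, single-core = 0); the remaining per-parameter scores are assumptions made for the sketch.

```python
def performance_score(p):
    """p: dict of performance parameters for one second terminal."""
    cpu_score = {1: 0, 2: 2, 4: 4}.get(p["cpu_count"], 6)  # assumed score for 6/8 cores and above
    freq_score = p["cpu_freq_ghz"]                          # e.g. 1.4, 1.7, 2.0 ...
    mem_score = p["memory_gb"] / 2                          # 2 GB -> 1, 4 GB -> 2, 8 GB -> 4
    return cpu_score + freq_score + mem_score

def sort_second_terminals(terminals):
    """terminals: {terminal_id: performance-parameter dict}; ids sorted by total score, descending."""
    return sorted(terminals, key=lambda tid: performance_score(terminals[tid]), reverse=True)
```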
S2073: grouping the second terminals with the second number after sorting according to a preset grouping rule to obtain a third number of terminal sets.
The preset grouping rule may be set and adjusted according to practical situations, and the preset grouping rule may include the number of groupings (for example, may be divided into 3 groups, 4 groups or multiple groups), the proportion of each group (for example, in the proportion of 3:4:3), the score range of each group, and the like. Grouping the second terminals in the second number after sequencing according to a preset grouping rule to obtain a third number of terminal sets. For example, the second number of second terminals may be divided into groups of three different configuration levels, i.e., high, medium, and low, to obtain 3 terminal sets.
S2075: the second terminals in each terminal set are divided into a training set and a testing set.
In an embodiment, the first terminal in each terminal set may be divided according to a certain division ratio (for example, 85:15, 90:10, 98:2, etc.), so as to obtain a training set and a test set respectively.
S2077: and carrying out fluent prediction training on a preset machine learning model based on the training set and the testing set to obtain the fluent prediction model.
The training set is used for training a preset machine learning model, and the testing set is used for testing the trained preset machine learning model (such as GBDT model) so as to optimize model parameters; and taking the optimized model as the fluency prediction model.
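A sketch of S2071 to S2077 under stated assumptions: the score-sorted terminals are cut into high/medium/low groups in a 3:4:3 ratio, each group is split into training and test sets, and one GBDT regressor is fitted per group with scikit-learn. The ratio, hyper-parameters, and the assumption that each group has enough terminals to split are illustrative choices, not values from the patent.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def group_terminals(sorted_ids, ratios=(0.3, 0.4, 0.3)):
    """Cut the score-sorted terminal ids into high/medium/low configuration groups."""
    n = len(sorted_ids)
    a, b = int(n * ratios[0]), int(n * (ratios[0] + ratios[1]))
    return {"high": sorted_ids[:a], "medium": sorted_ids[a:b], "low": sorted_ids[b:]}

def train_group_models(groups, features, measured, test_ratio=0.15):
    """features / measured: {terminal_id: feature vector / measured fluency parameter}."""
    models = {}
    for name, ids in groups.items():
        X = [features[tid] for tid in ids]
        y = [measured[tid] for tid in ids]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_ratio)
        model = GradientBoostingRegressor()
        model.fit(X_tr, y_tr)
        models[name] = model
        print(name, "held-out R^2:", model.score(X_te, y_te))
    return models
```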
The embodiment of the invention trains the fluency prediction model by first grouping the plurality of second terminals and then training on each resulting terminal set. Because the terminals are divided into groups of different configuration levels, fine-grained data fitting can be performed on each group, which improves the accuracy of the data fitting and of the resulting fluency prediction model.
Fig. 3 is a schematic partial flow chart of another data processing method according to an embodiment of the present invention. Specifically, as shown in fig. 3, after the fluency parameter data set is generated, the method may further include:
S301: Receiving a trigger operation request for a target application program sent by a target terminal, wherein the trigger operation request includes performance parameters of the target terminal and attribute information of the target application program.
In the embodiment of the invention, the user performs a trigger operation event (such as clicking, sliding, or selecting) on the target application program at the target terminal, which generates the trigger operation request. The trigger operation request may include a request to start the target application, a request to load a new function, and the like. The performance parameters of the target terminal may include one or more of the number of CPUs, CPU frequency, GPU frequency, number of GPUs, memory size, terminal model, screen size, and screen resolution. The attribute information of the target application may include program type information (e.g., large game, mini game, short video, long video, chat, shopping, etc.), program identification information, and the like.
S303: and inquiring target fluency parameters corresponding to the performance parameters of the target terminal and the attribute information of the target application program from the fluency parameter data set.
And inquiring corresponding target fluency parameters from the fluency parameter data set according to the performance parameters of the target terminal and the attribute information of the target application program. And if the query has a corresponding result, returning the target fluency parameter of the query to the target terminal. If no corresponding result is queried, the prompt message that the query result is "null" or "no exists" can be directly returned; the performance parameters of the target terminal and the attribute information of the target application program can also be input into a fluent prediction model for prediction so as to output corresponding target fluent parameters.
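A lookup-with-fallback sketch: the key layout mirrors the earlier dataset sketch and is an assumption, and `to_features` is a caller-supplied, hypothetical feature mapper. On a miss the request falls back to the fluency prediction model instead of returning "null".

```python
def query_target_fluency(dataset, model, app_info, terminal_params, to_features):
    """Look the target terminal up in the fluency parameter data set; on a miss,
    fall back to the fluency prediction model."""
    key = (app_info["app_type"], terminal_params["terminal_model"])
    entry = dataset.get(key)
    if entry is not None:
        return entry["fluency_param"]
    return float(model.predict([to_features(terminal_params)])[0])
```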
S305: and sending the inquired target fluency parameters to the target terminal so that the target terminal controls the running state of the target application program based on the target fluency parameters.
In an embodiment of the present invention, the operation state includes a start-up state.
Correspondingly, the sending the queried target fluency parameter to the target terminal, so that the target terminal controls the running state of the target application program based on the target fluency parameter, includes:
And sending the inquired target fluency parameters to the target terminal so that the target terminal judges whether the target application program meets the starting condition or not based on the target fluency parameters, and the target terminal controls the starting state of the target application program according to the judging result.
In a specific embodiment, if the target application is a game, the target terminal obtains the target fluency parameter (for example, the maximum number of the demos is 4500 under 60 FPS), and the target terminal determines that the target fluency parameter does not meet the starting condition (for example, the maximum number of the demos under 60FPS is not less than 5500), then the target terminal controls the target application to not start according to the determination result that the starting condition is not met, and sends a prompt, for example, "the terminal device cannot open the program", to the user.
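A client-side sketch of this check, reusing the example figures above; the threshold value and prompt text are taken from the example and are not prescriptive.

```python
START_THRESHOLD = 5500  # minimum sprite count at 60 FPS assumed to be required by this game

def control_startup(target_fluency_param):
    """Return (may_start, prompt) based on the target fluency parameter."""
    if target_fluency_param < START_THRESHOLD:  # e.g. 4500 < 5500 in the example above
        return False, "the terminal device cannot open the program"
    return True, ""
```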
Fig. 4 is a schematic diagram of an application scenario provided in an embodiment of the present invention.
As shown in fig. 4, the WeChat app on the target terminal includes a plurality of mini programs, such as "XX Fight the Landlord", "XX Jump Jump", "XX Tank Battle", and "XX Reading Ranking" in fig. 4. Suppose the "Jump Jump" mini game is a newly added function of the WeChat mini-game program, and the user wants to try the "XX Jump Jump" mini game and clicks its icon to enter a landing page showing the "XX Jump Jump" graphic. The target terminal then sends a trigger operation request to the background server, carrying the attribute information of the mini-game program and the performance parameters of the target terminal, and receives the query result of the target fluency parameter fed back by the background server. If the target terminal detects that the target fluency parameter does not meet the condition for starting the "XX Jump Jump" mini game (the target fluency parameter is below the start threshold), the mini game is not started and a prompt page stating "the current device does not support this mini game for the time being" is displayed.
According to the embodiment of the invention, the target fluency parameter corresponding to the target terminal can be looked up quickly in the fluency parameter data set of the full set of terminals, and the target terminal can then pre-judge, from the target fluency parameter, how fluently the target application program would run. If the fluency would be poor, stuttering can be anticipated, the target application program is judged not to meet the start condition, and a corresponding prompt that the target application program cannot be started is given. This reduces the amount of fluency prediction computation, allows the fluency of the target application program to be predicted quickly, avoids stuttering or other poor experiences caused by opening the target application program, and improves user experience.
Another embodiment of a data processing method according to the present invention is described below. Fig. 5 is a schematic flow chart of another data processing method according to an embodiment of the present invention. The method steps are provided as described in the embodiment or the flowchart, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiment is merely one of many possible execution orders and does not represent the only order of execution. When implemented by a system, server product, or terminal in practice, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (for example, in a parallel-processor or multithreaded environment). As shown in fig. 5, the method may include:
S501: Receiving a trigger operation request for a target application program sent by a target terminal, wherein the trigger operation request includes performance parameters of the target terminal and attribute information of the target application program.
S503: Inputting the performance parameters of the target terminal and the attribute information of the target application program into the fluency prediction model to predict the fluency of the target application program running on the target terminal, and obtaining the target fluency parameter of the target terminal.
The fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals, and the actually measured fluency parameters are obtained by testing each second terminal based on the test case corresponding to the attribute information of the application program.
S505: Sending the target fluency parameter to the target terminal, so that the target terminal controls the running state of the target application program based on the target fluency parameter.
The specific content and details of steps S501 to S505 in the embodiment of the present invention can be referred to the above embodiment, and in order to avoid repetition, the details are not repeated here.
In the embodiment of the invention, because the actually measured fluency parameters are obtained by running the test case on the second terminals, they more truly reflect the fluency with which a terminal runs the application program. Because the fluency prediction model is built from the actually measured fluency parameters and the performance parameters of the second terminals, the fluency of the target terminal is predicted with this model and the target fluency parameter is obtained directly. Therefore, a large number of terminal devices to be tested need not be purchased, which greatly reduces the test cost; test errors caused by manual testing are reduced, so the test results are more accurate; and the testing time is shortened, so the requirement for immediate release of the application program can be met.
The following are examples of the apparatus of the present invention that may be used to perform the method embodiments of the present invention. For details and advantages not disclosed in the embodiments of the apparatus of the present invention, please refer to the embodiments of the method of the present invention.
Referring to fig. 6, a block diagram of a data processing apparatus according to an embodiment of the present invention is shown. The apparatus has functions for implementing the above method embodiments; the functions may be implemented by hardware, or by hardware executing corresponding software. The data processing device 60 may include:
A parameter obtaining module 61, configured to obtain a performance parameter of each of the first number of first terminals;
the parameter prediction module 62 is configured to predict, based on the performance parameters of each first terminal and the fluency prediction model, the fluency of the application running on each first terminal, to obtain predicted fluency parameters of each first terminal; the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals; the actually measured fluency parameters are obtained by testing each second terminal based on a test case corresponding to the attribute information of the application program;
a data set generating module 63, configured to generate a fluency parameter data set of the full set of terminals based on the attribute information of the application program, the performance parameters of each first terminal and its predicted fluency parameters, and the performance parameters of each second terminal and its actually measured fluency parameters;
wherein the full set of terminals includes the first number of first terminals and the second number of second terminals.
In some embodiments, the apparatus 60 may further include a model building module 64, the model building module 64 including:
a first obtaining unit 641, configured to obtain the performance parameters of each second terminal in the second number of second terminals;
a second obtaining unit 642 configured to obtain a test case corresponding to attribute information of the application;
a parameter determining unit 643, configured to test the fluency of the application running on each second terminal by using the test case, to obtain an actually measured fluency parameter of each second terminal;
the model building unit 644 is configured to perform data fitting processing on the performance parameters of each second terminal and its corresponding actually measured fluency parameters by using a machine learning algorithm, to obtain the fluency prediction model.
In an embodiment, the parameter determining unit 643 may include:
a detection subunit, configured to monitor an actual frame rate and a corresponding number of objects and/or a corresponding complexity level of the objects in the process of drawing the objects by each second terminal;
a first determining subunit, configured to determine a maximum number of objects and/or a maximum object complexity level of the drawn objects when the actual frame rate of each second terminal is equal to a preset frame rate;
and the second determining subunit is used for taking the maximum object number and/or the highest object complexity level as the measured fluency parameter of each second terminal.
In an embodiment, the model building unit 644 may include:
a sorting subunit, configured to sort a second number of second terminals based on the performance parameter of each second terminal;
grouping subunit, configured to group the second number of second terminals after sorting according to a preset grouping rule, to obtain a third number of terminal sets;
the dividing subunit is used for dividing the second terminal in each terminal set into a training set and a testing set;
and the model building subunit is used for performing fluency prediction training on a preset machine learning model based on the training set and the test set, to obtain the fluency prediction model.
In one embodiment, the apparatus 60 may further include:
a request receiving module 65, configured to receive a trigger operation request sent by a target terminal for a target application program, where the trigger operation request includes a performance parameter of the target terminal and attribute information of the target application program;
a query module 66, configured to query, from the fluency parameter data set, the target fluency parameter corresponding to the performance parameters of the target terminal and the attribute information of the target application;
the sending module 67 is configured to send the queried target fluency parameter to the target terminal, so that the target terminal controls the running state of the target application program based on the target fluency parameter.
In one embodiment, the operating state includes a start-up state. Accordingly, the sending module may be configured to:
and sending the inquired target fluency parameters to the target terminal so that the target terminal judges whether the target application program meets the starting condition or not based on the target fluency parameters, and the target terminal controls the starting state of the target application program according to the judging result.
Referring to fig. 7, a block diagram of another data processing apparatus according to an embodiment of the present invention is shown. The apparatus has functions for implementing the above method embodiments; the functions may be implemented by hardware, or by hardware executing corresponding software. The data processing device 70 may include:
a request receiving module 71, configured to receive a trigger operation request sent by a target terminal for a target application program, where the trigger operation request includes a performance parameter of the target terminal and attribute information of the target application program;
the parameter determining module 72 is configured to input the performance parameters of the target terminal and the attribute information of the target application program into the fluency prediction model to predict the fluency of the target application program running on the target terminal, thereby obtaining the target fluency parameter of the target terminal;
a parameter sending module 73, configured to send the target fluency parameter to the target terminal, so that the target terminal controls the running state of the target application program based on the target fluency parameter;
wherein the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals, and the actually measured fluency parameters are obtained by testing each second terminal based on the test case corresponding to the attribute information of the application program.
An embodiment of the present invention provides a data processing apparatus, where the apparatus includes a processor and a memory, where the memory stores at least one instruction, at least one section of program, a code set, or an instruction set, and the at least one instruction, the at least one section of program, the code set, or the instruction set is loaded and executed by the processor to implement a data processing method as provided in the foregoing method embodiment.
The memory may be used to store software programs and modules; the processor executes the software programs and modules stored in the memory to perform various functional applications and data processing. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by functions, and the like, and the data storage area may store data created according to the use of the device, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
Further, fig. 8 shows a schematic hardware structure of a device for implementing the method provided by the embodiment of the present invention. The device may be a computer terminal, a mobile terminal, or another device, and may also participate in forming or containing the apparatus provided by the embodiment of the present invention. As shown in fig. 8, the computer terminal 10 may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, the computer terminal 10 may further include: a display, an input/output interface (I/O interface), a universal serial bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 8 is merely illustrative and does not limit the structure of the electronic device described above. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 8, or have a different configuration from that shown in fig. 8.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as "data processing circuits". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or be incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the present application, the data processing circuit serves as a kind of processor control (for example, selection of the variable-resistance terminal path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, and the processor 102 performs various functional applications and data processing, i.e., implements the data processing method according to the embodiments of the present invention, by executing the software programs and modules stored in the memory 104. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, and such remote memory may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is configured to receive or transmit data via a network. Specific examples of the above network may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network interface controller (NIC) that can connect to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 may be a radio frequency (RF) module configured to communicate with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted that the order of the foregoing embodiments of the present invention is for description only and does not imply that one embodiment is better than another. The foregoing describes specific embodiments of this specification; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device and server embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
Those skilled in the art will understand that all or part of the steps for implementing the above embodiments may be completed by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the scope of protection of the present invention.

Claims (14)

1. A method of data processing, comprising:
acquiring performance parameters of each first terminal in a first number of first terminals;
predicting the fluency of an application program running on each first terminal based on the performance parameters of each first terminal and a fluency prediction model, to obtain a predicted fluency parameter of each first terminal; wherein the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals; and the actually measured fluency parameters are obtained by performing a fluency performance test on each second terminal based on a test case corresponding to attribute information of the application program;
generating a fluency parameter data set of a full set of terminals based on the attribute information of the application program, the performance parameters and the predicted fluency parameter of each first terminal, and the performance parameters and the actually measured fluency parameters of each second terminal;
wherein the full set of terminals includes the first number of first terminals and the second number of second terminals.
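Purely as an illustration of the data-set generation step of claim 1 (not the claimed implementation), the sketch below merges measured entries for the second terminals with predicted entries for the first terminals into one fluency parameter data set; the field names and data layout are hypothetical.

    # Hypothetical sketch of building the full-terminal fluency parameter data set.
    # "app_info", "perf", "fluency", and "source" are illustrative field names.

    def build_fluency_dataset(app_info, first_terminals, predicted,
                              second_terminals, measured):
        """Merge predicted (first terminals) and measured (second terminals) entries."""
        dataset = []
        for perf, fluency in zip(first_terminals, predicted):
            dataset.append({"app_info": app_info, "perf": perf,
                            "fluency": fluency, "source": "predicted"})
        for perf, fluency in zip(second_terminals, measured):
            dataset.append({"app_info": app_info, "perf": perf,
                            "fluency": fluency, "source": "measured"})
        return dataset

    full_dataset = build_fluency_dataset(
        app_info={"name": "demo-game", "version": "1.0"},
        first_terminals=[(2.4, 6, 4.0)], predicted=[300],
        second_terminals=[(1.8, 4, 3.0), (2.8, 8, 6.0)], measured=[120, 410],
    )
    print(len(full_dataset))  # 3 entries: one predicted, two measured

Keeping a "source" field is optional, but it makes it easy to tell real measurements apart from model predictions when the data set is later queried.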
2. The method of claim 1, wherein the fluency prediction model is obtained by:
acquiring performance parameters of each second terminal in the second number of second terminals;
acquiring a test case corresponding to the attribute information of the application program;
testing the fluency of the application program running on each second terminal by using the test case to obtain the actually measured fluency parameter of each second terminal;
and performing, by using a machine learning algorithm, data fitting processing on the performance parameters of each second terminal and the corresponding actually measured fluency parameters of each second terminal to obtain the fluency prediction model.
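As an illustrative reading of "acquiring a test case corresponding to the attribute information of the application program" in claim 2, the sketch below keys test cases by a hypothetical application-type attribute; the mapping, attribute names, and test-case fields are all assumptions.

    # Hypothetical sketch: choose a fluency test case from the application's
    # attribute information. The table below is illustrative, not from the patent.
    TEST_CASES = {
        "3d-game": {"draw": "models",  "start_objects": 10, "step": 10, "preset_fps": 30},
        "2d-game": {"draw": "sprites", "start_objects": 50, "step": 50, "preset_fps": 60},
        "reader":  {"draw": "pages",   "start_objects": 1,  "step": 1,  "preset_fps": 60},
    }

    def select_test_case(app_attributes):
        """Pick the test case matching the application's attribute information."""
        return TEST_CASES[app_attributes["type"]]

    case = select_test_case({"name": "demo-game", "type": "3d-game"})
    print(case["preset_fps"])  # 30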
3. The method according to claim 2, wherein the test case includes information for gradually increasing the number of objects and/or the complexity level of the objects drawn on the terminal interface;
correspondingly, the testing the fluency of the application program running on each second terminal by using the test case to obtain the actually measured fluency parameter of each second terminal includes:
monitoring the actual frame rate of each second terminal in the process of drawing objects, and the corresponding object number and/or object complexity level of the drawn objects;
determining the maximum object number and/or the highest object complexity level of the drawn objects when the actual frame rate of each second terminal is equal to a preset frame rate;
and taking the maximum object number and/or the highest object complexity level as the actually measured fluency parameter of each second terminal.
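A minimal sketch of the ramp-up test of claim 3, assuming the fluency parameter is the number of drawn objects and the preset frame rate is 30 fps; measure_frame_rate is a stand-in for real on-device frame-rate monitoring while the test case drives the drawing.

    # Hypothetical sketch: keep increasing the number of drawn objects, watch the
    # actual frame rate, and take the largest object count at which the frame rate
    # still reaches the preset value.

    def measure_frame_rate(object_count):
        # Stand-in: pretend the frame rate drops as more objects are drawn.
        return max(10.0, 60.0 - 0.1 * object_count)

    def measured_fluency_parameter(preset_fps=30.0, start=10, step=10, limit=5000):
        best = 0
        for count in range(start, limit + 1, step):
            if measure_frame_rate(count) >= preset_fps:
                best = count        # still fluent at this object count
            else:
                break               # frame rate fell below the preset value
        return best                 # maximum object count at the preset frame rate

    print(measured_fluency_parameter())  # 300 with the stand-in frame-rate model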
4. The method of claim 2, wherein the performing, by using a machine learning algorithm, data fitting processing on the performance parameters of each second terminal and the corresponding actually measured fluency parameters of each second terminal to obtain the fluency prediction model includes:
sorting a second number of second terminals based on the performance parameters of each second terminal;
grouping the sorted second number of second terminals according to a preset grouping rule to obtain a third number of terminal sets;
dividing the second terminals in each terminal set into a training set and a testing set;
and performing fluency prediction training on a preset machine learning model based on the training set and the testing set to obtain the fluency prediction model.
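The sketch below illustrates, under stated assumptions, the sort-group-split-train sequence of claim 4: the second terminals are scored by a simple weighted combination of their performance parameters, grouped in pairs, split into training and testing samples, and fitted with a linear model. The scoring rule, group size, and choice of regression are assumptions rather than the claimed grouping rule or machine learning model.

    # Hypothetical sketch of claim 4: sort, group, split, then fit.
    from sklearn.linear_model import LinearRegression

    terminals = [  # (performance parameters, actually measured fluency parameter)
        ([1.8, 4, 3.0], 120), ([2.0, 4, 3.0], 150), ([2.2, 8, 4.0], 260),
        ([2.4, 8, 4.0], 290), ([2.8, 8, 6.0], 410), ([3.0, 8, 8.0], 470),
    ]

    # 1. Sort by a simple performance score (an assumed weighting of the parameters).
    terminals.sort(key=lambda t: t[0][0] * t[0][1] + t[0][2])

    # 2. Group the sorted terminals (here: pairs, standing in for the grouping rule).
    groups = [terminals[i:i + 2] for i in range(0, len(terminals), 2)]

    # 3. Split each group: the first terminal trains, the second terminal tests.
    train = [g[0] for g in groups]
    test = [g[1] for g in groups if len(g) > 1]

    # 4. Fit on the training split, then check the fit on the held-out split.
    model = LinearRegression().fit([p for p, _ in train], [f for _, f in train])
    score = model.score([p for p, _ in test], [f for _, f in test])
    print(f"R^2 on held-out terminals: {score:.2f}")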
5. The method of any one of claims 1-4, further comprising:
receiving a trigger operation request for a target application program sent by a target terminal, wherein the trigger operation request includes performance parameters of the target terminal and attribute information of the target application program;
inquiring target fluency parameters corresponding to the performance parameters of the target terminal and the attribute information of the target application program from the fluency parameter data set;
and sending the inquired target fluency parameters to the target terminal so that the target terminal controls the running state of the target application program based on the target fluency parameters.
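For the query step of claim 5, a minimal server-side sketch is shown below; the exact-match dictionary lookup and the key layout are assumptions, and a real service could instead query a database or pick the nearest entry by performance parameters.

    # Hypothetical sketch: look up the target fluency parameter for
    # (application attributes, terminal performance parameters) in the data set
    # generated earlier, and return it to the requesting target terminal.
    FLUENCY_DATASET = {
        # (app name, app version, performance parameters) -> fluency parameter
        ("demo-game", "1.0", (1.8, 4, 3.0)): 120,
        ("demo-game", "1.0", (2.4, 6, 4.0)): 300,
        ("demo-game", "1.0", (2.8, 8, 6.0)): 410,
    }

    def handle_trigger_request(app_name, app_version, performance_parameters):
        """Return the target fluency parameter for the requesting terminal, if known."""
        key = (app_name, app_version, tuple(performance_parameters))
        return FLUENCY_DATASET.get(key)

    # The target terminal sends its performance parameters with the trigger request.
    print(handle_trigger_request("demo-game", "1.0", [2.4, 6, 4.0]))  # 300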
6. The method of claim 5, wherein the running state includes a start-up state;
correspondingly, the sending the inquired target fluency parameters to the target terminal so that the target terminal controls the running state of the target application program based on the target fluency parameters includes:
sending the inquired target fluency parameters to the target terminal, so that the target terminal determines, based on the target fluency parameters, whether the target application program meets a start-up condition, and controls the start-up state of the target application program according to the determination result.
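And for claim 6, a terminal-side sketch of the start-up control, assuming the start-up condition is a single numeric threshold on the returned target fluency parameter; the threshold value and the fallback behaviours are illustrative only.

    # Hypothetical sketch: the target terminal decides how to start the target
    # application based on the target fluency parameter returned by the server.
    START_THRESHOLD = 200  # assumed minimum fluency parameter for a normal start

    def control_startup(target_fluency):
        if target_fluency is None:
            return "start in compatibility mode"  # no entry for this terminal
        if target_fluency >= START_THRESHOLD:
            return "start normally"
        return "refuse to start / suggest low-graphics mode"

    print(control_startup(300))  # start normally
    print(control_startup(120))  # refuse to start / suggest low-graphics mode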
7. A data processing apparatus, comprising:
the parameter acquisition module is used for acquiring the performance parameters of each first terminal in the first number of first terminals;
the parameter prediction module is used for predicting the fluency of the application program running on each first terminal based on the performance parameters of each first terminal and a fluency prediction model, to obtain a predicted fluency parameter of each first terminal; wherein the fluency prediction model is obtained by performing data fitting processing on the performance parameters and actually measured fluency parameters of each second terminal in a second number of second terminals; and the actually measured fluency parameters are obtained by testing each second terminal based on a test case corresponding to the attribute information of the application program;
the data set generation module is used for generating a fluency parameter data set of a full set of terminals based on the attribute information of the application program, the performance parameters and the predicted fluency parameter of each first terminal, and the performance parameters and the actually measured fluency parameters of each second terminal;
wherein the full set of terminals includes the first number of first terminals and the second number of second terminals.
8. The apparatus of claim 7, further comprising a modeling module, the modeling module comprising:
a first acquisition unit, used for acquiring the performance parameters of each second terminal in the second number of second terminals;
a second acquisition unit, used for acquiring the test case corresponding to the attribute information of the application program;
a parameter determination unit, used for testing the fluency of the application program running on each second terminal by using the test case to obtain the actually measured fluency parameter of each second terminal;
and a model building unit, used for performing data fitting processing on the performance parameters of each second terminal and the corresponding actually measured fluency parameters of each second terminal by using a machine learning algorithm, to obtain the fluency prediction model.
9. The apparatus of claim 8, wherein the test case includes information for gradually increasing the number of objects and/or the complexity level of the objects drawn on the terminal interface; the parameter determination unit includes:
a detection subunit, used for monitoring the actual frame rate of each second terminal in the process of drawing objects, and the corresponding object number and/or object complexity level of the drawn objects;
a first determining subunit, used for determining the maximum object number and/or the highest object complexity level of the drawn objects when the actual frame rate of each second terminal is equal to a preset frame rate;
and a second determining subunit, used for taking the maximum object number and/or the highest object complexity level as the actually measured fluency parameter of each second terminal.
10. The apparatus according to claim 8, wherein the model building unit includes:
a sorting subunit, used for sorting the second number of second terminals based on the performance parameters of each second terminal;
a grouping subunit, used for grouping the sorted second number of second terminals according to a preset grouping rule to obtain a third number of terminal sets;
a dividing subunit, used for dividing the second terminals in each terminal set into a training set and a testing set;
and a model building subunit, used for performing fluency prediction training on a preset machine learning model based on the training set and the testing set to obtain the fluency prediction model.
11. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
the system comprises a request receiving module, a trigger operation module and a trigger control module, wherein the request receiving module is used for receiving a trigger operation request for a target application program sent by a target terminal, and the trigger operation request comprises performance parameters of the target terminal and attribute information of the target application program;
the inquiring module is used for inquiring the target fluency parameters corresponding to the performance parameters of the target terminal and the attribute information of the target application program from the fluency parameter data set;
and the sending module is used for sending the inquired target fluency parameter to the target terminal so that the target terminal controls the running state of the target application program based on the target fluency parameter.
12. The apparatus of claim 11, wherein the running state includes a start-up state; the sending module is used for:
sending the inquired target fluency parameters to the target terminal, so that the target terminal determines, based on the target fluency parameters, whether the target application program meets a start-up condition, and controls the start-up state of the target application program according to the determination result.
13. A data processing apparatus, characterized in that the apparatus comprises a processor and a memory in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by the processor to implement the data processing method according to any one of claims 1 to 6.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program, which is loaded and executed by a processor to implement the data processing method according to any of claims 1 to 6.
CN201910576170.7A 2019-06-28 2019-06-28 Data processing method, device and equipment Active CN110413510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910576170.7A CN110413510B (en) 2019-06-28 2019-06-28 Data processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910576170.7A CN110413510B (en) 2019-06-28 2019-06-28 Data processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN110413510A CN110413510A (en) 2019-11-05
CN110413510B true CN110413510B (en) 2024-04-12

Family

ID=68358479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910576170.7A Active CN110413510B (en) 2019-06-28 2019-06-28 Data processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN110413510B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112286808B (en) * 2020-10-29 2023-08-11 抖音视界有限公司 Application program testing method and device, electronic equipment and medium
CN112835641A (en) * 2021-02-02 2021-05-25 上海臣星软件技术有限公司 Configuration method and device for application program running function
CN113769387A (en) * 2021-09-18 2021-12-10 网易(杭州)网络有限公司 Game graphic parameter configuration method and device and terminal equipment
CN114064513B (en) * 2022-01-12 2022-06-21 北京新氧科技有限公司 Page fluency performance detection method, device, equipment and system
CN115333941B (en) * 2022-01-28 2023-08-22 花瓣云科技有限公司 Method for acquiring application running condition and related equipment
CN114827646B (en) * 2022-03-23 2023-12-12 百果园技术(新加坡)有限公司 Method, device, equipment and storage medium for preloading live broadcasting room in video stream

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10592404B2 (en) * 2017-03-01 2020-03-17 International Business Machines Corporation Performance test of software products with reduced duration

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105308941A (en) * 2014-05-16 2016-02-03 华为技术有限公司 Method and apparatus for resource configuration
CN106294902A (en) * 2015-05-28 2017-01-04 阿里巴巴集团控股有限公司 Method, device and the electronic equipment of prediction mobile applications page performance
CN108334440A (en) * 2017-01-19 2018-07-27 阿里巴巴集团控股有限公司 A kind of processing method and processing device, client obtaining application performance test result
CN108984369A (en) * 2018-07-13 2018-12-11 厦门美图移动科技有限公司 Caton prediction method and device and mobile terminal
CN109697090A (en) * 2018-12-27 2019-04-30 Oppo广东移动通信有限公司 A kind of method, terminal device and the storage medium of controlling terminal equipment
CN109814933A (en) * 2019-01-29 2019-05-28 腾讯科技(深圳)有限公司 A kind of business data processing method and device

Also Published As

Publication number Publication date
CN110413510A (en) 2019-11-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant