WO2019085749A1 - Method and apparatus for controlling an application program, medium and electronic device - Google Patents

Method and apparatus for controlling an application program, medium and electronic device Download PDF

Info

Publication number
WO2019085749A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
layer
feature information
training model
calculation
Prior art date
Application number
PCT/CN2018/110518
Other languages
English (en)
Chinese (zh)
Inventor
梁昆
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2019085749A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/44594 Unloading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Definitions

  • the present application relates to the field of electronic device terminals, and in particular, to an application management method, device, medium, and electronic device.
  • the embodiment of the present application provides an application management method, device, medium, and electronic device to intelligently close an application.
  • An embodiment of the present application provides an application management and control method, which is applied to an electronic device, where the application management method includes the following steps:
  • the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • the Back Propagation (BP) neural network algorithm is used to calculate the sample vector set to generate a training model.
  • the current feature information s of the application is input into the training model for calculation;
  • the embodiment of the present application further provides an application management device, where the device includes:
  • An obtaining module configured to obtain the application sample vector set, where the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • a generating module for calculating a sample vector set by using a BP neural network algorithm to generate a training model
  • a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background;
  • the determining module is configured to determine whether the application needs to be closed.
  • the embodiment of the present application further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described above.
  • the embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to execute the following steps:
  • the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • the BP neural network algorithm is used to calculate the sample vector set to generate a training model.
  • the current feature information s of the application is input into the training model for calculation;
  • the embodiment of the present application provides an application management method, device, medium, and electronic device to intelligently close an application.
  • FIG. 1 is a schematic diagram of a system of an application management device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of an application management and control device according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of an application management and control method according to an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of an application management and control method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
  • FIG. 6 is another schematic structural diagram of an apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 8 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • An application management method is applied to an electronic device, wherein the application management method comprises the following steps:
  • the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • the Back Propagation (BP) neural network algorithm is used to calculate the sample vector set to generate a training model.
  • the current feature information s of the application is input into the training model for calculation;
  • the step of calculating the sample vector set by using the BP neural network algorithm to generate the training model includes:
  • the sample vector set is brought into the network structure for calculation to obtain a training model.
  • the step of defining a network structure includes:
  • the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i;
  • the hidden layer includes M nodes;
  • the classification layer adopts a Softmax function, p_K = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p_K is the predicted probability value, Z_K is the K-th intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value;
  • the output layer includes 2 nodes;
  • the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^(-x)), where the range of f(x) is 0 to 1;
  • the batch size is A;
  • the learning rate is B.
  • the hidden layer includes a first hidden layer, a second hidden layer, and a third hidden layer, and the number of nodes in each of the first, second, and third hidden layers is less than 10.
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  • the step of bringing the sample vector set into the network structure for calculation to obtain the training model includes:
  • the predicted probability value is brought into the output layer for calculation to obtain a predicted result value y, where y is [1 0]^T or [0 1]^T.
  • the network structure is modified according to the predicted result value y to obtain a training model.
  • the current feature information s is input into the training model to calculate a predicted probability value of the classification layer.
  • the step of determining whether the application needs to be closed includes:
  • the current feature information s of the application is input into the training model for calculation, including:
  • the current feature information s is brought into the training model for calculation.
  • An application management device wherein the device comprises:
  • An obtaining module configured to obtain the application sample vector set, where the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • a generating module for calculating a sample vector set by using a BP neural network algorithm to generate a training model
  • a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background;
  • the determining module is configured to determine whether the application needs to be closed.
  • An electronic device comprising a processor and a memory, the processor being electrically connected to the memory, the memory for storing instructions and data, the processor for performing:
  • the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • the Back Propagation (BP) neural network algorithm is used to calculate the sample vector set to generate a training model.
  • the current feature information s of the application is input into the training model for calculation;
  • the step of calculating the sample vector set by using the BP neural network algorithm to generate the training model includes:
  • the sample vector set is brought into the network structure for calculation to obtain a training model.
  • the step of defining the network structure includes:
  • the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i;
  • the hidden layer includes M nodes;
  • the classification layer adopts a Softmax function, p_K = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p_K is the predicted probability value, Z_K is the K-th intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value;
  • the output layer includes 2 nodes;
  • the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^(-x)), where the range of f(x) is 0 to 1;
  • the batch size is A;
  • the learning rate is B.
  • the hidden layer includes a first hidden layer, a second hidden layer, and a third hidden layer, and the number of nodes in each of the first, second, and third hidden layers is less than 10.
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  • the step of bringing the sample vector set into the network structure for calculation to obtain the training model includes:
  • the predicted probability value is brought into the output layer for calculation to obtain a predicted result value y, where y is [1 0]^T or [0 1]^T.
  • the network structure is modified according to the predicted result value y to obtain a training model.
  • the application management method provided by the present application is mainly applied to electronic devices such as a wristband, a smartphone or tablet based on an Apple or Android system, or a smart mobile electronic device such as a Windows- or Linux-based notebook computer.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • FIG. 1 is a schematic diagram of a system for controlling an application program according to an embodiment of the present application.
  • the application management device is mainly configured to: obtain historical feature information x_i of the application from a database, calculate the historical feature information x_i with an algorithm to obtain a training model, and then input the current feature information s of the application into the training model for calculation; the calculation result is used to judge whether the application can be closed, so as to control the preset application, for example by closing or freezing it.
  • FIG. 2 is a schematic diagram of an application scenario of an application management and control method according to an embodiment of the present application.
  • the historical feature information x_i of the application is obtained from the database, and the historical feature information x_i is calculated by an algorithm to obtain a training model; then, when the application control device detects that the application enters the background of the electronic device, the current feature information s of the application is input into the training model for calculation, and the calculation result determines whether the application can be closed.
  • for example, the historical feature information x_i of application a is obtained from the database and calculated by an algorithm to obtain a training model. When the application control device detects that application a enters the background of the electronic device, the current feature information s of application a is input into the training model for calculation; the calculation result determines that application a can be closed, and application a is closed. When the application control device detects that application b enters the background of the electronic device, the current feature information s of application b is input into the training model for calculation; the calculation result judges that application b needs to be retained, and application b is retained.
  • the embodiment of the present application provides an application management method, and the execution entity of the application management method may be the application management device provided by an embodiment of the present application, or an electronic device integrated with the application management device, where the application management device can be implemented in hardware or software.
  • FIG. 3 is a schematic flowchart of an application management and control method according to an embodiment of the present application.
  • the application management and control method provided by the embodiment of the present application is applied to an electronic device, and the specific process may be as follows:
  • Step S101 Acquire the application sample vector set, wherein the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • the application sample vector set is obtained from a sample database, wherein the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • the feature information of the multiple dimensions may refer to Table 1.
  • the feature information of the ten dimensions shown in Table 1 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 1, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • historical features of six dimensions can be selected:
  • whether WiFi is turned on: for example, WiFi turned on is recorded as 1, and WiFi turned off is recorded as 0;
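  • As an illustration, such a six-dimensional sample vector might be encoded as follows; apart from the WiFi switch, the feature values in this sketch are hypothetical placeholders, since the remaining dimensions are not enumerated here:

```python
# Hypothetical encoding of one six-dimensional historical feature
# vector x_i. Only the WiFi on/off feature (1/0) is named in the text;
# the other five dimensions are illustrative placeholders.
def encode_sample(wifi_on: bool, other_features: list[float]) -> list[float]:
    """Build a feature vector: WiFi on -> 1.0, WiFi off -> 0.0."""
    assert len(other_features) == 5, "six dimensions in total"
    return [1.0 if wifi_on else 0.0] + other_features

x_i = encode_sample(wifi_on=True, other_features=[0.0, 1.0, 0.5, 0.0, 1.0])
print(x_i)  # [1.0, 0.0, 1.0, 0.5, 0.0, 1.0]
```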
  • step S102 the BP neural network algorithm is used to calculate the sample vector set to generate a training model.
  • FIG. 4 is another schematic flowchart of an application management and control method according to an embodiment of the present application.
  • the step S102 may include:
  • Step S1021 defining a network structure
  • Step S1022 Bring the sample vector set into the network structure for calculation, and obtain a training model.
  • the step S1021 of defining the network structure includes:
  • Step S1021a setting an input layer, where the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i.
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
  • the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • Step S1021b setting a hidden layer, the hidden layer including M nodes.
  • the hidden layer may include a plurality of hidden layers.
  • the number of nodes in each of the hidden layers is less than 10, to simplify the operation process.
  • the hidden layer may include a first hidden layer, a second hidden layer, and a third hidden layer.
  • the first hidden layer includes 10 nodes;
  • the second hidden layer includes 5 nodes;
  • the third hidden layer includes 5 nodes.
  • Step S1021c setting a classification layer, where the classification layer adopts a Softmax function, p_K = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), in which:
  • p_K is the predicted probability value;
  • Z_K is the K-th intermediate value;
  • C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
  • step S1021d an output layer is set, and the output layer includes two nodes.
  • Step S1021e setting an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^(-x)), and the range of f(x) is 0 to 1.
  • step S1021f the batch size is set, and the batch size is A.
  • the batch size can be flexibly adjusted according to actual conditions.
  • the batch size can be 50-200.
  • the batch size is 128.
  • step S1021g a learning rate is set, and the learning rate is B.
  • the learning rate can be flexibly adjusted according to actual conditions.
  • the learning rate can be from 0.1 to 1.5.
  • the learning rate is 0.9.
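  • As a minimal sketch (not the inventors' actual implementation), the network structure defined in steps S1021a to S1021g can be expressed in plain NumPy: 6 input nodes, three hidden layers of 10, 5, and 5 nodes with sigmoid activations, a 2-way Softmax classification layer, batch size A = 128, and learning rate B = 0.9:

```python
import numpy as np

rng = np.random.default_rng(0)

LAYER_SIZES = [6, 10, 5, 5, 2]        # input, three hidden layers, output
BATCH_SIZE = 128                      # batch size A
LEARNING_RATE = 0.9                   # learning rate B

# One weight matrix and bias vector per connection between layers.
weights = [rng.normal(0.0, 0.1, (m, n))
           for m, n in zip(LAYER_SIZES, LAYER_SIZES[1:])]
biases = [np.zeros(n) for n in LAYER_SIZES[1:]]

def sigmoid(x):
    """Activation function f(x) = 1 / (1 + e^(-x)), range 0 to 1."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    """Classification layer: p_K = e^(Z_K) / sum_j e^(Z_j)."""
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```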
  • the step S1022 of bringing the sample vector set into the network structure for calculation to obtain the training model may include:
  • step S1022a the sample vector set is input at the input layer for calculation, and an output value of the input layer is obtained.
  • Step S1022b inputting the output value of the input layer into the hidden layer for calculation to obtain an output value of the hidden layer.
  • the output value of the input layer is the input value of the hidden layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output value of the input layer is the input value of the first hidden layer;
  • the output value of the first hidden layer is the input value of the second hidden layer;
  • the output value of the second hidden layer is the input value of the third hidden layer, and so on.
  • Step S1022c inputting the output value of the hidden layer into the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
  • the output value of the hidden layer is the input value of the classification layer.
  • when the hidden layer includes a plurality of hidden layers, the output value of the last hidden layer is the input value of the classification layer.
  • Step S1022d Bring the predicted probability value into the output layer for calculation to obtain a predicted result value y, where y is [1 0]^T or [0 1]^T.
  • the output value of the classification layer is an input value of the output layer.
  • step S1022e the network structure is modified according to the prediction result value y to obtain a training model.
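  • Continuing the NumPy sketch above, steps S1022a to S1022e might be realized as a forward pass followed by a gradient-descent modification of the network; pairing the Softmax layer with a cross-entropy loss is an assumption here, since the text does not name the loss function:

```python
def forward(x):
    """Steps S1022a-S1022c: propagate x through the input and hidden
    layers, then the classification layer; return all layer activations
    and the predicted probabilities [p_1 p_2]^T."""
    activations = [x]
    for W, b in zip(weights[:-1], biases[:-1]):
        activations.append(sigmoid(activations[-1] @ W + b))
    probs = softmax(activations[-1] @ weights[-1] + biases[-1])
    return activations, probs

def train_step(batch_x, batch_y):
    """Steps S1022d-S1022e: compare the prediction with the label y
    ([1 0]^T or [0 1]^T) and modify the network by gradient descent."""
    activations, probs = forward(batch_x)
    delta = (probs - batch_y) / len(batch_x)   # softmax + cross-entropy gradient
    for i in reversed(range(len(weights))):
        grad_w = activations[i].T @ delta
        grad_b = delta.sum(axis=0)
        if i > 0:                              # backpropagate through the sigmoid
            a = activations[i]
            delta = (delta @ weights[i].T) * a * (1.0 - a)
        weights[i] -= LEARNING_RATE * grad_w
        biases[i] -= LEARNING_RATE * grad_b
```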
  • Step S103 when the application enters the background, the current feature information s of the application is input into the training model for calculation.
  • the step S103 may include:
  • Step S1031 Collect current feature information s of the application.
  • the dimension of the current feature information s of the collected application is the same as the dimension of the collected historical feature information x_i of the application.
  • Step S1032 Bring the current feature information s into the training model for calculation.
  • step S104 it is determined whether the application needs to be closed.
  • the application management method provided by the present application acquires the historical feature information x_i, generates the training model by using the BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing the application.
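  • Continuing the same sketch, steps S103 and S104 reduce to feeding the current feature vector s through the trained network and acting on the winning class; the convention that y = [1 0]^T means "close" (and [0 1]^T "retain") is assumed here for illustration:

```python
def should_close(s):
    """Feed the current feature information s through the trained model
    and report whether the 'close' class has the higher probability."""
    _, probs = forward(np.asarray(s, dtype=float).reshape(1, -1))
    return probs[0, 0] > probs[0, 1]   # assumes index 0 <=> y = [1 0]^T, "close"

s = [1.0, 0.0, 1.0, 0.5, 0.0, 1.0]     # same six dimensions as x_i
print("close" if should_close(s) else "retain")
```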
  • FIG. 5 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • the device 30 includes an acquisition module 31, a generation module 32, a calculation module 33, and a determination module 34.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • the obtaining module 31 is configured to obtain the application sample vector set, wherein the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • the application sample vector set is obtained from a sample database, wherein the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • the device 30 further includes a detection module 35 for detecting that the application enters the background.
  • the device 30 can also include a storage module 36.
  • the storage module 36 is configured to store historical feature information x_i of the application.
  • the feature information of the multiple dimensions may refer to Table 2.
  • the feature information of the ten dimensions shown in Table 2 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 2, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • historical features of six dimensions can be selected:
  • whether WiFi is turned on: for example, WiFi turned on is recorded as 1, and WiFi turned off is recorded as 0;
  • the generating module 32 is configured to calculate a sample vector set by using a BP neural network algorithm to generate a training model.
  • the generating module 32 trains on the historical feature information x_i acquired by the obtaining module 31, inputting the historical feature information x_i into the BP neural network algorithm.
  • the generating module 32 includes a defining module 321 and a solving module 322.
  • the definition module 321 is used to define a network structure.
  • the definition module 321 may include an input layer definition module 3211, a hidden layer definition module 3212, a classification layer definition module 3213, an output layer definition module 3214, an activation function definition module 3215, a batch size definition module 3216, and a learning rate definition module 3217.
  • the input layer definition module 3211 is configured to set an input layer, where the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i.
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
  • the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • the hidden layer definition module 3212 is configured to set a hidden layer, where the hidden layer includes M nodes.
  • the hidden layer may include a plurality of hidden layers.
  • the number of nodes in each of the hidden layers is less than 10, to simplify the operation process.
  • the hidden layer may include a first hidden layer, a second hidden layer, and a third hidden layer.
  • the first hidden layer includes 10 nodes;
  • the second hidden layer includes 5 nodes;
  • the third hidden layer includes 5 nodes.
  • the classification layer definition module 3213 is configured to set a classification layer, where the classification layer adopts a Softmax function, p_K = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), in which p_K is the predicted probability value, Z_K is the K-th intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
  • the output layer definition module 3214 is configured to set an output layer, and the output layer includes 2 nodes.
  • the activation function definition module 3215 is configured to set an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^(-x)), and the range of f(x) is 0 to 1.
  • the batch size definition module 3216 is configured to set a batch size, and the batch size is A.
  • the batch size can be flexibly adjusted according to actual conditions.
  • the batch size can be 50-200.
  • the batch size is 128.
  • the learning rate definition module 3217 is configured to set a learning rate, and the learning rate is B.
  • the learning rate can be flexibly adjusted according to actual conditions.
  • the learning rate can be from 0.1 to 1.5.
  • the learning rate is 0.9.
  • the order in which the input layer definition module 3211 sets the input layer, the hidden layer definition module 3212 sets the hidden layer, the classification layer definition module 3213 sets the classification layer, the output layer definition module 3214 sets the output layer, the activation function definition module 3215 sets the activation function, the batch size definition module 3216 sets the batch size, and the learning rate definition module 3217 sets the learning rate can be flexibly adjusted.
  • the solving module 322 is configured to bring the sample vector set into the network structure for calculation to obtain a training model.
  • the solution module 322 can include a first solution module 3221, a second solution module 3222, a third solution module 3223, a fourth solution module 3224, and a modification module 3225.
  • the first solving module 3221 is configured to input the sample vector set at the input layer for calculation to obtain an output value of the input layer.
  • the second solving module 3222 is configured to input an output value of the input layer at the hidden layer to obtain an output value of the hidden layer.
  • the output value of the input layer is the input value of the hidden layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output value of the input layer is the input value of the first hidden layer;
  • the output value of the first hidden layer is the input value of the second hidden layer;
  • the output value of the second hidden layer is the input value of the third hidden layer, and so on.
  • the third solving module 3223 is configured to input the output value of the hidden layer into the classification layer for calculation, to obtain the predicted probability value [p_1 p_2]^T.
  • the output value of the hidden layer is an input value of the classification layer.
  • the fourth solving module 3224 is configured to bring the predicted probability value into the output layer for calculation to obtain a predicted result value y, where y is [1 0]^T or [0 1]^T.
  • the output value of the classification layer is an input value of the output layer.
  • the modification module 3225 is configured to modify the network structure according to the prediction result value y to obtain a training model.
  • the calculating module 33 is configured to input the current feature information s of the application into the training model for calculation when the application enters the background.
  • the calculation module 33 may include an acquisition module 331 and an operation module 332 .
  • the collecting module 331 is configured to collect current feature information s of the application.
  • the dimension of the current feature information s of the collected application is the same as the dimension of the collected historical feature information x_i of the application.
  • the operation module 332 is configured to bring the current feature information s into the training model for calculation.
  • the collecting module 331 is configured to collect the current feature information s according to a predetermined acquisition time, and store the current feature information s in the storage module 36.
  • the collecting module 331 is further configured to collect the current feature information s corresponding to the time point at which the application is detected entering the background, and to input the current feature information s into the operation module 332 to be brought into the training model for calculation, as sketched below.
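  • As a minimal sketch of this collection behaviour (the acquisition period and the storage format are assumptions, since the text specifies neither):

```python
import time

def collect(sample_features, store, period_s=60.0, rounds=10):
    """Sample the current feature vector on a fixed schedule and store
    each sample with its timestamp (a stand-in for storage module 36)."""
    for _ in range(rounds):
        store.append((time.time(), sample_features()))
        time.sleep(period_s)

def feature_at(store, background_time):
    """Return the stored feature vector s closest to the time point at
    which the application entered the background."""
    return min(store, key=lambda item: abs(item[0] - background_time))[1]
```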
  • the determining module 34 is configured to determine whether the application needs to be closed.
  • the apparatus 30 can also include a shutdown module 37 for shutting down the application when it is determined that the application needs to be closed.
  • the application management and control apparatus provided by the present application acquires the historical feature information x_i, generates a training model by using a BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing the application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device 500 includes a processor 501 and a memory 502.
  • the processor 501 is electrically connected to the memory 502.
  • the processor 501 is the control center of the electronic device 500; it connects various parts of the entire electronic device 500 through various interfaces and lines, executes the various functions of the electronic device and processes data by running or loading applications stored in the memory 502 and calling data stored in the memory 502, and thereby monitors the electronic device 500 as a whole.
  • the processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more applications into the memory 502 according to the following steps, and runs the applications stored in the memory 502, thereby implementing various functions:
  • the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • the neural network algorithm is used to calculate the sample vector set to generate a training model
  • the current feature information s of the application is input into the training model for calculation;
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • the application sample vector set is obtained from a sample database, wherein the sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • the feature information of the multiple dimensions may refer to Table 3.
  • the feature information of the ten dimensions shown in Table 3 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 3, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example, whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • historical features of six dimensions can be selected:
  • whether WiFi is turned on: for example, WiFi turned on is recorded as 1, and WiFi turned off is recorded as 0;
  • the processor 501 calculates the sample vector set by using the BP neural network algorithm, and generating the training model further includes:
  • the sample vector set is brought into the network structure for calculation to obtain a training model.
  • the defined network structure includes:
  • the input layer includes N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i;
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
  • the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • a hidden layer is set, the hidden layer including M nodes.
  • the hidden layer may include a plurality of hidden layers.
  • the number of nodes in each of the hidden layers is less than 10, to simplify the operation process.
  • the hidden layer may include a first hidden layer, a second hidden layer, and a third hidden layer.
  • the first hidden layer includes 10 nodes;
  • the second hidden layer includes 5 nodes;
  • the third hidden layer includes 5 nodes.
  • the classification layer adopts a Softmax function, p_K = e^(Z_K) / Σ_{j=1}^{C} e^(Z_j), where p_K is the predicted probability value, Z_K is the K-th intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
  • An output layer is set, the output layer comprising 2 nodes.
  • the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^(-x)), where the range of f(x) is 0 to 1.
  • the batch size can be flexibly adjusted according to actual conditions.
  • the batch size can be 50-200.
  • the batch size is 128.
  • the learning rate is set, and the learning rate is B.
  • the learning rate can be flexibly adjusted according to actual conditions.
  • the learning rate can be from 0.1 to 1.5.
  • the learning rate is 0.9.
  • the step of bringing the sample vector set into the network structure for calculation to obtain the training model may include:
  • the sample vector set is input at the input layer for calculation to obtain an output value of the input layer.
  • An output value of the input layer is input to the hidden layer to obtain an output value of the hidden layer.
  • the output value of the input layer is the input value of the hidden layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output value of the input layer is the input value of the first hidden layer;
  • the output value of the first hidden layer is the input value of the second hidden layer;
  • the output value of the second hidden layer is the input value of the third hidden layer, and so on.
  • the output value of the hidden layer is input into the classification layer for calculation, and the predicted probability value [p_1 p_2]^T is obtained.
  • the output value of the hidden layer is an input value of the classification layer.
  • the hidden layer may include a plurality of hidden layers.
  • the output value of the last hidden layer is the input value of the classification layer.
  • the predicted probability value is brought into the output layer for calculation to obtain a predicted result value y, where y is [1 0]^T or [0 1]^T.
  • the output value of the classification layer is an input value of the output layer.
  • the network structure is modified according to the predicted result value y to obtain a training model.
  • the step of inputting the current feature information s of the application into the training model for calculation includes:
  • the current feature information s of the application is collected.
  • the dimension of the current feature information s of the collected application is the same as the dimension of the collected historical feature information x_i of the application.
  • the current feature information s is brought into the training model for calculation.
  • Memory 502 can be used to store applications and data.
  • the program stored in the memory 502 contains instructions executable by the processor.
  • the program can constitute various functional modules.
  • the processor 501 executes various function applications and data processing by running a program stored in the memory 502.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device 500 further includes a radio frequency circuit 503, a display screen 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power source 509.
  • the processor 501 is electrically connected to the radio frequency circuit 503, the display screen 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power source 509, respectively.
  • the radio frequency circuit 503 is configured to transceive radio frequency signals to communicate with a server or other electronic device over a wireless communication network.
  • the display screen 504 can be used to display information entered by the user or information provided to the user as well as various graphical user interfaces of the terminal, which can be composed of images, text, icons, video, and any combination thereof.
  • the control circuit 505 is electrically connected to the display screen 504 for controlling the display screen 504 to display information.
  • the input unit 506 can be configured to receive input digits, character information, or user characteristic information (eg, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function controls.
  • the audio circuit 507 can provide an audio interface between the user and the terminal through a speaker and a microphone.
  • Sensor 508 is used to collect external environmental information.
  • Sensor 508 can include one or more of ambient brightness sensors, acceleration sensors, gyroscopes, and the like.
  • Power source 509 is used to power various components of electronic device 500.
  • the power supply 509 can be logically coupled to the processor 501 through a power management system to enable functions such as managing charging, discharging, and power management through the power management system.
  • the electronic device 500 may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the electronic device provided by the present application acquires the historical feature information x_i, generates the training model by using the BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing the application.
  • the embodiment of the present invention further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described in any of the above embodiments.
  • the application management method, the device, the medium, and the electronic device provided by the embodiments of the present invention belong to the same concept, and the specific implementation process thereof is described in the full text of the specification, and details are not described herein again.
  • the program may be stored in a computer readable storage medium, and the storage medium may include: Read Only Memory (ROM), Random Access Memory (RAM), disk or optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Stored Programmes (AREA)

Abstract

The present invention relates to a method and apparatus for controlling an application program, a medium, and an electronic device. The method comprises the steps of: obtaining historical feature information x_i; generating a training model by means of a back propagation (BP) neural network algorithm; when it is detected that an application program enters the background, bringing current feature information s of the application program into the training model; and then determining whether the application program needs to be closed, and intelligently closing the application program.
PCT/CN2018/110518 2017-10-31 2018-10-16 Method and apparatus for controlling an application program, medium and electronic device WO2019085749A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711044959.5A CN107885544B (zh) 2017-10-31 2017-10-31 Application program management and control method, apparatus, medium and electronic device
CN201711044959.5 2017-10-31

Publications (1)

Publication Number Publication Date
WO2019085749A1 true WO2019085749A1 (fr) 2019-05-09

Family

ID=61783058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/110518 WO2019085749A1 (fr) 2017-10-31 2018-10-16 Method and apparatus for controlling an application program, medium and electronic device

Country Status (2)

Country Link
CN (1) CN107885544B (fr)
WO (1) WO2019085749A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885544B (zh) * 2017-10-31 2020-04-10 Oppo广东移动通信有限公司 Application program management and control method, apparatus, medium and electronic device
CN109101326A (zh) * 2018-06-06 2018-12-28 三星电子(中国)研发中心 一种后台进程管理方法和装置
CN110275760A (zh) * 2019-06-27 2019-09-24 深圳市网心科技有限公司 基于虚拟主机处理器的进程挂起方法及其相关设备
CN110286949A (zh) * 2019-06-27 2019-09-27 深圳市网心科技有限公司 基于物理主机存储装置读写的进程挂起方法及相关设备
CN110286961A (zh) * 2019-06-27 2019-09-27 深圳市网心科技有限公司 基于物理主机处理器的进程挂起方法及相关设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306095A (zh) * 2011-07-21 2012-01-04 宇龙计算机通信科技(深圳)有限公司 Application management method and terminal
CN105718027A (zh) * 2016-01-20 2016-06-29 努比亚技术有限公司 Management method for background applications and mobile terminal
CN105808410A (zh) * 2016-03-29 2016-07-27 联想(北京)有限公司 Information processing method and electronic device
US20170116511A1 (en) * 2015-10-27 2017-04-27 Pusan National University Industry-University Cooperation Foundation Apparatus and method for classifying home appliances based on power consumption using deep learning
CN106909447A (zh) * 2015-12-23 2017-06-30 北京金山安全软件有限公司 Processing method, apparatus and terminal for background applications
CN107608748A (zh) * 2017-09-30 2018-01-19 广东欧珀移动通信有限公司 Application program management and control method, apparatus, storage medium and terminal device
CN107643948A (zh) * 2017-09-30 2018-01-30 广东欧珀移动通信有限公司 Application program management and control method, apparatus, medium and electronic device
CN107885544A (zh) * 2017-10-31 2018-04-06 广东欧珀移动通信有限公司 Application program management and control method, apparatus, medium and electronic device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160091786A (ko) * 2015-01-26 2016-08-03 삼성전자주식회사 User management method and user management device
CN105389193B (zh) * 2015-12-25 2019-04-26 北京奇虎科技有限公司 Accelerated processing method, apparatus and ***, and server for applications
CN106354836A (zh) * 2016-08-31 2017-01-25 南威软件股份有限公司 Method and apparatus for predicting advertisement pages
CN106648023A (zh) * 2016-10-02 2017-05-10 上海青橙实业有限公司 Mobile terminal and neural-network-based power saving method therefor
CN107145215B (zh) * 2017-05-06 2019-09-27 维沃移动通信有限公司 Background application cleaning method and mobile terminal
CN107133094B (zh) * 2017-06-05 2021-11-02 努比亚技术有限公司 Application management method, mobile terminal and computer-readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306095A (zh) * 2011-07-21 2012-01-04 宇龙计算机通信科技(深圳)有限公司 Application management method and terminal
US20170116511A1 (en) * 2015-10-27 2017-04-27 Pusan National University Industry-University Cooperation Foundation Apparatus and method for classifying home appliances based on power consumption using deep learning
CN106909447A (zh) * 2015-12-23 2017-06-30 北京金山安全软件有限公司 Processing method, apparatus and terminal for background applications
CN105718027A (zh) * 2016-01-20 2016-06-29 努比亚技术有限公司 Management method for background applications and mobile terminal
CN105808410A (zh) * 2016-03-29 2016-07-27 联想(北京)有限公司 Information processing method and electronic device
CN107608748A (zh) * 2017-09-30 2018-01-19 广东欧珀移动通信有限公司 Application program management and control method, apparatus, storage medium and terminal device
CN107643948A (zh) * 2017-09-30 2018-01-30 广东欧珀移动通信有限公司 Application program management and control method, apparatus, medium and electronic device
CN107885544A (zh) * 2017-10-31 2018-04-06 广东欧珀移动通信有限公司 Application program management and control method, apparatus, medium and electronic device

Also Published As

Publication number Publication date
CN107885544A (zh) 2018-04-06
CN107885544B (zh) 2020-04-10

Similar Documents

Publication Publication Date Title
WO2019085749A1 (fr) Procédé et appareil de commande de programme d'application, support et dispositif électronique
WO2019062413A1 (fr) Procédé et appareil de gestion et de commande de programme d'application, support de stockage et dispositif électronique
US11249645B2 (en) Application management method, storage medium, and electronic apparatus
WO2019062317A1 (fr) Dispositif électronique et procédé de commande de programme d'application
WO2019062358A1 (fr) Procédé de commande de programme d'application et dispositif terminal
WO2019085750A1 (fr) Procédé et appareil de commande de programme d'application, support et dispositif électronique
CN107885545B (zh) 应用管理方法、装置、存储介质及电子设备
WO2019062405A1 (fr) Procédé et appareil de traitement de programme d'application, support de stockage et dispositif électronique
CN107659717B (zh) 状态检测方法、装置和存储介质
CN107402808B (zh) 进程管理方法、装置、存储介质及电子设备
CN113284142A (zh) 图像检测方法、装置、计算机可读存储介质及计算机设备
CN111797288A (zh) 数据筛选方法、装置、存储介质及电子设备
CN111738365B (zh) 图像分类模型训练方法、装置、计算机设备及存储介质
WO2019062462A1 (fr) Procédé et appareil de commande d'application, support de stockage et dispositif électronique
WO2019062404A1 (fr) Procédé et appareil de traitement de programme d'application, support de stockage et dispositif électronique
CN107729144B (zh) 应用控制方法、装置、存储介质及电子设备
CN112672405A (zh) 功耗计算方法、装置、存储介质、电子设备以及服务器
CN107861770B (zh) 应用程序管控方法、装置、存储介质及终端设备
CN115618232A (zh) 数据预测方法、装置、存储介质及电子设备
CN112948763B (zh) 件量预测方法、装置、电子设备及存储介质
CN113918757A (zh) 应用推荐方法、装置、电子设备及存储介质
CN114298403A (zh) 预测作品的关注度的方法和装置
CN114647703A (zh) 数据处理方法、装置、电子设备及存储介质
CN112367428A (zh) 电量的显示方法、***、存储介质及移动终端
CN107766892B (zh) 应用程序管控方法、装置、存储介质及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18873865

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18873865

Country of ref document: EP

Kind code of ref document: A1