WO2019085749A1 - Application management method, device, medium and electronic device - Google Patents

Application management method, device, medium and electronic device

Info

Publication number
WO2019085749A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
layer
feature information
training model
calculation
Prior art date
Application number
PCT/CN2018/110518
Other languages
English (en)
French (fr)
Inventor
梁昆
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019085749A1 publication Critical patent/WO2019085749A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44594Unloading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Definitions

  • the present application relates to the field of electronic device terminals, and in particular, to an application management method, device, medium, and electronic device.
  • the embodiment of the present application provides an application management method, device, medium, and electronic device to intelligently close an application.
  • An embodiment of the present application provides an application management and control method, applied to an electronic device, where the application management method includes the following steps:
  • obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating the sample vector set with a Back Propagation (BP) neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • An embodiment of the present application further provides an application management apparatus, where the apparatus includes:
  • an obtaining module configured to obtain the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • a generating module configured to calculate the sample vector set with a BP neural network algorithm to generate a training model;
  • a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
  • a determining module configured to determine whether the application needs to be closed.
  • An embodiment of the present application further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described above.
  • An embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, the electronic device is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to execute the following steps:
  • obtaining the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating the sample vector set with a BP neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • the embodiment of the present application provides an application management method, device, medium, and electronic device to intelligently close an application.
  • FIG. 1 is a schematic system diagram of an application management device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of an application management and control device according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an application management and control method according to an embodiment of the present application.
  • FIG. 4 is another schematic flowchart of an application management and control method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
  • FIG. 6 is another schematic structural diagram of an apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 8 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • An application management method is applied to an electronic device, wherein the application management method comprises the following steps:
  • obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating the sample vector set with a Back Propagation (BP) neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • In the application management method, the step of calculating the sample vector set with the BP neural network algorithm to generate the training model includes:
  • defining a network structure; and
  • bringing the sample vector set into the network structure for calculation to obtain the training model.
  • In the application management method, the step of defining the network structure includes:
  • setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
  • setting a hidden layer, the hidden layer including M nodes;
  • setting a classification layer, where the classification layer adopts a Softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value;
  • setting an output layer, the output layer comprising 2 nodes;
  • setting an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1;
  • setting a batch size, the batch size being A; and
  • setting a learning rate, the learning rate being B.
  • In the application management method, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
  • In the application management method, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  • In the application management method, the step of bringing the sample vector set into the network structure for calculation to obtain the training model includes:
  • inputting the sample vector set at the input layer for calculation to obtain the output value of the input layer;
  • inputting the output value of the input layer at the hidden layer to obtain the output value of the hidden layer;
  • inputting the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
  • bringing the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2; and
  • modifying the network structure according to the predicted result value y to obtain the training model.
  • In the application management method, in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to calculate the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
  • In the application management method, the step of determining whether the application needs to be closed includes: determining that the application needs to be closed when y = [1 0]^T; and determining that the application needs to be retained when y = [0 1]^T.
  • In the application management method, when the application enters the background, inputting the current feature information s of the application into the training model for calculation includes:
  • collecting the current feature information s of the application; and
  • bringing the current feature information s into the training model for calculation.
  • An application management device, wherein the device comprises:
  • an obtaining module configured to obtain the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • a generating module configured to calculate the sample vector set with a BP neural network algorithm to generate a training model;
  • a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
  • a determining module configured to determine whether the application needs to be closed.
  • An electronic device comprising a processor and a memory, the electronic device being electrically connected to the memory, the memory for storing instructions and data, the processor for performing the following:
  • obtaining the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating the sample vector set with a Back Propagation (BP) neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • In the electronic device, the step of calculating the sample vector set with the BP neural network algorithm to generate the training model includes:
  • defining a network structure; and
  • bringing the sample vector set into the network structure for calculation to obtain the training model.
  • In the electronic device, the step of defining the network structure includes:
  • setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
  • setting a hidden layer, the hidden layer including M nodes;
  • setting a classification layer, where the classification layer adopts a Softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value;
  • setting an output layer, the output layer comprising 2 nodes;
  • setting an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1;
  • setting a batch size, the batch size being A; and
  • setting a learning rate, the learning rate being B.
  • In the electronic device, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
  • In the electronic device, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  • In the electronic device, the step of bringing the sample vector set into the network structure for calculation to obtain the training model includes:
  • inputting the sample vector set at the input layer for calculation to obtain the output value of the input layer;
  • inputting the output value of the input layer at the hidden layer to obtain the output value of the hidden layer;
  • inputting the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
  • bringing the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2; and
  • modifying the network structure according to the predicted result value y to obtain the training model.
  • In the electronic device, in the step of determining whether the application needs to be closed: when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
  • the application management method provided by the present application is mainly applied to electronic devices, for example smart mobile electronic devices such as a wristband, a smartphone, a tablet computer based on an Apple or Android system, or a notebook computer based on a Windows or Linux system.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • FIG. 1 is a schematic diagram of a system for controlling an application program according to an embodiment of the present application.
  • the application management device is mainly configured to: obtain historical feature information x_i of an application from a database; calculate the historical feature information x_i with an algorithm to obtain a training model; then input the current feature information s of the application into the training model for calculation; and use the calculation result to judge whether the application can be closed, so as to manage the preset application, for example by closing or freezing it.
  • FIG. 2 is a schematic diagram of an application scenario of an application management and control method according to an embodiment of the present application.
  • the historical feature information x_i of the application is obtained from the database, and the historical feature information x_i is calculated with an algorithm to obtain a training model; then, when the application control device detects that the application enters the background of the electronic device, the current feature information s of the application is input into the training model for calculation, and the calculation result determines whether the application can be closed.
  • for example, the historical feature information x_i of application a is obtained from the database and calculated with an algorithm to obtain a training model; when the application control device detects that application a enters the background of the electronic device, the current feature information s of application a is input into the training model for calculation, the calculation result determines that application a can be closed, and application a is closed; when the application control device detects that application b enters the background of the electronic device, the current feature information s of application b is input into the training model for calculation, the calculation result determines that application b needs to be retained, and application b is retained.
  • the embodiment of the present application provides an application management method. The execution entity of the method may be the application management device provided by an embodiment of the present invention, or an electronic device integrated with the application management device, where the application management device can be implemented in hardware or software.
  • FIG. 3 is a schematic flowchart of an application management and control method according to an embodiment of the present application.
  • the application management and control method provided by the embodiment of the present application is applied to an electronic device, and the specific process may be as follows:
  • Step S101: Obtain the sample vector set of the application, wherein each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • the sample vector set is obtained from a sample database.
  • for the feature information of the multiple dimensions, refer to Table 1.
  • it should be noted that the feature information of the ten dimensions shown in Table 1 is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 1, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • in one embodiment, historical feature information of six dimensions can be selected:
  • A. the time the application has resided in the background;
  • B. whether the screen is lit, for example, screen lit is recorded as 1 and screen off as 0;
  • C. the total number of uses in the current week;
  • D. the total usage time in the current week;
  • E. whether WiFi is turned on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
  • F. whether the device is currently charging, for example, currently charging is recorded as 1 and not charging as 0.
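  • As a concrete illustration, the following is a minimal Python sketch of how one such six-dimensional sample vector x_i might be encoded; the function name, field names, and values are illustrative assumptions, not identifiers from the patent.

```python
import numpy as np

def encode_features(bg_seconds, screen_on, weekly_count,
                    weekly_seconds, wifi_on, charging):
    """Pack the six historical features A-F above into one sample vector."""
    return np.array([
        bg_seconds,                 # A: time the app has resided in the background
        1.0 if screen_on else 0.0,  # B: screen lit -> 1, screen off -> 0
        weekly_count,               # C: total number of uses this week
        weekly_seconds,             # D: total usage time this week
        1.0 if wifi_on else 0.0,    # E: WiFi on -> 1, WiFi off -> 0
        1.0 if charging else 0.0,   # F: charging -> 1, not charging -> 0
    ], dtype=np.float64)

x_i = encode_features(320, True, 42, 5400, True, False)  # one sample vector
```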
  • Step S102: Calculate the sample vector set with the BP neural network algorithm to generate a training model.
  • FIG. 4 is a schematic flowchart of an application management and control method according to an embodiment of the present application.
  • the step S102 may include:
  • Step S1021: Define a network structure; and
  • Step S1022: Bring the sample vector set into the network structure for calculation to obtain a training model.
  • in step S1021, defining the network structure includes:
  • Step S1021a: Set an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
  • in one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • Step S1021b: Set a hidden layer, the hidden layer including M nodes.
  • the hidden layer may include a plurality of hidden sub-layers.
  • the number of nodes in each hidden sub-layer is less than 10, to simplify the operation process.
  • in one embodiment, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer; the first hidden sub-layer includes 10 nodes, the second hidden sub-layer includes 5 nodes, and the third hidden sub-layer includes 5 nodes.
  • Step S1021c: Set a classification layer, where the classification layer adopts a softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
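  • As a worked illustration of the softmax (values chosen arbitrarily, not taken from the patent), with C = 2 categories and intermediate values Z_1 = 1.2 and Z_2 = -0.3:

```latex
p_1 = \frac{e^{1.2}}{e^{1.2} + e^{-0.3}} \approx \frac{3.3201}{4.0609} \approx 0.818,
\qquad
p_2 = \frac{e^{-0.3}}{e^{1.2} + e^{-0.3}} \approx 0.182,
\qquad
p_1 + p_2 = 1.
```

  • Because the two probabilities always sum to 1, the output layer can decide between the two categories by a simple comparison of p_1 and p_2.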
  • Step S1021d: Set an output layer, the output layer including 2 nodes.
  • Step S1021e: Set an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1.
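  • The classification G06N3/084 above points at back propagation with gradient descent; a standard identity not stated in the patent, but worth noting, is that the sigmoid's derivative can be computed from its output alone, which keeps the backward pass in step S1022 cheap:

```latex
f(x) = \frac{1}{1 + e^{-x}}
\quad\Longrightarrow\quad
f'(x) = \frac{e^{-x}}{(1 + e^{-x})^{2}} = f(x)\,\bigl(1 - f(x)\bigr).
```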
  • Step S1021f: Set a batch size, the batch size being A.
  • the batch size can be flexibly adjusted according to actual conditions, and may be 50-200.
  • in one embodiment, the batch size is 128.
  • Step S1021g: Set a learning rate, the learning rate being B.
  • the learning rate can be flexibly adjusted according to actual conditions, and may be 0.1-1.5.
  • in one embodiment, the learning rate is 0.9.
  • it should be noted that the order of steps S1021a through S1021g can be flexibly adjusted.
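  • Steps S1021a through S1021g pin down a concrete topology: 6 input nodes, hidden sub-layers of 10, 5, and 5 nodes, a 2-node softmax classification layer, batch size 128, and learning rate 0.9. The patent contains no code, so the following NumPy sketch is only one plausible reading of that structure; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
LAYER_SIZES = [6, 10, 5, 5, 2]        # input, three hidden sub-layers, output
BATCH_SIZE, LEARNING_RATE = 128, 0.9  # A = 128 and B = 0.9 in the patent's terms

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract the row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One (W, b) pair per connection between consecutive layers.
params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

def forward(x):
    """Propagate a batch x of shape (batch, 6); return every layer's activations."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        # sigmoid inside the hidden sub-layers, softmax at the classification layer
        acts.append(softmax(z) if i == len(params) - 1 else sigmoid(z))
    return acts

probs = forward(rng.random((BATCH_SIZE, 6)))[-1]  # shape (128, 2); rows sum to 1
```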
  • in step S1022, bringing the sample vector set into the network structure for calculation to obtain the training model may include:
  • Step S1022a: Input the sample vector set at the input layer for calculation to obtain the output value of the input layer.
  • Step S1022b: Input the output value of the input layer at the hidden layer to obtain the output value of the hidden layer.
  • the output value of the input layer is the input value of the hidden layer.
  • in one embodiment, the hidden layer may include a plurality of hidden sub-layers: the output value of the input layer is the input value of the first hidden sub-layer, the output value of the first hidden sub-layer is the input value of the second hidden sub-layer, the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
  • Step S1022c: Input the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
  • the output value of the hidden layer is the input value of the classification layer.
  • when the hidden layer includes a plurality of hidden sub-layers, the output value of the last hidden sub-layer is the input value of the classification layer.
  • Step S1022d: Bring the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
  • the output value of the classification layer is the input value of the output layer.
  • Step S1022e: Modify the network structure according to the predicted result value y to obtain the training model.
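  • Steps S1022a through S1022e amount to a forward pass followed by an error-driven correction of the network. Below is a minimal sketch of one such BP update, reusing params, forward, and the constants from the sketch above; the softmax/cross-entropy gradient and the plain SGD update are assumptions, since the patent only says the network structure is modified according to the predicted result value y.

```python
def train_step(x, y_onehot):
    """One BP update: forward pass, then propagate the error backwards
    and move every (W, b) against its gradient (plain SGD)."""
    acts = forward(x)
    # Gradient of the cross-entropy loss w.r.t. the pre-softmax values.
    delta = (acts[-1] - y_onehot) / len(x)
    for i in range(len(params) - 1, -1, -1):
        W, b = params[i]
        grad_W = acts[i].T @ delta
        grad_b = delta.sum(axis=0)
        if i > 0:  # push the error back through the previous sigmoid sub-layer
            delta = (delta @ W.T) * acts[i] * (1.0 - acts[i])
        params[i] = (W - LEARNING_RATE * grad_W, b - LEARNING_RATE * grad_b)

# One illustrative batch: label [1 0]^T means "close", [0 1]^T means "retain".
x_batch = rng.random((BATCH_SIZE, 6))
y_batch = np.eye(2)[rng.integers(0, 2, BATCH_SIZE)]
train_step(x_batch, y_batch)
```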
  • Step S103: When the application enters the background, input the current feature information s of the application into the training model for calculation.
  • the step S103 may include:
  • Step S1031: Collect the current feature information s of the application.
  • the dimensions of the collected current feature information s are the same as the dimensions of the collected historical feature information x_i of the application.
  • Step S1032: Bring the current feature information s into the training model for calculation, obtaining the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
  • Step S104: Determine whether the application needs to be closed: when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
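  • At inference time, steps S103 and S104 reduce to a single forward pass and a comparison of the two class probabilities. A sketch reusing forward and encode_features from the earlier blocks (input values are illustrative):

```python
def should_close(current_features):
    """Step S104: True when the model predicts y = [1 0]^T (close),
    False when it predicts y = [0 1]^T (retain)."""
    p1, p2 = forward(current_features.reshape(1, -1))[-1][0]
    return p1 > p2

s = encode_features(900, False, 3, 600, True, True)  # app just entered background
print("close the application" if should_close(s) else "retain the application")
```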
  • the application management method provided by the present application acquires the historical feature information x_i, generates a training model with the BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
  • FIG. 5 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • the device 30 includes an obtaining module 31, a generating module 32, a calculation module 33, and a determining module 34.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • the obtaining module 31 is configured to obtain the sample vector set of the application, wherein each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
  • the sample vector set is obtained from a sample database.
  • FIG. 6 is a schematic structural diagram of an application program management apparatus according to an embodiment of the present application.
  • the device 30 further includes a detection module 35, configured to detect that the application enters the background.
  • the device 30 can also include a storage module 36.
  • the storage module 36 is configured to store the historical feature information x_i of the application.
  • for the feature information of the multiple dimensions, refer to Table 2.
  • it should be noted that the feature information of the ten dimensions shown in Table 2 is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 2, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • in one embodiment, historical feature information of six dimensions can be selected:
  • A. the time the application has resided in the background;
  • B. whether the screen is lit, for example, screen lit is recorded as 1 and screen off as 0;
  • C. the total number of uses in the current week;
  • D. the total usage time in the current week;
  • E. whether WiFi is turned on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
  • F. whether the device is currently charging, for example, currently charging is recorded as 1 and not charging as 0.
  • the generating module 32 is configured to calculate the sample vector set with a BP neural network algorithm to generate a training model.
  • the generating module 32 trains on the historical feature information x_i acquired by the obtaining module 31, inputting the historical feature information x_i into the BP neural network algorithm.
  • the generating module 32 includes a definition module 321 and a solving module 322.
  • the definition module 321 is used to define a network structure.
  • the definition module 321 may include an input layer definition module 3211, a hidden layer definition module 3212, a classification layer definition module 3213, an output layer definition module 3214, an activation function definition module 3215, a batch size definition module 3216, and a learning rate definition module 3217.
  • the input layer definition module 3211 is configured to set an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
  • in one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • the hidden layer definition module 3212 is configured to set a hidden layer, the hidden layer including M nodes.
  • the hidden layer may include a plurality of hidden sub-layers.
  • the number of nodes in each hidden sub-layer is less than 10, to simplify the operation process.
  • in one embodiment, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer; the first hidden sub-layer includes 10 nodes, the second hidden sub-layer includes 5 nodes, and the third hidden sub-layer includes 5 nodes.
  • the classification layer definition module 3213 is configured to set a classification layer, where the classification layer adopts a softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
  • the output layer definition module 3214 is configured to set an output layer, the output layer including 2 nodes.
  • the activation function definition module 3215 is configured to set an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1.
  • the batch size definition module 3216 is configured to set a batch size, the batch size being A.
  • the batch size can be flexibly adjusted according to actual conditions, and may be 50-200.
  • in one embodiment, the batch size is 128.
  • the learning rate definition module 3217 is configured to set a learning rate, the learning rate being B.
  • the learning rate can be flexibly adjusted according to actual conditions, and may be 0.1-1.5.
  • in one embodiment, the learning rate is 0.9.
  • it should be noted that the order in which the input layer definition module 3211 sets the input layer, the hidden layer definition module 3212 sets the hidden layer, the classification layer definition module 3213 sets the classification layer, the output layer definition module 3214 sets the output layer, the activation function definition module 3215 sets the activation function, the batch size definition module 3216 sets the batch size, and the learning rate definition module 3217 sets the learning rate can be flexibly adjusted.
  • the solving module 322 is configured to bring the sample vector set into the network structure for calculation to obtain a training model.
  • the solving module 322 can include a first solving module 3221, a second solving module 3222, a third solving module 3223, a fourth solving module 3224, and a correction module 3225.
  • the first solving module 3221 is configured to input the sample vector set at the input layer for calculation to obtain the output value of the input layer.
  • the second solving module 3222 is configured to input the output value of the input layer at the hidden layer to obtain the output value of the hidden layer.
  • the output value of the input layer is the input value of the hidden layer.
  • in one embodiment, the hidden layer may include a plurality of hidden sub-layers: the output value of the input layer is the input value of the first hidden sub-layer, the output value of the first hidden sub-layer is the input value of the second hidden sub-layer, the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
  • the third solving module 3223 is configured to input the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
  • the output value of the hidden layer is the input value of the classification layer.
  • the fourth solving module 3224 is configured to bring the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
  • the output value of the classification layer is the input value of the output layer.
  • the correction module 3225 is configured to modify the network structure according to the predicted result value y to obtain the training model.
  • the calculation module 33 is configured to input the current feature information s of the application into the training model for calculation when the application enters the background.
  • the calculation module 33 may include a collecting module 331 and an operation module 332.
  • the collecting module 331 is configured to collect the current feature information s of the application.
  • the dimensions of the collected current feature information s are the same as the dimensions of the collected historical feature information x_i of the application.
  • the operation module 332 is configured to bring the current feature information s into the training model for calculation, obtaining the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
  • in one embodiment, the collecting module 331 is configured to collect the current feature information s periodically at a predetermined collection time and store it in the storage module 36; the collecting module 331 is further configured to collect the current feature information s corresponding to the time point at which the application is detected entering the background, and to input that current feature information s into the operation module 332 to be brought into the training model for calculation.
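  • A sketch of the collecting-module behaviour just described: sample the current feature information s on a fixed period, keep timestamped snapshots, and hand the model the snapshot closest to the moment the application entered the background. It reuses encode_features from the earlier block; the 60-second period and all names are illustrative assumptions.

```python
import time

SNAPSHOTS = []           # stand-in for storage module 36
COLLECT_PERIOD_S = 60.0  # the "predetermined collection time"

def collect_periodically(features, now=None):
    """Append a timestamped snapshot if the collection period has elapsed."""
    now = time.time() if now is None else now
    if not SNAPSHOTS or now - SNAPSHOTS[-1][0] >= COLLECT_PERIOD_S:
        SNAPSHOTS.append((now, features))

def snapshot_at(t_background_entry):
    """Return the stored s whose timestamp is closest to the entry time."""
    return min(SNAPSHOTS, key=lambda ts: abs(ts[0] - t_background_entry))[1]

collect_periodically(encode_features(900, False, 3, 600, True, True))
```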
  • the determining module 34 is configured to determine whether the application needs to be closed: when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
  • the device 30 can also include a closing module 37, configured to close the application when it is determined that the application needs to be closed.
  • the application management device provided by the present application acquires the historical feature information x_i, generates a training model with a BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device 500 includes a processor 501 and a memory 502.
  • the processor 501 is electrically connected to the memory 502.
  • the processor 501 is the control center of the electronic device 500. It connects the various parts of the entire electronic device 500 through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or loading applications stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the electronic device 500 as a whole.
  • in this embodiment, the processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more applications into the memory 502 according to the following steps, and runs the applications stored in the memory 502, thereby implementing various functions:
  • obtaining the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
  • calculating the sample vector set with a neural network algorithm to generate a training model;
  • when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
  • determining whether the application needs to be closed.
  • the application may be a chat application, a video application, a music application, a shopping application, a shared bicycle application, or a mobile banking application.
  • the sample vector set of the application is obtained from a sample database, where each sample vector includes historical feature information x_i of multiple dimensions of the application.
  • for the feature information of the multiple dimensions, refer to Table 3.
  • it should be noted that the feature information of the ten dimensions shown in Table 3 is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 3, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
  • in one embodiment, historical feature information of six dimensions can be selected:
  • A. the time the application has resided in the background;
  • B. whether the screen is lit, for example, screen lit is recorded as 1 and screen off as 0;
  • C. the total number of uses in the current week;
  • D. the total usage time in the current week;
  • E. whether WiFi is turned on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
  • F. whether the device is currently charging, for example, currently charging is recorded as 1 and not charging as 0.
  • in one embodiment, the processor 501 calculates the sample vector set with a BP neural network algorithm, and generating the training model further includes:
  • defining a network structure; and
  • bringing the sample vector set into the network structure for calculation to obtain the training model.
  • defining the network structure includes:
  • setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
  • the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
  • in one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
  • a hidden layer is set, the hidden layer including M nodes.
  • the hidden layer may include a plurality of hidden sub-layers.
  • the number of nodes in each hidden sub-layer is less than 10, to simplify the operation process.
  • in one embodiment, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer; the first hidden sub-layer includes 10 nodes, the second hidden sub-layer includes 5 nodes, and the third hidden sub-layer includes 5 nodes.
  • a classification layer is set; the classification layer adopts a softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
  • an output layer is set, the output layer comprising 2 nodes.
  • an activation function is set; the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1.
  • a batch size is set, the batch size being A.
  • the batch size can be flexibly adjusted according to actual conditions, and may be 50-200.
  • in one embodiment, the batch size is 128.
  • a learning rate is set, the learning rate being B.
  • the learning rate can be flexibly adjusted according to actual conditions, and may be 0.1-1.5.
  • in one embodiment, the learning rate is 0.9.
  • it should be noted that the order of setting the input layer, the hidden layer, the classification layer, the output layer, the activation function, the batch size, and the learning rate can be flexibly adjusted.
  • the step of bringing the sample vector set into the network structure for calculation to obtain the training model may include:
  • the sample vector set is input at the input layer for calculation to obtain the output value of the input layer.
  • the output value of the input layer is input at the hidden layer to obtain the output value of the hidden layer.
  • the output value of the input layer is the input value of the hidden layer.
  • in one embodiment, the hidden layer may include a plurality of hidden sub-layers: the output value of the input layer is the input value of the first hidden sub-layer, the output value of the first hidden sub-layer is the input value of the second hidden sub-layer, the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
  • the output value of the hidden layer is input at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
  • the output value of the hidden layer is the input value of the classification layer.
  • when the hidden layer includes a plurality of hidden sub-layers, the output value of the last hidden sub-layer is the input value of the classification layer.
  • the predicted probability value is brought into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
  • the output value of the classification layer is the input value of the output layer.
  • the network structure is modified according to the predicted result value y to obtain the training model.
  • the step of inputting the current feature information s of the application into the training model for calculation includes:
  • the current feature information s of the application is collected.
  • the dimensions of the collected current feature information s are the same as the dimensions of the collected historical feature information x_i of the application.
  • the current feature information s is brought into the training model for calculation, obtaining the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
  • in the step of determining whether the application needs to be closed, when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
  • Memory 502 can be used to store applications and data.
  • the program stored in the memory 502 contains instructions executable in the processor.
  • the program can constitute various functional modules.
  • the processor 501 executes various functional applications and data processing by running the programs stored in the memory 502.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device 500 further includes a radio frequency circuit 503, a display screen 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power source 509.
  • the processor 501 is electrically connected to the radio frequency circuit 503, the display screen 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power source 509, respectively.
  • the radio frequency circuit 503 is configured to transmit and receive radio frequency signals, so as to communicate with a server or other electronic devices over a wireless communication network.
  • the display screen 504 can be used to display information entered by the user or information provided to the user as well as various graphical user interfaces of the terminal, which can be composed of images, text, icons, video, and any combination thereof.
  • the control circuit 505 is electrically connected to the display screen 504 for controlling the display screen 504 to display information.
  • the input unit 506 can be configured to receive input digits, character information, or user characteristic information (for example, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
  • the audio circuit 507 can provide an audio interface between the user and the terminal through a speaker and a microphone.
  • Sensor 508 is used to collect external environmental information.
  • Sensor 508 can include one or more of ambient brightness sensors, acceleration sensors, gyroscopes, and the like.
  • Power source 509 is used to power various components of electronic device 500.
  • in some embodiments, the power supply 509 can be logically connected to the processor 501 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
  • although not shown in FIG. 8, the electronic device 500 may further include a camera, a Bluetooth module, and the like; details are not described herein again.
  • the electronic device provided by the present application acquires the historical feature information x_i, generates a training model with the BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
  • the embodiment of the present invention further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described in any of the above embodiments.
  • the application management method, device, medium, and electronic device provided by the embodiments of the present invention belong to the same concept; for the specific implementation process, refer to the full specification, and details are not described herein again.
  • a person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer readable storage medium, and the storage medium may include a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Stored Programmes (AREA)

Abstract

The present application provides an application management method, device, medium, and electronic device. Historical feature information x_i is acquired and a Back Propagation (BP) neural network algorithm is used to generate a training model; when an application is detected entering the background, the current feature information s of the application is brought into the training model to determine whether the application needs to be closed, thereby intelligently closing applications.

Description

Application management method, device, medium and electronic device
This application claims priority to Chinese Patent Application No. 201711044959.5, filed with the Chinese Patent Office on October 31, 2017 and entitled "Application management method, device, medium and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of electronic device terminals, and in particular to an application management method, device, medium, and electronic device.
Background
A terminal user uses a large number of applications every day. Usually, after an application has been pushed to the background, if it is not cleaned up in time it occupies valuable system memory resources and affects system power consumption. It is therefore necessary to provide an application management method, device, medium, and electronic device.
Technical Problem
Embodiments of the present application provide an application management method, device, medium, and electronic device to intelligently close applications.
Technical Solution
An embodiment of the present application provides an application management method, applied to an electronic device, where the application management method includes the following steps:
obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating the sample vector set with a Back Propagation (BP) neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
An embodiment of the present application further provides an application management apparatus, where the apparatus includes:
an obtaining module configured to obtain the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
a generating module configured to calculate the sample vector set with a BP neural network algorithm to generate a training model;
a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
a determining module configured to determine whether the application needs to be closed.
An embodiment of the present application further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described above.
An embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, the electronic device is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to execute the following steps:
obtaining the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating the sample vector set with a BP neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
Beneficial Effects
Embodiments of the present application provide an application management method, device, medium, and electronic device to intelligently close applications.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic system diagram of the application management device according to an embodiment of the present application.
FIG. 2 is a schematic diagram of an application scenario of the application management device according to an embodiment of the present application.
FIG. 3 is a schematic flowchart of the application management method according to an embodiment of the present application.
FIG. 4 is another schematic flowchart of the application management method according to an embodiment of the present application.
FIG. 5 is a schematic structural diagram of a device according to an embodiment of the present application.
FIG. 6 is another schematic structural diagram of the device according to an embodiment of the present application.
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
FIG. 8 is another schematic structural diagram of the electronic device according to an embodiment of the present application.
Embodiments of the Invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
An application management method is applied to an electronic device, wherein the application management method includes the following steps:
obtaining a sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating the sample vector set with a Back Propagation (BP) neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
In the application management method, the step of calculating the sample vector set with the BP neural network algorithm to generate the training model includes:
defining a network structure; and
bringing the sample vector set into the network structure for calculation to obtain the training model.
In the application management method, the step of defining the network structure includes:
setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
setting a hidden layer, the hidden layer including M nodes;
setting a classification layer, where the classification layer adopts a Softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value;
setting an output layer, the output layer including 2 nodes;
setting an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1;
setting a batch size, the batch size being A; and
setting a learning rate, the learning rate being B.
In the application management method, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
In the application management method, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
In the application management method, the step of bringing the sample vector set into the network structure for calculation to obtain the training model includes:
inputting the sample vector set at the input layer for calculation to obtain the output value of the input layer;
inputting the output value of the input layer at the hidden layer to obtain the output value of the hidden layer;
inputting the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
bringing the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2; and
modifying the network structure according to the predicted result value y to obtain the training model.
In the application management method, in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to calculate the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
In the application management method, the step of determining whether the application needs to be closed includes:
determining that the application needs to be closed when y = [1 0]^T; and
determining that the application needs to be retained when y = [0 1]^T.
In the application management method, when the application enters the background, inputting the current feature information s of the application into the training model for calculation includes:
collecting the current feature information s of the application; and
bringing the current feature information s into the training model for calculation.
An application management device, wherein the device includes:
an obtaining module configured to obtain the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
a generating module configured to calculate the sample vector set with a BP neural network algorithm to generate a training model;
a calculation module configured to input the current feature information s of the application into the training model for calculation when the application enters the background; and
a determining module configured to determine whether the application needs to be closed.
A medium, wherein a plurality of instructions are stored in the medium, the instructions being adapted to be loaded by a processor to execute the application management method described above.
An electronic device, wherein the electronic device includes a processor and a memory, the electronic device is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to execute:
obtaining the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating the sample vector set with a Back Propagation (BP) neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
In the electronic device, the step of calculating the sample vector set with the BP neural network algorithm to generate the training model includes:
defining a network structure; and
bringing the sample vector set into the network structure for calculation to obtain the training model.
In the electronic device, the step of defining the network structure includes:
setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i;
setting a hidden layer, the hidden layer including M nodes;
setting a classification layer, where the classification layer adopts a Softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value;
setting an output layer, the output layer including 2 nodes;
setting an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1;
setting a batch size, the batch size being A; and
setting a learning rate, the learning rate being B.
In the electronic device, the hidden layer includes a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
In the electronic device, the dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
In the electronic device, the step of bringing the sample vector set into the network structure for calculation to obtain the training model includes:
inputting the sample vector set at the input layer for calculation to obtain the output value of the input layer;
inputting the output value of the input layer at the hidden layer to obtain the output value of the hidden layer;
inputting the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
bringing the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2; and
modifying the network structure according to the predicted result value y to obtain the training model.
In the electronic device, in the step of inputting the current feature information s of the application into the training model for calculation, the current feature information s is input into the training model to calculate the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
In the electronic device, the step of determining whether the application needs to be closed includes:
determining that the application needs to be closed when y = [1 0]^T; and
determining that the application needs to be retained when y = [0 1]^T.
The application management method provided by the present application is mainly applied to electronic devices, for example smart mobile electronic devices such as a wristband, a smartphone, a tablet computer based on an Apple or Android system, or a notebook computer based on a Windows or Linux system. It should be noted that the application may be a chat application, a video application, a music application, a shopping application, a shared-bicycle application, a mobile banking application, or the like.
Referring to FIG. 1, FIG. 1 is a schematic system diagram of the application management device according to an embodiment of the present application. The application management device is mainly configured to: obtain historical feature information x_i of an application from a database; calculate the historical feature information x_i with an algorithm to obtain a training model; then input the current feature information s of the application into the training model for calculation; and use the calculation result to judge whether the application can be closed, so as to manage the preset application, for example by closing or freezing it.
Specifically, referring to FIG. 2, FIG. 2 is a schematic diagram of an application scenario of the application management method according to an embodiment of the present application. In one embodiment, the historical feature information x_i of the application is obtained from the database and calculated with an algorithm to obtain a training model; then, when the application management device detects that the application enters the background of the electronic device, the current feature information s of the application is input into the training model for calculation, and the calculation result determines whether the application can be closed. For example, the historical feature information x_i of application a is obtained from the database and calculated with an algorithm to obtain a training model; when the application management device detects that application a enters the background of the electronic device, the current feature information s of application a is input into the training model for calculation, the calculation result determines that application a can be closed, and application a is closed; when the application management device detects that application b enters the background of the electronic device, the current feature information s of application b is input into the training model for calculation, the calculation result determines that application b needs to be retained, and application b is retained.
An embodiment of the present application provides an application management method. The execution entity of the method may be the application management device provided by an embodiment of the present invention, or an electronic device integrated with the application management device, where the application management device can be implemented in hardware or software.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of the application management method according to an embodiment of the present application. The application management method provided by the embodiment of the present application is applied to an electronic device, and the specific flow may be as follows:
Step S101: Obtain the sample vector set of the application, wherein each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
The sample vector set is obtained from a sample database.
For the feature information of the multiple dimensions, refer to Table 1.
[Table 1 — feature information of ten dimensions of the application; reproduced as an image in the original publication.]
It should be noted that the feature information of the ten dimensions shown in Table 1 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 1, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of six dimensions can be selected:
A. the time the application has resided in the background;
B. whether the screen is lit, for example, screen lit is recorded as 1 and screen off as 0;
C. the total number of uses in the current week;
D. the total usage time in the current week;
E. whether WiFi is turned on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
F. whether the device is currently charging, for example, currently charging is recorded as 1 and not charging as 0.
Step S102: Calculate the sample vector set with the BP neural network algorithm to generate a training model.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of the application management method according to an embodiment of the present application. In one embodiment, step S102 may include:
Step S1021: Define a network structure; and
Step S1022: Bring the sample vector set into the network structure for calculation to obtain a training model.
In step S1021, defining the network structure includes:
Step S1021a: Set an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
Step S1021b: Set a hidden layer, the hidden layer including M nodes.
The hidden layer may include a plurality of hidden sub-layers, and the number of nodes in each hidden sub-layer is less than 10, to simplify the operation process.
In one embodiment, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer; the first hidden sub-layer includes 10 nodes, the second hidden sub-layer includes 5 nodes, and the third hidden sub-layer includes 5 nodes.
Step S1021c: Set a classification layer, where the classification layer adopts a softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
Step S1021d: Set an output layer, the output layer including 2 nodes.
Step S1021e: Set an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1.
Step S1021f: Set a batch size, the batch size being A.
The batch size can be flexibly adjusted according to actual conditions, and may be 50-200.
In one embodiment, the batch size is 128.
Step S1021g: Set a learning rate, the learning rate being B.
The learning rate can be flexibly adjusted according to actual conditions, and may be 0.1-1.5.
In one embodiment, the learning rate is 0.9.
It should be noted that the order of steps S1021a, S1021b, S1021c, S1021d, S1021e, S1021f, and S1021g can be flexibly adjusted.
In step S1022, bringing the sample vector set into the network structure for calculation to obtain the training model may include:
Step S1022a: Input the sample vector set at the input layer for calculation to obtain the output value of the input layer.
Step S1022b: Input the output value of the input layer at the hidden layer to obtain the output value of the hidden layer.
The output value of the input layer is the input value of the hidden layer.
In one embodiment, the hidden layer may include a plurality of hidden sub-layers: the output value of the input layer is the input value of the first hidden sub-layer, the output value of the first hidden sub-layer is the input value of the second hidden sub-layer, the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
Step S1022c: Input the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
The output value of the hidden layer is the input value of the classification layer.
In one embodiment, the hidden layer may include a plurality of hidden sub-layers, and the output value of the last hidden sub-layer is the input value of the classification layer.
Step S1022d: Bring the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
The output value of the classification layer is the input value of the output layer.
Step S1022e: Modify the network structure according to the predicted result value y to obtain the training model.
Step S103: When the application enters the background, input the current feature information s of the application into the training model for calculation.
Referring to FIG. 4, in one embodiment, step S103 may include:
Step S1031: Collect the current feature information s of the application.
The dimensions of the collected current feature information s are the same as the dimensions of the collected historical feature information x_i of the application.
Step S1032: Bring the current feature information s into the training model for calculation.
The current feature information s is input into the training model to calculate the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
Step S104: Determine whether the application needs to be closed.
It should be noted that when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
The application management method provided by the present application acquires the historical feature information x_i, generates a training model with the BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of the application management device according to an embodiment of the present application. The device 30 includes an obtaining module 31, a generating module 32, a calculation module 33, and a determining module 34.
It should be noted that the application may be a chat application, a video application, a music application, a shopping application, a shared-bicycle application, a mobile banking application, or the like.
The obtaining module 31 is configured to obtain the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application.
The sample vector set is obtained from a sample database.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of the application management device according to an embodiment of the present application. The device 30 further includes a detection module 35, configured to detect that the application enters the background.
The device 30 can also include a storage module 36. The storage module 36 is configured to store the historical feature information x_i of the application.
For the feature information of the multiple dimensions, refer to Table 2.
[Table 2 — feature information of ten dimensions of the application; reproduced as an image in the original publication.]
It should be noted that the feature information of the ten dimensions shown in Table 2 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 2, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of six dimensions can be selected:
A. the time the application has resided in the background;
B. whether the screen is lit, for example, screen lit is recorded as 1 and screen off as 0;
C. the total number of uses in the current week;
D. the total usage time in the current week;
E. whether WiFi is turned on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
F. whether the device is currently charging, for example, currently charging is recorded as 1 and not charging as 0.
The generating module 32 is configured to calculate the sample vector set with a BP neural network algorithm to generate a training model.
The generating module 32 trains on the historical feature information x_i acquired by the obtaining module 31, inputting the historical feature information x_i into the BP neural network algorithm.
Referring to FIG. 6, the generating module 32 includes a definition module 321 and a solving module 322.
The definition module 321 is used to define a network structure.
The definition module 321 may include an input layer definition module 3211, a hidden layer definition module 3212, a classification layer definition module 3213, an output layer definition module 3214, an activation function definition module 3215, a batch size definition module 3216, and a learning rate definition module 3217.
The input layer definition module 3211 is configured to set an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
The hidden layer definition module 3212 is configured to set a hidden layer, the hidden layer including M nodes.
The hidden layer may include a plurality of hidden sub-layers, and the number of nodes in each hidden sub-layer is less than 10, to simplify the operation process.
In one embodiment, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer; the first hidden sub-layer includes 10 nodes, the second hidden sub-layer includes 5 nodes, and the third hidden sub-layer includes 5 nodes.
The classification layer definition module 3213 is configured to set a classification layer, where the classification layer adopts a softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
The output layer definition module 3214 is configured to set an output layer, the output layer including 2 nodes.
The activation function definition module 3215 is configured to set an activation function, where the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1.
The batch size definition module 3216 is configured to set a batch size, the batch size being A.
The batch size can be flexibly adjusted according to actual conditions, and may be 50-200. In one embodiment, the batch size is 128.
The learning rate definition module 3217 is configured to set a learning rate, the learning rate being B.
The learning rate can be flexibly adjusted according to actual conditions, and may be 0.1-1.5. In one embodiment, the learning rate is 0.9.
It should be noted that the order in which the input layer definition module 3211 sets the input layer, the hidden layer definition module 3212 sets the hidden layer, the classification layer definition module 3213 sets the classification layer, the output layer definition module 3214 sets the output layer, the activation function definition module 3215 sets the activation function, the batch size definition module 3216 sets the batch size, and the learning rate definition module 3217 sets the learning rate can be flexibly adjusted.
The solving module 322 is configured to bring the sample vector set into the network structure for calculation to obtain a training model.
The solving module 322 can include a first solving module 3221, a second solving module 3222, a third solving module 3223, a fourth solving module 3224, and a correction module 3225.
The first solving module 3221 is configured to input the sample vector set at the input layer for calculation to obtain the output value of the input layer.
The second solving module 3222 is configured to input the output value of the input layer at the hidden layer to obtain the output value of the hidden layer.
The output value of the input layer is the input value of the hidden layer.
In one embodiment, the hidden layer may include a plurality of hidden sub-layers: the output value of the input layer is the input value of the first hidden sub-layer, the output value of the first hidden sub-layer is the input value of the second hidden sub-layer, the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
The third solving module 3223 is configured to input the output value of the hidden layer at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
The output value of the hidden layer is the input value of the classification layer.
The fourth solving module 3224 is configured to bring the predicted probability value into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
The output value of the classification layer is the input value of the output layer.
The correction module 3225 is configured to modify the network structure according to the predicted result value y to obtain the training model.
The calculation module 33 is configured to input the current feature information s of the application into the training model for calculation when the application enters the background.
Referring to FIG. 6, in one embodiment, the calculation module 33 may include a collecting module 331 and an operation module 332.
The collecting module 331 is configured to collect the current feature information s of the application.
The dimensions of the collected current feature information s are the same as the dimensions of the collected historical feature information x_i of the application.
The operation module 332 is configured to bring the current feature information s into the training model for calculation.
The current feature information s is input into the training model to calculate the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
In one embodiment, the collecting module 331 is configured to collect the current feature information s periodically at a predetermined collection time and store it in the storage module 36; the collecting module 331 is further configured to collect the current feature information s corresponding to the time point at which the application is detected entering the background, and to input that current feature information s into the operation module 332 to be brought into the training model for calculation.
The determining module 34 is configured to determine whether the application needs to be closed.
It should be noted that when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
The device 30 can also include a closing module 37, configured to close the application when it is determined that the application needs to be closed.
The application management device provided by the present application acquires the historical feature information x_i, generates a training model with a BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of the electronic device according to an embodiment of the present application. The electronic device 500 includes a processor 501 and a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is the control center of the electronic device 500. It connects the various parts of the entire electronic device 500 through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or loading applications stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the electronic device 500 as a whole.
In this embodiment, the processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more applications into the memory 502 according to the following steps, and runs the applications stored in the memory 502, thereby implementing various functions:
obtaining the sample vector set of the application, where each sample vector in the sample vector set includes historical feature information x_i of multiple dimensions of the application;
calculating the sample vector set with a neural network algorithm to generate a training model;
when the application enters the background, inputting the current feature information s of the application into the training model for calculation; and
determining whether the application needs to be closed.
It should be noted that the application may be a chat application, a video application, a music application, a shopping application, a shared-bicycle application, a mobile banking application, or the like.
The sample vector set is obtained from a sample database, where each sample vector includes historical feature information x_i of multiple dimensions of the application.
For the feature information of the multiple dimensions, refer to Table 3.
[Table 3 — feature information of ten dimensions of the application; reproduced as an image in the original publication.]
It should be noted that the feature information of the ten dimensions shown in Table 3 above is only one embodiment of the present application; the application is not limited to the feature information of the ten dimensions shown in Table 3, and may use one of them, at least two of them, or all of them, and may also include feature information of other dimensions, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of six dimensions can be selected:
A. the time the application has resided in the background;
B. whether the screen is lit, for example, screen lit is recorded as 1 and screen off as 0;
C. the total number of uses in the current week;
D. the total usage time in the current week;
E. whether WiFi is turned on, for example, WiFi on is recorded as 1 and WiFi off as 0; and
F. whether the device is currently charging, for example, currently charging is recorded as 1 and not charging as 0.
In one embodiment, the processor 501 calculates the sample vector set with a BP neural network algorithm, and generating the training model further includes:
defining a network structure; and
bringing the sample vector set into the network structure for calculation to obtain the training model.
Defining the network structure includes:
setting an input layer, where the input layer includes N nodes and the number of nodes of the input layer equals the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10, to simplify the operation process.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer includes 6 nodes.
A hidden layer is set, the hidden layer including M nodes.
The hidden layer may include a plurality of hidden sub-layers, and the number of nodes in each hidden sub-layer is less than 10, to simplify the operation process.
In one embodiment, the hidden layer may include a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer; the first hidden sub-layer includes 10 nodes, the second hidden sub-layer includes 5 nodes, and the third hidden sub-layer includes 5 nodes.
A classification layer is set; the classification layer adopts a softmax function, p = e^{Z_K} / Σ_{j=1}^{C} e^{Z_j}, in which p is the predicted probability value, Z_K is the intermediate value, C is the number of categories of the predicted result, and Z_j is the j-th intermediate value.
An output layer is set, the output layer including 2 nodes.
An activation function is set; the activation function adopts a sigmoid function, f(x) = 1 / (1 + e^{-x}), in which f(x) ranges from 0 to 1.
A batch size is set, the batch size being A.
The batch size can be flexibly adjusted according to actual conditions, and may be 50-200. In one embodiment, the batch size is 128.
A learning rate is set, the learning rate being B.
The learning rate can be flexibly adjusted according to actual conditions, and may be 0.1-1.5. In one embodiment, the learning rate is 0.9.
It should be noted that the order of setting the input layer, the hidden layer, the classification layer, the output layer, the activation function, the batch size, and the learning rate can be flexibly adjusted.
The step of bringing the sample vector set into the network structure for calculation to obtain the training model may include:
The sample vector set is input at the input layer for calculation to obtain the output value of the input layer.
The output value of the input layer is input at the hidden layer to obtain the output value of the hidden layer.
The output value of the input layer is the input value of the hidden layer.
In one embodiment, the hidden layer may include a plurality of hidden sub-layers: the output value of the input layer is the input value of the first hidden sub-layer, the output value of the first hidden sub-layer is the input value of the second hidden sub-layer, the output value of the second hidden sub-layer is the input value of the third hidden sub-layer, and so on.
The output value of the hidden layer is input at the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T.
The output value of the hidden layer is the input value of the classification layer.
In one embodiment, the hidden layer may include a plurality of hidden sub-layers, and the output value of the last hidden sub-layer is the input value of the classification layer.
The predicted probability value is brought into the output layer for calculation to obtain the predicted result value y, where y = [1 0]^T when p_1 > p_2 and y = [0 1]^T when p_1 ≤ p_2.
The output value of the classification layer is the input value of the output layer.
The network structure is modified according to the predicted result value y to obtain the training model.
The step of inputting the current feature information s of the application into the training model for calculation when the application enters the background includes:
The current feature information s of the application is collected.
The dimensions of the collected current feature information s are the same as the dimensions of the collected historical feature information x_i of the application.
The current feature information s is brought into the training model for calculation.
The current feature information s is input into the training model to calculate the predicted probability value [p_1' p_2']^T of the classification layer, where y = [1 0]^T when p_1' > p_2' and y = [0 1]^T when p_1' ≤ p_2'.
In the step of determining whether the application needs to be closed, when y = [1 0]^T, it is determined that the application needs to be closed; when y = [0 1]^T, it is determined that the application needs to be retained.
The memory 502 can be used to store applications and data. The programs stored in the memory 502 contain instructions executable by the processor. The programs can constitute various functional modules. The processor 501 executes various functional applications and data processing by running the programs stored in the memory 502.
In some embodiments, as shown in FIG. 8, FIG. 8 is a schematic structural diagram of the electronic device according to an embodiment of the present application. The electronic device 500 further includes a radio frequency circuit 503, a display screen 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power source 509. The processor 501 is electrically connected to the radio frequency circuit 503, the display screen 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power source 509, respectively.
The radio frequency circuit 503 is configured to transmit and receive radio frequency signals, so as to communicate with a server or other electronic devices over a wireless communication network.
The display screen 504 can be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the terminal, which can be composed of images, text, icons, video, and any combination thereof.
The control circuit 505 is electrically connected to the display screen 504 and is configured to control the display screen 504 to display information.
The input unit 506 can be configured to receive input digits, character information, or user characteristic information (for example, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The audio circuit 507 can provide an audio interface between the user and the terminal through a speaker and a microphone.
The sensor 508 is used to collect external environment information. The sensor 508 can include one or more of an ambient brightness sensor, an acceleration sensor, a gyroscope, and the like.
The power source 509 is used to supply power to the various components of the electronic device 500. In some embodiments, the power source 509 can be logically connected to the processor 501 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
Although not shown in FIG. 8, the electronic device 500 may further include a camera, a Bluetooth module, and the like; details are not described herein again.
The electronic device provided by the present application acquires the historical feature information x_i, generates a training model with the BP neural network algorithm, brings the current feature information s of the application into the training model when it detects that the application enters the background, and then determines whether the application needs to be closed, thereby intelligently closing applications.
An embodiment of the present invention further provides a medium in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to execute the application management method described in any of the above embodiments.
The application management method, device, medium, and electronic device provided by the embodiments of the present invention belong to the same concept; for the specific implementation process, refer to the full specification, and details are not described herein again.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer readable storage medium, and the storage medium may include a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The application management method, device, medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application; the description of the above embodiments is only intended to help understand the present application. Meanwhile, for those skilled in the art, there will be changes in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (19)

  1. 一种应用程序管控方法,应用于电子设备,其中,所述应用程序管控方法包括以下步骤:
    获取所述应用程序样本向量集,其中该样本向量集中的样本向量包括所述应用程序多个维度的历史特征信息x i
    采用反向传播(Back Propagation,BP)神经网络算法对样本向量集进行计算,生成训练模型;
    当应用程序进入后台,将所述应用程序的当前特征信息s输入所述训练模型进行计算;以及
    判断所述应用程序是否需要关闭。
  2. 如权利要求1所述的应用程序管控方法,其中,采用BP神经网络算法对样本向量集进行计算,生成训练模型的步骤包括:
    定义网络结构;以及
    将样本向量集带入网络结构进行计算,得到训练模型。
  3. The application management and control method according to claim 2, wherein the step of defining the network structure comprises:
    setting an input layer, the input layer comprising N nodes, the number of nodes of the input layer being the same as the number of dimensions of the historical feature information x_i;
    setting a hidden layer, the hidden layer comprising M nodes;
    setting a classification layer, the classification layer using a softmax function, the softmax function being

    $$p = \frac{e^{Z_K}}{\sum_{j=1}^{C} e^{Z_j}}$$

    wherein p is a predicted probability value, Z_K is an intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value;
    setting an output layer, the output layer comprising 2 nodes;
    setting an activation function, the activation function using a sigmoid function, the sigmoid function being

    $$f(x) = \frac{1}{1 + e^{-x}}$$

    wherein the range of f(x) is 0 to 1;
    setting a batch size, the batch size being A; and
    setting a learning rate, the learning rate being B.
  4. The application management and control method according to claim 3, wherein the hidden layer comprises a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
  5. The application management and control method according to claim 3, wherein the number of dimensions of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  6. The application management and control method according to claim 3, wherein the step of substituting the sample vector set into the network structure for calculation to obtain the training model comprises:
    inputting the sample vector set at the input layer for calculation to obtain an output value of the input layer;
    inputting the output value of the input layer to the hidden layer to obtain an output value of the hidden layer;
    inputting the output value of the hidden layer to the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
    substituting the predicted probability value into the output layer for calculation to obtain a prediction result value y, wherein y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2; and
    correcting the network structure according to the prediction result value y to obtain the training model.
  7. The application management and control method according to claim 6, wherein in the step of inputting the current feature information s of the application into the training model for calculation, inputting the current feature information s into the training model for calculation yields the predicted probability value [p_1' p_2']^T of the classification layer, wherein y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
  8. The application management and control method according to claim 7, wherein the step of determining whether the application needs to be closed comprises:
    when y = [1 0]^T, determining that the application needs to be closed; and
    when y = [0 1]^T, determining that the application needs to be retained.
  9. The application management and control method according to claim 1, wherein the step of inputting the current feature information s of the application into the training model for calculation when the application enters the background comprises:
    collecting the current feature information s of the application; and
    substituting the current feature information s into the training model for calculation.
  10. An application management and control device, wherein the device comprises:
    an obtaining module, configured to obtain a sample vector set of the application, wherein a sample vector in the sample vector set comprises historical feature information x_i of a plurality of dimensions of the application;
    a generating module, configured to calculate the sample vector set using a BP neural network algorithm to generate a training model;
    a calculation module, configured to input current feature information s of the application into the training model for calculation when the application enters the background; and
    a determining module, configured to determine whether the application needs to be closed.
  11. A medium, wherein a plurality of instructions are stored in the medium, the instructions being adapted to be loaded by a processor to execute the application management and control method according to any one of claims 1 to 9.
  12. An electronic device, wherein the electronic device comprises a processor and a memory, the processor is electrically connected to the memory, the memory is configured to store instructions and data, and the processor is configured to execute:
    obtaining a sample vector set of the application, wherein a sample vector in the sample vector set comprises historical feature information x_i of a plurality of dimensions of the application;
    calculating the sample vector set using a Back Propagation (BP) neural network algorithm to generate a training model;
    when the application enters the background, inputting current feature information s of the application into the training model for calculation; and
    determining whether the application needs to be closed.
  13. The electronic device according to claim 12, wherein the step of calculating the sample vector set using the BP neural network algorithm to generate the training model comprises:
    defining a network structure; and
    substituting the sample vector set into the network structure for calculation to obtain the training model.
  14. The electronic device according to claim 13, wherein the step of defining the network structure comprises:
    setting an input layer, the input layer comprising N nodes, the number of nodes of the input layer being the same as the number of dimensions of the historical feature information x_i;
    setting a hidden layer, the hidden layer comprising M nodes;
    setting a classification layer, the classification layer using a softmax function, the softmax function being

    $$p = \frac{e^{Z_K}}{\sum_{j=1}^{C} e^{Z_j}}$$

    wherein p is a predicted probability value, Z_K is an intermediate value, C is the number of categories of the prediction result, and Z_j is the j-th intermediate value;
    setting an output layer, the output layer comprising 2 nodes;
    setting an activation function, the activation function using a sigmoid function, the sigmoid function being

    $$f(x) = \frac{1}{1 + e^{-x}}$$

    wherein the range of f(x) is 0 to 1;
    setting a batch size, the batch size being A; and
    setting a learning rate, the learning rate being B.
  15. The electronic device according to claim 14, wherein the hidden layer comprises a first hidden sub-layer, a second hidden sub-layer, and a third hidden sub-layer, and the number of nodes in each of the first, second, and third hidden sub-layers is less than 10.
  16. The electronic device according to claim 14, wherein the number of dimensions of the historical feature information x_i is less than 10, and the number of nodes of the input layer is less than 10.
  17. The electronic device according to claim 14, wherein the step of substituting the sample vector set into the network structure for calculation to obtain the training model comprises:
    inputting the sample vector set at the input layer for calculation to obtain an output value of the input layer;
    inputting the output value of the input layer to the hidden layer to obtain an output value of the hidden layer;
    inputting the output value of the hidden layer to the classification layer for calculation to obtain the predicted probability value [p_1 p_2]^T;
    substituting the predicted probability value into the output layer for calculation to obtain a prediction result value y, wherein y = [1 0]^T when p_1 is greater than p_2, and y = [0 1]^T when p_1 is less than or equal to p_2; and
    correcting the network structure according to the prediction result value y to obtain the training model.
  18. The electronic device according to claim 17, wherein in the step of inputting the current feature information s of the application into the training model for calculation, inputting the current feature information s into the training model for calculation yields the predicted probability value [p_1' p_2']^T of the classification layer, wherein y = [1 0]^T when p_1' is greater than p_2', and y = [0 1]^T when p_1' is less than or equal to p_2'.
  19. The electronic device according to claim 18, wherein the step of determining whether the application needs to be closed comprises:
    when y = [1 0]^T, determining that the application needs to be closed; and
    when y = [0 1]^T, determining that the application needs to be retained.
