CN112860046A - Method, apparatus, electronic device and medium for selecting operation mode

Info

Publication number
CN112860046A
Authority
CN
China
Prior art keywords
target
operation mode
parameters
user
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911178690.9A
Other languages
Chinese (zh)
Other versions
CN112860046B
Inventor
杨殿卿
吴和平
Current Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201911178690.9A priority Critical patent/CN112860046B/en
Publication of CN112860046A publication Critical patent/CN112860046A/en
Application granted granted Critical
Publication of CN112860046B publication Critical patent/CN112860046B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3206Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F1/3215Monitoring of peripheral devices
    • G06F1/3218Monitoring of peripheral devices of display devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/325Power saving in peripheral device
    • G06F1/3265Power saving in display device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/3287Power saving characterised by the action undertaken by switching off individual functional units in the computer system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)
  • Power Sources (AREA)

Abstract

The application discloses a method, an apparatus, an electronic device, and a medium for selecting an operation mode. After the usage parameters of a target device and the state parameters of a target user are obtained, a target operation mode can be determined based on those parameters, and the operation mode of the target device is then adjusted to the target operation mode. With this technical scheme, the currently optimal operation mode can be identified in real time from the smart device's current usage parameters and the current user's expression-state parameters, and the device's operation mode can be adjusted dynamically. This addresses the problem in the related art that a smart device always uses the same operation mode and therefore consumes device resources unnecessarily.

Description

Method, apparatus, electronic device and medium for selecting operation mode
Technical Field
The present application relates to data processing technologies, and in particular to a method, an apparatus, an electronic device, and a medium for selecting an operation mode.
Background
With the rise of the communications era, smart devices have developed continuously and are used by more and more people.
Further, with the rapid development of the Internet, people use smart devices not only for daily communication but also for many other functions, such as playing games, watching movies, and browsing websites. Each function places the phone in a different usage state. For example, browsing a web page occupies only a small amount of a phone's memory, while running a game occupies a large amount.
However, in the related art, a smart device always keeps the same operation mode, which wastes operating resources. How to select an appropriate operation mode for different usage scenarios has therefore become a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a method and a device for selecting an operation mode, electronic equipment and a medium.
According to an aspect of an embodiment of the present application, there is provided a method for selecting an operation mode, including:
acquiring use parameters of target equipment and state parameters of a target user, wherein the target user is a user using the target equipment;
determining a target operation mode based on the use parameter and the state parameter;
and adjusting the operation mode of the target equipment to the target operation mode.
Optionally, in another embodiment based on the above method of the present application, the usage parameter includes at least one of the following parameters:
screen operating frequency, screen resolution, CPU utilization.
Optionally, in another embodiment based on the foregoing method of the present application, the obtaining the state parameter of the target user includes:
acquiring a biological information image of the target user by using a camera device;
and performing feature extraction on the biological information image of the target user by using a convolutional neural network model to obtain the state parameters of the target user.
Optionally, in another embodiment based on the above method of the present application, the determining a target operation mode based on the usage parameter and the status parameter includes:
and when the fact that the use parameters meet a first preset condition and the state change value of the target user meets a second preset condition is detected, determining the target operation mode, wherein the state change value is a numerical value generated based on the state parameters.
Optionally, in another embodiment based on the above method of the present application, the determining a target operation mode based on the usage parameter and the status parameter includes:
detecting operation parameters of the target device, wherein the operation parameters comprise network communication parameters of the target device, and/or background application parameters, and/or electric quantity parameters;
determining the target operation mode based on the usage parameter, the status parameter, and an operation parameter of a target device.
Optionally, in another embodiment based on the above method of the present application, the determining a target operation mode based on the usage parameter and the status parameter includes:
when detecting that the use parameters meet a third preset condition and the state change value of the target user meets a fourth preset condition, generating voice broadcast, wherein the voice broadcast is used for confirming an operation mode to the target user;
and when reply information generated based on the voice broadcast is received, determining the target operation mode.
Optionally, in another embodiment based on the foregoing method of the present application, before the determining the target operation mode based on the usage parameter and the status parameter, the method further includes:
acquiring time information in real time;
and when the time information is detected to meet a fifth preset condition, acquiring the use parameters of the target equipment and the state parameters of the target user.
According to another aspect of the embodiments of the present application, there is provided an apparatus for selecting an operation mode, including:
an acquisition module configured to acquire a usage parameter of a target device and a state parameter of a target user, the target user being a user using the target device;
a determination module configured to determine a target operating mode based on the usage parameter and the status parameter;
an adjustment module configured to adjust an operation mode of the target device to the target operation mode.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a display in communication with the memory, configured to execute the executable instructions so as to perform the operations of any of the above methods for selecting an operation mode.
According to yet another aspect of the embodiments of the present application, a computer-readable storage medium is provided for storing computer-readable instructions, which when executed perform the operations of any one of the methods for selecting an operation mode described above.
In this application, after the usage parameters of the target device and the state parameters of the target user are obtained, the target operation mode can be determined based on those parameters, and the operation mode of the target device is then adjusted to the target operation mode. With this technical scheme, the currently optimal operation mode can be identified in real time from the smart device's current usage parameters and the current user's expression-state parameters, and the device's operation mode can be adjusted dynamically. This addresses the problem in the related art that a smart device always uses the same operation mode and therefore consumes device resources unnecessarily.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a system architecture for selecting an operating mode according to the present application;
FIG. 2 is a schematic diagram of a method of selecting an operating mode as set forth herein;
FIG. 3 is a schematic structural diagram of an apparatus for selecting an operation mode according to the present application;
fig. 4 is a schematic view of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, the technical solutions of the various embodiments of the present application may be combined with each other, provided that such a combination can be realized by a person skilled in the art; when the technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and to fall outside the protection scope of the present application.
It should be noted that all the directional indicators in the embodiments of the present application (such as upper, lower, left, right, front, and rear) are only used to explain the relative positional relationship between components, their motion, and so on in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
A method for performing a selection of an operating mode according to an exemplary embodiment of the present application is described below in conjunction with fig. 1-2. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the method or apparatus for selecting an operation mode of an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like.
The terminal apparatuses 101, 102, 103 in the present application may be terminal apparatuses that provide various services. For example, a user obtains a use parameter of a target device and a state parameter of a target user through a terminal device 103 (which may also be the terminal device 101 or 102), where the target user is a user using the target device; determining a target operation mode based on the use parameter and the state parameter; and adjusting the operation mode of the target equipment to the target operation mode.
It should be noted that the method for selecting an operation mode provided in the embodiments of the present application may be executed by one or more of the terminal devices 101, 102, and 103 and/or the server 105. Accordingly, the apparatus for selecting an operation mode provided in the embodiments of the present application is generally disposed in the corresponding terminal device and/or the server 105, but the present application is not limited thereto.
The application also provides a method, a device, a target terminal and a medium for selecting the operation mode.
Fig. 2 schematically shows a flow diagram of a method of selecting an operating mode according to an embodiment of the present application. As shown in fig. 2, the method includes:
s101, obtaining the use parameters of the target equipment and the state parameters of a target user, wherein the target user is a user using the target equipment.
It should be noted that the target device is not specifically limited in the present application; it may be, for example, a smart device or a server. The smart device may be a PC (Personal Computer), a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, or another mobile terminal device having a display function.
Further, the usage parameters are not specifically limited; they may be any parameters describing the current use of the smart device, for example the screen operation frequency, screen resolution, or CPU utilization of the target device. In the present application, the current operation mode can be determined from these usage parameters.
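As an illustration only (the field names below are our own and do not appear in the patent), the usage parameters just described can be modeled as a small structure that a mode selector samples periodically:

```python
from dataclasses import dataclass

@dataclass
class UsageParams:
    """Snapshot of the target device's current usage (illustrative fields)."""
    screen_op_freq: float     # user operations on the screen per second
    screen_resolution: tuple  # current (width, height) in pixels
    cpu_utilization: float    # fraction in [0.0, 1.0]

# Example snapshot such as a handset might report while a game is running
params = UsageParams(screen_op_freq=2.5,
                     screen_resolution=(2400, 1080),
                     cpu_utilization=0.72)
```

A real implementation would populate such a structure from platform APIs; the sampling mechanism itself is outside the scope of the patent.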
Further, the target user in the present application may be the user currently using the smart device. The state parameters of the target user are not specifically limited; for example, they may be a facial expression parameter of the user, the user's emotional state, or the user's degree of fatigue.
In addition, the present application may obtain the state information of the target user from the user's biometric information. The biometric information may be any biometric information of the user, for example the facial feature information or iris feature information of the target user.
S102, determining a target operation mode based on the use parameters and the state parameters.
In the present application, after the usage parameters of the target device and the state parameters of the target user are obtained, the currently optimal target operation mode of the target device can be determined based on both. The way the target operation mode is determined is not specifically limited. For example, when it is detected that both the usage parameters and the state parameters meet preset conditions, the currently optimal operation mode of the target device may be determined to be a game mode; under other preset conditions, it may be determined to be a communication mode, and so on.
And S103, adjusting the operation mode of the target equipment to the target operation mode.
It can be understood that, after the currently optimal target operation mode of the target device is determined, the current operation mode may be adjusted to the target operation mode. It should be noted that when the target operation mode is the same as the current operation mode, no operation needs to be performed on the target device.
It should be noted that the target operation mode is not specifically limited in the present application; it may be, for example, a game operation mode, a leisure operation mode, or a communication operation mode. Taking the game operation mode and a mobile phone as an example: when the operation mode of the target device is adjusted to the game operation mode, the phone may turn down the display brightness and close background applications other than the game, so as to ensure the best gaming experience for the user. When the operation mode is adjusted to the communication operation mode, the phone may switch to the optimal communication configuration, for example from a 4G to a 5G network or from the mobile network to a wireless network, so as to ensure the best communication experience. When the operation mode is adjusted to the leisure operation mode, the phone may switch to a power-saving configuration, for example from a 5G to a 4G network with the CPU operating rate set to its lowest, so as to save operating resources.
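The mode adjustments described above can be sketched as a lookup of per-mode device profiles. The profile contents, keys, and mode names below are illustrative assumptions mirroring the examples in the text, not values taken from the patent:

```python
from typing import Optional

# Hypothetical settings mirroring the game / communication / leisure examples.
MODE_PROFILES = {
    "game":          {"brightness": "low",  "network": "5G", "kill_background": True},
    "communication": {"brightness": "auto", "network": "5G", "kill_background": False},
    "leisure":       {"brightness": "auto", "network": "4G", "kill_background": False,
                      "cpu_governor": "powersave"},
}

def apply_mode(current_mode: str, target_mode: str) -> Optional[dict]:
    """Return the profile to apply, or None when the device needs no change (S103)."""
    if target_mode == current_mode:
        return None  # same mode: leave the device untouched
    return MODE_PROFILES[target_mode]
```

For example, `apply_mode("leisure", "game")` returns the game profile, while `apply_mode("game", "game")` returns `None`, matching the note that an identical target mode requires no operation.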
In this application, after the usage parameters of the target device and the state parameters of the target user are obtained, the target operation mode can be determined based on those parameters, and the operation mode of the target device is then adjusted accordingly. With this technical scheme, the currently optimal operation mode can be identified in real time from the smart device's current usage parameters and the current user's expression-state parameters, and the device's operation mode can be adjusted dynamically, avoiding the unnecessary resource consumption caused in the related art by always using the same operation mode.
In one possible embodiment of the present application, the usage parameters may include one or more of the following: screen operation frequency, screen resolution, and CPU utilization.
Further, in the present application, the state parameter of the target user may be obtained in the following manner:
acquiring a biological information image of a target user by using a camera device;
and performing feature extraction on the biological information image of the target user by using the convolutional neural network model to obtain the state parameters of the target user.
Further, after the biometric information image of the user is acquired, the state parameters of the user can be determined from the biometric feature information in the image. For example, the mobile terminal may control the camera device to capture a face image of the user in front of the terminal screen. According to this face information, the feature information of each part of the face is determined through a preset facial feature map. After the facial feature information of the target user is determined, it is input into a pre-trained convolutional neural network model, and the state parameters of the target user are determined from the detection result output by the model. In one possible embodiment, parameters reflecting the user's emotional state, such as joy, anger, or sadness, may be generated.
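Purely as a toy illustration of the pipeline above (capture, feature extraction, CNN classification), the sketch below stubs the convolutional model with a trivial scorer. The function names, the quadrant features, and the expression labels are all invented for the example and do not come from the patent:

```python
# Pipeline sketch: image -> facial features -> (stubbed) CNN head -> state parameter.
EXPRESSIONS = ["calm", "joy", "anger", "sadness"]

def extract_face_features(image):
    """Toy feature extractor: mean brightness of each image quadrant.

    A real system would instead locate facial landmarks via the preset
    facial feature map described in the text.
    """
    h, w = len(image), len(image[0])
    feats = []
    for r0, r1 in [(0, h // 2), (h // 2, h)]:
        for c0, c1 in [(0, w // 2), (w // 2, w)]:
            vals = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            feats.append(sum(vals) / len(vals))
    return feats

def classify_state(feats):
    """Stub for the trained CNN: map the strongest feature to an expression."""
    return EXPRESSIONS[feats.index(max(feats)) % len(EXPRESSIONS)]

# A 4x4 "image" whose bright top-right quadrant maps to the second label
state = classify_state(extract_face_features(
    [[0, 0, 9, 9], [0, 0, 9, 9], [1, 1, 1, 1], [1, 1, 1, 1]]))
```

The point of the sketch is only the data flow: the camera frame is reduced to features, and a model head turns the features into a discrete state parameter consumed by step S102.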
Alternatively, taking iris information as the biometric information: the mobile terminal first controls the camera device to capture a face image of the user in front of the terminal screen, determines the iris information of the face through the preset facial feature map, and then inputs the iris information into the pre-trained convolutional neural network model. The state parameters of the target user are likewise determined from the detection result output by the model, and in a possible embodiment parameters reflecting emotional states such as joy, anger, or sadness can also be generated.
In addition, before the facial feature information of the target user is obtained, a face detection network architecture may be defined using a deep convolutional neural network built from a cascade of a region proposal network, a region regression network, and a keypoint regression network. In this architecture, the region proposal network takes 16 × 16 × 3 image data as input, consists of a fully convolutional architecture, and outputs the confidence and coarse vertex positions of face region proposal boxes; the region regression network takes 32 × 32 × 3 image data as input, consists of convolutional and fully connected layers, and outputs the confidence and precise vertex positions of the face region; the keypoint regression network takes 64 × 64 × 3 image data as input, consists of convolutional and fully connected layers, and outputs the confidence and position of the face region together with the positions of the facial keypoints.
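Reading the input sizes as H × W × C crops, the three cascaded stages can be summarized in a small table; the stage names below are descriptive labels of ours, not terms from the patent:

```python
# Input sizes and outputs of the three cascaded face-detection stages.
CASCADE = [
    {"stage": "region_proposal",    "input": (16, 16, 3),
     "outputs": ["confidence", "coarse box vertices"]},
    {"stage": "region_regression",  "input": (32, 32, 3),
     "outputs": ["confidence", "refined box vertices"]},
    {"stage": "keypoint_regression", "input": (64, 64, 3),
     "outputs": ["confidence", "box position", "facial keypoints"]},
]

def crop_size_for(stage_name: str) -> tuple:
    """Look up the square crop size (H, W) a given stage expects."""
    for stage in CASCADE:
        if stage["stage"] == stage_name:
            return stage["input"][:2]
    raise KeyError(stage_name)
```

Each stage thus receives a progressively larger crop and refines the previous stage's output, which is the usual design choice in cascaded face detectors.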
A Convolutional Neural Network (CNN) is a kind of feedforward neural network that involves convolution computation and has a deep structure, and it is one of the representative algorithms of deep learning. A convolutional neural network has representation-learning capability and can perform shift-invariant classification of input information according to its hierarchical structure. Owing to its powerful ability to characterize image features, the CNN has achieved remarkable results in fields such as image classification, object detection, and semantic segmentation.
In one possible embodiment, the present application may process the user's biometric information image with a CNN model. It should be noted that, before the user's biometric information image is processed with the convolutional neural network model, the model must first be obtained through the following steps:
obtaining a sample image, wherein the sample image includes at least one sample feature;
and training a preset neural network image classification model by using the sample image to obtain a convolutional neural network model meeting a preset condition.
Further, the present application may identify, through a neural network image classification model, the sample features of at least one object contained in the sample image (for example, facial features, iris features, organ features, and the like). The model classifies each sample feature in the sample image, grouping features that belong to the same category, so that the multiple sample features obtained after semantic segmentation of the sample image may consist of several different types.
It should be noted that, when the neural network image classification model performs semantic segmentation on the sample image, the more accurately the pixels in the sample image are classified, the higher the accuracy of identifying the labeled object in the sample image. The preset condition may be set by the user.
For example, the preset condition may be set as: the pixel classification accuracy reaches 70% or more. The sample images are then used to train the neural network image classification model repeatedly; once the model's pixel classification accuracy reaches 70% or more, it can be applied in the embodiments of the present application to perform semantic segmentation on key frame data.
Optionally, the neural network image classification model may be trained on sample images. Specifically, a sample image is obtained, and a preset neural network image classification model is trained with it until a convolutional neural network model satisfying the preset condition is obtained.
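The "train repeatedly until the accuracy condition is met" loop above can be sketched as follows. The `evaluate` callback is a stand-in for a real training-plus-validation epoch, and the toy accuracy curve is invented for illustration:

```python
# Sketch of "train until pixel-classification accuracy exceeds the preset 70%".

def train_until_threshold(evaluate, threshold=0.70, max_epochs=100):
    """Repeat training epochs until accuracy passes the preset condition."""
    for epoch in range(1, max_epochs + 1):
        accuracy = evaluate(epoch)  # one epoch of training + validation
        if accuracy > threshold:
            return epoch, accuracy
    raise RuntimeError("model never reached the preset accuracy")

# Toy evaluator: accuracy starts at 50% and gains 6 points per epoch,
# so the threshold is crossed at epoch 4 (accuracy ~0.74).
epoch, acc = train_until_threshold(lambda e: 0.50 + 0.06 * e)
```

In a real system `evaluate` would run a forward/backward pass over the sample images and measure pixel-classification accuracy on a held-out set; the cap on `max_epochs` guards against a model that never converges.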
In another possible embodiment of the present application, in S102 (determining the target operation mode based on the usage parameter and the state parameter), the following may be implemented:
and when the fact that the use parameters meet the first preset condition and the state change value of the target user meets the second preset condition is detected, determining a target operation mode, wherein the state change value is a numerical value generated based on the state parameters.
First, it should be noted that the present application does not specifically limit the condition for determining the target operation mode, and for example, the target terminal may determine the current target operation mode when detecting any event. In a preferred mode, the target operation mode may be determined when it is detected that the usage parameter of the target terminal meets a first preset condition and the state change value of the target user meets a second preset condition. The state change value of the target user is used for reflecting the change degree of the expression state of the user in a preset time period. It will be appreciated that the state change value is derived from the state parameter.
In addition, the first preset condition and the second preset condition are not specifically limited, and the change of the first preset condition and the second preset condition does not affect the protection scope of the present application.
For convenience of description, the present application takes the game operation mode as the target operation mode and a mobile phone as the target device for illustration:
for example, when the mobile phone detects that its screen operation frequency has increased from a first value to a second value (because the user needs to tap the screen frequently when playing a game), and/or that its screen resolution has changed (for example, the display has switched from portrait to landscape), and/or that its CPU utilization has increased from a third value to a fourth value (because a game requires more running resources), it determines that the user is currently likely to be using a game application. The mobile phone then starts the front camera located on the screen to collect a plurality of facial information images of the user, and derives a user expression state corresponding to each facial information image from the facial features of those images. It then determines whether the user's expression state has changed (because the user's expression tends to become more pleased when playing a game) and whether the user's expression state change value is greater than a first preset threshold (i.e., whether a non-pleased state has turned into a pleased state). If so, it is confirmed that the target user is currently using the mobile phone to run a game application, and the current operation mode of the mobile phone is therefore determined to be the game operation mode.
Alternatively, taking the viewing operation mode as the target operation mode: when the mobile phone detects that its screen operation frequency has decreased from a fifth value to a sixth value (because the user taps the screen less often when watching a video), and/or that its screen resolution has changed (for example, the display has switched from portrait to landscape), and/or that its CPU utilization has decreased from a seventh value to an eighth value (because watching a video does not require many running resources), it determines that the user is currently likely to be using a video application. The mobile phone then starts the front camera located on the screen to collect a plurality of facial information images of the user, and derives a user expression state corresponding to each facial information image from the facial features of those images. Based on those expression states, it then determines whether the user's expression state has changed (because when the user watches a video, the expression will become pleased, angry, excited and so on as the video progresses) and whether the user's expression state change value is greater than a second preset threshold (i.e., whether the expression state has turned from a calm state into a pleased, angry, or excited state). If so, it is confirmed that the target user is currently using the mobile phone to run a video application, and the current operation mode of the mobile phone is therefore determined to be the viewing operation mode.
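The two decision examples above can be condensed into a single sketch. The function name, parameter names, and mode labels below are illustrative assumptions; as noted, the application does not limit the concrete preset conditions.

```python
def select_mode(freq_before, freq_after, orientation_changed,
                cpu_before, cpu_after, expr_change, expr_threshold):
    """Pick a candidate operation mode from usage parameters plus the
    expression state change value (all thresholds are illustrative)."""
    if expr_change <= expr_threshold:
        return "default"          # expression did not change enough
    if orientation_changed and freq_after > freq_before and cpu_after > cpu_before:
        return "game"             # rising interaction and CPU load
    if orientation_changed and freq_after < freq_before and cpu_after < cpu_before:
        return "viewing"          # falling interaction and CPU load
    return "default"
```

For instance, a jump in screen taps and CPU utilization with a landscape switch and a large expression change would map to the game operation mode, while the opposite trends would map to the viewing operation mode.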
Further optionally, S102 (determining the target operation mode based on the usage parameter and the state parameter) may also be implemented as follows:
detecting operation parameters of the target device, wherein the operation parameters include network communication parameters of the target device, and/or background application parameters, and/or battery level parameters;
and determining the target operation mode based on the usage parameter, the state parameter, and the operation parameters of the target device.
That is, in addition to being determined based on the usage parameter of the target device and the state parameter of the target user, the target operation mode may be further determined based on the operation parameters of the target device. The operation parameters include network communication parameters of the target device, and/or background application parameters, and/or battery level parameters.
Similarly, taking the game operation mode as the target operation mode: when the mobile phone detects that its screen operation frequency, screen resolution, and CPU utilization meet the first preset condition, and determines from the user's facial images that the user's expression state change value is greater than the first preset threshold, it determines that the target user is currently using the mobile phone to run a game application. Further, the mobile phone may detect its own network communication parameters to determine whether the current network communication quality is greater than a first threshold, and/or use the background application parameters to determine whether the number of currently running background applications is greater than a second threshold (too many background applications may affect the running quality of the game operation mode), and/or use the battery level parameters to detect whether the remaining battery level is greater than a third threshold (since the mobile phone needs to invoke more CPU resources to support the game after entering the game operation mode, more power is consumed in that mode). When the mobile phone determines that any one or more of the network communication parameters, the background application parameters, and the battery level parameters meet the corresponding condition, it determines that its current usage state can support the game operation mode, and the current operation mode of the mobile phone is therefore determined to be the game operation mode.
Alternatively, taking the viewing operation mode as the target operation mode: when the mobile phone detects that its screen operation frequency, screen resolution, and CPU utilization meet the third preset condition, and determines from the user's facial images that the user's expression state change value is greater than the second preset threshold, it determines that the target user is currently using the mobile phone to run a video application. Further, the mobile phone may detect its own network communication parameters to determine whether the current network communication quality is greater than a first threshold, and/or determine whether the number of currently running background applications is greater than a second threshold (too many background applications may affect the running quality of the viewing operation mode), and/or use the battery level parameters to detect whether the remaining battery level is greater than a third threshold (since more CPU resources must be invoked to support video definition and sound after entering the viewing operation mode, more power is consumed in that mode). When the mobile phone determines that any one or more of the network communication parameters, the background application parameters, and the battery level parameters meet the corresponding condition, it determines that its current usage state can support the viewing operation mode, and the current operation mode of the mobile phone is therefore determined to be the viewing operation mode.
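The operation-parameter check can be sketched as follows. The threshold names are assumptions of this illustration; note also that the text above allows "any one or more" of the conditions to suffice, whereas this sketch conservatively requires all three, which is a design choice of the illustration only.

```python
def device_supports_mode(network_quality, background_apps, battery_level,
                         min_network, max_background, min_battery):
    """Gate a candidate operation mode on the device's operation parameters."""
    checks = [
        network_quality > min_network,      # enough network quality for the mode
        background_apps <= max_background,  # not too many competing applications
        battery_level > min_battery,        # enough remaining charge for the load
    ]
    return all(checks)
```

When this gate passes, the device would be considered able to support the candidate mode (game or viewing), and the operation mode could then be adjusted accordingly.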
Still further optionally, S102 (determining the target operation mode based on the usage parameter and the state parameter) may also be implemented as follows:
generating a voice broadcast when it is detected that the usage parameter meets a third preset condition and the state change value of the target user meets a fourth preset condition, wherein the voice broadcast is used to confirm the operation mode with the target user;
and determining the target operation mode when reply information generated in response to the voice broadcast is received.
When it is detected that the usage parameter meets the third preset condition and the state change value of the target user meets the fourth preset condition, in order to further determine the currently optimal operation mode for the user, the application may also generate a voice broadcast that confirms the operation mode with the target user. It can be appreciated that after the target device plays the voice broadcast, the target operation mode can be determined based on the user's reply to that broadcast.
Further, the present application does not specifically limit the content or form of the voice broadcast; any form is acceptable as long as the voice broadcast can be used to confirm the operation mode with the target user. Likewise, the reply information generated by the user is not specifically limited; for example, the reply information may be voice reply information, touch reply information, or verification-class reply information.
For example, when the mobile phone broadcasts a voice prompt asking whether to switch to the game operation mode now, the target user may reply directly by voice. Alternatively, the user may reply in a verification-class manner. For example, after the voice broadcast is played, the mobile phone may monitor whether its fingerprint collection device has collected fingerprint information from the user's touch. If the fingerprint information is successfully collected, it is uploaded to a server that has a data connection with the mobile phone, which checks whether it matches the pre-stored fingerprint information of each user. If matching target fingerprint information exists, the user corresponding to that fingerprint information can be confirmed; that is, the biometric information of the target user is considered to have been successfully matched against the target feature information in the database for the selected operation mode, and the user's identity is thereby verified. It is thus confirmed that the user agrees to switch the current operation mode to the game operation mode.
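The confirmation flow above can be sketched as below. The reply encoding, the accepted voice phrases, and the fingerprint matching rule (exact equality against a local table rather than a server round trip) are simplifying assumptions of this sketch.

```python
def confirm_switch(reply):
    """Interpret the user's reply to the voice broadcast.

    reply is either ("voice", text) or
    ("fingerprint", collected_print, stored_prints_by_user)."""
    kind = reply[0]
    if kind == "voice":
        # direct voice reply: accept a few affirmative phrases
        return reply[1].strip().lower() in ("yes", "ok", "switch")
    if kind == "fingerprint":
        _, collected, stored = reply
        # verification-class reply: match against each pre-stored fingerprint
        return any(collected == print_ for print_ in stored.values())
    return False
```

A successful match would be treated as the user agreeing to switch the current operation mode to the proposed target mode.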
It should be noted that the present application does not specifically limit the third preset condition or the fourth preset condition; each may be any event, and variations of these conditions do not affect the protection scope of the present application.
Further optionally, before S102 (determining the target operation mode based on the usage parameter and the state parameter), the following steps may be further performed:
acquiring time information in real time;
and acquiring the usage parameter of the target device and the state parameter of the target user when it is detected that the time information meets a fifth preset condition.
It can be understood that, because the user may have usage habits, the acquisition of the usage parameter of the target device and the state parameter of the target user may begin when the mobile phone detects that the current time point is a preset time point (that is, when the time information is detected to meet the fifth preset condition). It should be noted that the present application does not limit the fifth preset condition in any way; that is, the fifth preset condition may be any time range.
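A minimal sketch of this time gate follows, assuming the fifth preset condition is expressed as a daily time range; the window boundaries are illustrative only.

```python
from datetime import time

def should_acquire(now, start, end):
    """True when `now` falls inside the preset acquisition window
    (the fifth preset condition), including windows that wrap midnight."""
    if start <= end:
        return start <= now <= end
    # window wrapping midnight, e.g. 22:00 to 02:00
    return now >= start or now <= end
```

Only when `should_acquire` is true would the device start collecting the usage parameter and the state parameter, matching the habit-based trigger described above.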
In another embodiment of the present application, as shown in fig. 3, the present application further provides a device for selecting an operation mode. The device comprises an obtaining module 201, a determining module 202 and an adjusting module 203, wherein:
an obtaining module 201, configured to obtain a usage parameter of a target device and a state parameter of a target user, where the target user is a user using the target device;
a determining module 202, configured to determine a target operation mode based on the usage parameter and the state parameter;
an adjusting module 203 configured to adjust the operation mode of the target device to the target operation mode.
In the present application, after the usage parameter of the target device and the state parameter of the target user are obtained, the target operation mode can be determined based on the usage parameter and the state parameter, and the operation mode of the target device is then adjusted to the target operation mode. By applying the technical solution of the present application, the currently optimal operation mode can be confirmed in real time according to the current usage parameters of the smart device and the expression state parameters of the current user, and the operation mode of the smart device can be adjusted dynamically. This can solve the problem in the related art that device resources are consumed unnecessarily because the smart device always uses the same operation mode.
In another embodiment of the present application, the determining module 202 further includes:
the determining module 202 is configured to acquire a biological information image of the target user by using a camera device;
and the determining module 202 is configured to perform feature extraction on the biological information image of the target user by using a convolutional neural network model to obtain the state parameters of the target user.
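A heavily simplified sketch of the feature-extraction step is given below: one hand-written convolution filter followed by global average pooling, producing a single scalar used as the state parameter. A real convolutional neural network model would have many learned layers; everything here is an illustrative assumption.

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def state_parameter(image, kernel):
    """Global-average-pool the feature map into one scalar state parameter."""
    fmap = convolve2d(image, kernel)
    values = [v for row in fmap for v in row]
    return sum(values) / len(values)
```

In this toy setup, each facial information image would be reduced to one number per filter, and those numbers would stand in for the state parameters fed to the determining step.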
In another embodiment of the present application, the determining module 202 further includes:
a determining module 202, configured to determine the target operation mode when it is detected that the usage parameter satisfies a first preset condition and a state change value of the target user satisfies a second preset condition, where the state change value is a numerical value generated based on the state parameter.
In another embodiment of the present application, the determining module 202 further includes:
a determining module 202, configured to generate a voice broadcast when it is detected that the usage parameter meets a third preset condition and a state change value of the target user meets a fourth preset condition, where the voice broadcast is used to confirm an operation mode to the target user;
and when reply information generated based on the voice broadcast is received, determining the target operation mode.
In another embodiment of the present application, the device further includes an adjusting module 203, wherein:
an adjustment module 203 configured to acquire time information in real time;
an adjusting module 203, configured to acquire the usage parameter of the target device and the status parameter of the target user when it is detected that the time information satisfies a fifth preset condition.
In another embodiment of the present application, the usage parameters include at least any one of the following:
screen operating frequency, screen resolution, CPU utilization.
Fig. 4 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, electronic device 300 may include one or more of the following components: a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 302 is configured to store at least one instruction for execution by the processor 301 to implement the method of selecting an operation mode provided by the method embodiments of the present application.
In some embodiments, the electronic device 300 may further include: a peripheral interface 303 and at least one peripheral. The processor 301, memory 302 and peripheral interface 303 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 303 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, touch display screen 305, camera 306, audio circuitry 307, positioning components 308, and power supply 309.
The peripheral interface 303 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and peripheral interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the peripheral interface 303 may be implemented on a separate chip or circuit board, which is not limited by the embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to capture touch signals on or over the surface of the display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 305 may be one, providing the front panel of the electronic device 300; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the electronic device 300 or in a folded design; in still other embodiments, the display 305 may be a flexible display disposed on a curved surface or on a folded surface of the electronic device 300. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 306 is used to capture images or video. Optionally, camera assembly 306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 301 for processing or inputting the electric signals to the radio frequency circuit 304 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the electronic device 300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuitry 304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic location of the electronic device 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 309 is used to supply power to various components in the electronic device 300. The power source 309 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 300 also includes one or more sensors 310. The one or more sensors 310 include, but are not limited to: acceleration sensor 311, gyro sensor 312, pressure sensor 313, fingerprint sensor 314, optical sensor 315, and proximity sensor 316.
The acceleration sensor 311 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 300. For example, the acceleration sensor 311 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 301 may control the touch display screen 305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 312 may detect a body direction and a rotation angle of the electronic device 300, and the gyro sensor 312 and the acceleration sensor 311 may cooperate to acquire a 3D motion of the user on the electronic device 300. The processor 301 may implement the following functions according to the data collected by the gyro sensor 312: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 313 may be disposed on a side bezel of the electronic device 300 and/or an underlying layer of the touch display screen 305. When the pressure sensor 313 is arranged on the side frame of the electronic device 300, the holding signal of the user to the electronic device 300 can be detected, and the processor 301 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 313. When the pressure sensor 313 is disposed at the lower layer of the touch display screen 305, the processor 301 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 314 is used for collecting a fingerprint of the user, and the processor 301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 314, or the fingerprint sensor 314 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, processor 301 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 314 may be disposed on the front, back, or side of the electronic device 300. When a physical button or vendor Logo is provided on the electronic device 300, the fingerprint sensor 314 may be integrated with the physical button or vendor Logo.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the touch screen display 305 based on the ambient light intensity collected by the optical sensor 315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 305 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 305 is turned down. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera head assembly 306 according to the ambient light intensity collected by the optical sensor 315.
The proximity sensor 316, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 300. The proximity sensor 316 is used to capture the distance between the user and the front of the electronic device 300. In one embodiment, the processor 301 controls the touch display screen 305 to switch from the bright screen state to the dark screen state when the proximity sensor 316 detects that the distance between the user and the front surface of the electronic device 300 gradually decreases; when the proximity sensor 316 detects that the distance between the user and the front surface of the electronic device 300 is gradually increased, the processor 301 controls the touch display screen 305 to switch from the breath screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 4 is not intended to be limiting of electronic device 300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 302, including instructions executable by the processor 301 of the electronic device 300 to perform the method of selecting an operation mode described above, the method including: acquiring a usage parameter of a target device and a state parameter of a target user, wherein the target user is a user using the target device; determining a target operation mode based on the usage parameter and the state parameter; and adjusting the operation mode of the target device to the target operation mode. Optionally, the instructions may also be executable by the processor 301 of the electronic device 300 to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided an application/computer program product including one or more instructions executable by the processor 301 of the electronic device 300 to perform the above-described method of selecting an operation mode, the method including: acquiring a usage parameter of a target device and a state parameter of a target user, wherein the target user is a user using the target device; determining a target operation mode based on the usage parameter and the state parameter; and adjusting the operation mode of the target device to the target operation mode. Optionally, the instructions may also be executable by the processor 301 of the electronic device 300 to perform other steps involved in the exemplary embodiments described above. Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of selecting a mode of operation, comprising:
acquiring usage parameters of a target device and state parameters of a target user, wherein the target user is a user using the target device;
determining a target operation mode based on the use parameter and the state parameter;
and adjusting the operation mode of the target equipment to the target operation mode.
2. The method of claim 1, wherein the usage parameters comprise at least one of:
a screen operating frequency, a screen resolution, and a CPU utilization.
3. The method of claim 1 or 2, wherein the acquiring of the state parameters of the target user comprises:
acquiring a biological information image of the target user by using a camera device; and
performing feature extraction on the biological information image of the target user by using a convolutional neural network model to obtain the state parameters of the target user.
4. The method of claim 3, wherein the determining a target operation mode based on the usage parameters and the state parameters comprises:
when it is detected that the usage parameters meet a first preset condition and a state change value of the target user meets a second preset condition, determining the target operation mode, wherein the state change value is a value generated based on the state parameters.
5. The method of claim 4, wherein the determining a target operation mode based on the usage parameters and the state parameters comprises:
detecting operation parameters of the target device, wherein the operation parameters comprise network communication parameters, background application parameters, and/or battery level parameters of the target device; and
determining the target operation mode based on the usage parameters, the state parameters, and the operation parameters of the target device.
6. The method of claim 5, wherein the determining a target operation mode based on the usage parameters and the state parameters comprises:
when it is detected that the usage parameters meet a third preset condition and the state change value of the target user meets a fourth preset condition, generating a voice broadcast, wherein the voice broadcast is used for confirming the operation mode with the target user; and
when reply information generated in response to the voice broadcast is received, determining the target operation mode.
7. The method of claim 1, wherein before the determining a target operation mode based on the usage parameters and the state parameters, the method further comprises:
acquiring time information in real time; and
when it is detected that the time information meets a fifth preset condition, acquiring the usage parameters of the target device and the state parameters of the target user.
8. An apparatus for selecting an operation mode, comprising:
an acquisition module configured to acquire usage parameters of a target device and state parameters of a target user, the target user being a user using the target device;
a determination module configured to determine a target operation mode based on the usage parameters and the state parameters; and
an adjustment module configured to adjust an operation mode of the target device to the target operation mode.
9. An electronic device, comprising:
a memory for storing executable instructions; and
a processor coupled to the memory and configured to execute the executable instructions to perform the operations of the method of selecting an operation mode of any one of claims 1-7.
10. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the method of selecting an operation mode of any one of claims 1-7.
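As a rough illustration of the condition check recited in claims 4-6 — the usage parameters must meet a first preset condition, and a state change value derived from the state parameters must meet a second preset condition, before the target operation mode is determined — the following sketch uses hypothetical names, keys, and thresholds; none of them come from the patent itself.

```python
# Hypothetical sketch of the two preset conditions in claims 4-6.
# The keys "cpu_util" and "fatigue_score" and the threshold values
# are illustrative assumptions.

def state_change_value(prev_state, curr_state):
    # A value generated based on the state parameters: here, the
    # absolute change of a fatigue score between two observations.
    return abs(curr_state["fatigue_score"] - prev_state["fatigue_score"])

def should_switch_mode(usage, prev_state, curr_state,
                       cpu_threshold=0.8, change_threshold=0.2):
    # First preset condition: on the usage parameters...
    first = usage["cpu_util"] > cpu_threshold
    # ...second preset condition: on the state change value.
    second = state_change_value(prev_state, curr_state) > change_threshold
    return first and second

usage = {"cpu_util": 0.9}
prev = {"fatigue_score": 0.2}
curr = {"fatigue_score": 0.6}
print(should_switch_mode(usage, prev, curr))  # prints True
```

In this sketch the mode switch fires only when both conditions hold, mirroring the claim language; claim 6 would additionally gate the switch on the user's spoken confirmation of the voice broadcast.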
CN201911178690.9A 2019-11-27 2019-11-27 Method, device, electronic equipment and medium for selecting operation mode Active CN112860046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911178690.9A CN112860046B (en) 2019-11-27 2019-11-27 Method, device, electronic equipment and medium for selecting operation mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911178690.9A CN112860046B (en) 2019-11-27 2019-11-27 Method, device, electronic equipment and medium for selecting operation mode

Publications (2)

Publication Number Publication Date
CN112860046A true CN112860046A (en) 2021-05-28
CN112860046B CN112860046B (en) 2023-05-12

Family

ID=75985852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911178690.9A Active CN112860046B (en) 2019-11-27 2019-11-27 Method, device, electronic equipment and medium for selecting operation mode

Country Status (1)

Country Link
CN (1) CN112860046B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718043A (en) * 2014-12-18 2016-06-29 三星电子株式会社 Method And Apparatus For Controlling An Electronic Device
CN108791299A * 2018-05-16 2018-11-13 浙江零跑科技有限公司 Vision-based driving fatigue detection and early warning system and method
CN108960065A * 2018-06-01 2018-12-07 浙江零跑科技有限公司 Vision-based driving behavior detection method
CN110413239A * 2018-04-28 2019-11-05 腾讯科技(深圳)有限公司 Terminal setting parameter adjustment method, apparatus and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113777941A (en) * 2021-09-03 2021-12-10 珠海格力电器股份有限公司 Equipment operation control method, device, equipment and storage medium
CN113777941B (en) * 2021-09-03 2023-05-12 珠海格力电器股份有限公司 Equipment operation control method, device, equipment and storage medium
CN117055737A (en) * 2023-10-11 2023-11-14 天津市品茗科技有限公司 Human-computer interaction method and device based on AR device
CN117055737B (en) * 2023-10-11 2024-01-26 天津市品茗科技有限公司 Human-computer interaction method and device based on AR device

Also Published As

Publication number Publication date
CN112860046B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN110795236B (en) Method, device, electronic equipment and medium for adjusting capacity of server
CN111079012A (en) Live broadcast room recommendation method and device, storage medium and terminal
CN110149557B (en) Video playing method, device, terminal and storage medium
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN110321126B (en) Method and device for generating page code
WO2020211607A1 (en) Video generation method, apparatus, electronic device, and medium
CN112965683A (en) Volume adjusting method and device, electronic equipment and medium
CN111031170A (en) Method, apparatus, electronic device and medium for selecting communication mode
CN110990341A (en) Method, device, electronic equipment and medium for clearing data
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN110853124B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN110675473B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN113613028A (en) Live broadcast data processing method, device, terminal, server and storage medium
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN111796990A (en) Resource display method, device, terminal and storage medium
CN112860046B (en) Method, device, electronic equipment and medium for selecting operation mode
CN111341317B (en) Method, device, electronic equipment and medium for evaluating wake-up audio data
CN110891181B (en) Live broadcast picture display method and device, storage medium and terminal
CN112100528A (en) Method, device, equipment and medium for training search result scoring model
CN112732133B (en) Message processing method and device, electronic equipment and storage medium
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers
CN111210001A (en) Method and device for adjusting seat, electronic equipment and medium
CN111010732A (en) Network registration method, device, electronic equipment and medium
CN111158780A (en) Method, device, electronic equipment and medium for storing application data
CN110798572A (en) Method, device, electronic equipment and medium for lighting screen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant