CN117251219A - Multi-system switching method and device based on scene recognition and PC host - Google Patents


Info

Publication number
CN117251219A
Authority
CN
China
Prior art keywords
scene
probability
entertainment
user
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311305750.5A
Other languages
Chinese (zh)
Other versions
CN117251219B (en)
Inventor
温宝珍 (Wen Baozhen)
姜瑞静 (Jiang Ruijing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Meigao Electronic Equipment Co., Ltd.
Original Assignee
Shenzhen Meigao Electronic Equipment Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Meigao Electronic Equipment Co., Ltd.
Priority to CN202311305750.5A
Publication of CN117251219A
Application granted
Publication of CN117251219B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/4401 — Bootstrapping
    • G06F 9/4406 — Loading of operating system
    • G06F 9/441 — Multiboot arrangements, i.e. selecting an operating system to be loaded
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 — Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 — User authentication
    • G06F 21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 — Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/809 — Fusion, i.e. combining data from various sources at the classification level of classification results, e.g. where the classifiers operate on the same input data
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/35 — Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/36 — Indoor scenes
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 — Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of intelligent recognition, and in particular to a multi-system switching method and device based on scene recognition, and a PC host. In the method, the camera is activated when the computer starts to capture an image of the usage environment in real time, and the scene features and face features in the image are then extracted. Scene recognition is performed on the scene features and face recognition on the face features to obtain the predicted scene type and user identity information. Whether the current usage scene is a work scene or an entertainment scene for the user is determined from the user identity information, and the computer is controlled to load the operating system for the corresponding scene, so that the current scene is recognized intelligently and the matching operating system is selected for startup.

Description

Multi-system switching method and device based on scene recognition and PC host
Technical Field
The application relates to the technical field of intelligent recognition, in particular to a multi-system switching method and device based on scene recognition and a PC host.
Background
Computer technology is widely applied across many fields and serves every aspect of production and daily life. To accommodate different usage scenarios and requirements, a computer may be equipped with several operating systems. In the prior art, however, switching among a computer's multiple operating systems remains problematic.
During the boot process, the BIOS reads the default boot device and the default operating system, and boots the computer directly into that default operating system (e.g., a Windows system). In some shared-office environments, a computer typically has two operating systems installed, each carrying different everyday software so as to serve users' distinct needs for work or entertainment. On startup, however, the BIOS still reads only one of the default operating systems; switching to the other requires manual boot selection, and no intelligent recognition is possible. This is inconvenient for users, and the situation calls for improvement.
Disclosure of Invention
To solve the problem that existing computers with multiple operating systems require manual boot selection at startup and cannot perform intelligent recognition, the application provides a multi-system switching method and device based on scene recognition, and a PC host, adopting the following technical solution:
in a first aspect, the present application provides a multi-system switching method based on scene recognition, including the following steps:
when the computer is started, the camera is activated to shoot the image of the use environment in real time;
extracting scene characteristics in the image, inputting the scene characteristics into a preset environment scene recognition model, and obtaining a predicted scene type of a use environment;
extracting face features in the image, inputting the face features into a preset face recognition model, and obtaining user identity information;
determining a current use scene by combining the predicted scene type and the user identity information, wherein the current use scene comprises a work scene and an entertainment scene;
if the current usage scene is determined to be a work scene, reading the boot information of the default work-scene operating system and boot-loading the work-scene operating system;
if the current usage scene is determined to be an entertainment scene, reading the boot information of the default entertainment-scene operating system and boot-loading the entertainment-scene operating system.
By adopting the above technical solution, the application activates the camera at computer startup to capture an image of the usage environment in real time and then extracts the scene features and face features in the image; scene recognition is performed on the scene features and face recognition on the face features to obtain the predicted scene type and user identity information. Whether the current usage scene is a work scene or an entertainment scene for this user is determined from the user identity information, and the computer is controlled to boot the operating system for the corresponding usage scene. The current scene is thereby recognized intelligently and the matching operating system is selected for startup, so the user need not perform manual boot selection, which is convenient.
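The decision flow described above can be sketched as a minimal Python example (the boot-entry names and the `usage_scene` labels are illustrative assumptions, not identifiers from the application):

```python
def select_operating_system(usage_scene: str) -> str:
    """Map the recognized usage scene to the boot entry to load.

    The returned strings are placeholder names for the boot information
    of each default operating system; a real implementation would hand
    them to the bootloader.
    """
    if usage_scene == "work":
        return "work_os_boot_entry"
    if usage_scene == "entertainment":
        return "entertainment_os_boot_entry"
    raise ValueError(f"unknown usage scene: {usage_scene!r}")
```
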
Optionally, determining whether the current usage scene is a work scene or an entertainment scene by combining the predicted scene type and the user identity information includes the following steps:
acquiring a probability set of each scene type in the predicted scene types;
acquiring an identity set of each identity in the user identity information;
inputting the probability set and the identity set into a fusion model to obtain a model output result;
and determining whether the current use scene belongs to a working scene or an entertainment scene according to the model output result.
By adopting the above technical solution, the method obtains a probability set over the predicted scene types and an identity set over the user identity information, inputs both into a fusion model to obtain a model output result, and determines from that result whether the current usage scene is a work scene or an entertainment scene for the user. Through fusion judgment based on the probability set of predicted scene types and the identity set of user identities, the type of the user's current usage scene can be determined accurately and the corresponding operating system booted.
Optionally, the process of inputting the probability set and the identity set into the fusion model to obtain the model output result includes the following steps:
calculating joint probability distribution of scene types and identity features according to the probability set and the identity set;
calculating the usage-scene probability of the user under each scene type according to the joint probability distribution, wherein the usage scene comprises a work scene or an entertainment scene;
and sorting all the usage-scene probabilities, determining the usage scene with the largest probability as the current usage scene, and obtaining the model output result.
By adopting the above technical solution, the method calculates the joint probability distribution of scene types and identity features from the probability set and the identity set, then calculates the user's usage-scene probability under each scene type from the joint probability distribution, sorts the usage-scene probabilities, and determines the usage scene with the largest probability as the current usage scene, yielding the model output result; the current usage scene for the user is thereby determined more accurately.
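The "sort and take the largest" step of the fusion model can be sketched as follows (a minimal illustration; the dictionary of usage-scene probabilities is assumed to come from the joint-probability calculation described in the text):

```python
def pick_current_scene(usage_scene_probs: dict) -> str:
    """Return the usage scene with the largest probability,
    i.e. the model output result described in the text."""
    return max(usage_scene_probs, key=usage_scene_probs.get)
```
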
Optionally, the joint probability distribution is calculated according to the following formula: P(Si, Uj) = P(Si|Uj) × P(Uj);
where Si represents the scene type, Uj represents the identity, P(Si|Uj) represents the probability of scene Si given identity Uj, and P(Uj) represents the probability of identity Uj.
Optionally, the usage-scene probability is calculated as P(Scene|Si, Uj) = P(Si, Uj) / P(Si);
where Scene represents the usage-scene type, P(Si, Uj) represents the joint probability of Si and Uj, and P(Si) represents the probability of scene Si.
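The two formulas above can be transcribed directly into code (variable names are illustrative):

```python
def joint_probability(p_scene_given_identity: float, p_identity: float) -> float:
    """P(Si, Uj) = P(Si|Uj) * P(Uj)."""
    return p_scene_given_identity * p_identity


def usage_scene_probability(p_joint: float, p_scene: float) -> float:
    """P(Scene|Si, Uj) = P(Si, Uj) / P(Si)."""
    return p_joint / p_scene
```
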
Optionally, the method further comprises:
acquiring historical use habits of the user;
and adjusting the joint probability distribution according to the historical usage habit.
By adopting the above technical solution, the application further obtains the user's historical usage habits and adjusts the joint probability distribution according to them, so that the current usage scene is obtained more accurately based on the user's habits.
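One way to realize this adjustment is to blend the model's scene probabilities with the empirical frequencies of the user's past choices. The blend factor `alpha` and the record format are assumptions, since the application does not specify how the distribution is adjusted:

```python
def adjust_for_history(scene_probs: dict, history_counts: dict,
                       alpha: float = 0.3) -> dict:
    """Blend model probabilities with the user's historical choice
    frequencies. `history_counts` maps a usage scene ('work' or
    'entertainment') to how often the user chose it in the past."""
    total = sum(history_counts.values()) or 1
    return {
        scene: (1 - alpha) * p + alpha * history_counts.get(scene, 0) / total
        for scene, p in scene_probs.items()
    }
```
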
Optionally, the method further comprises:
identifying a current emotional state of the user according to the face features;
and adjusting the joint probability distribution according to the current emotion state.
By adopting the above technical solution, the current emotional state of the user is identified and the joint probability distribution is adjusted accordingly, so that the current usage scene is obtained more accurately based on the user's current state.
Optionally, the process of adjusting the joint probability distribution according to the current emotional state includes the following steps:
determining whether the current emotional state is a positive emotion or a negative emotion;
when the current emotional state is determined to be a positive emotion, setting the work-scene probability weighting value to 0.1 and the entertainment-scene probability weighting value to 0.3;
when the current emotional state is determined to be a negative emotion, setting the work-scene probability weighting value to 0.3 and the entertainment-scene probability weighting value to 0.1;
and adjusting the work-scene probability and the entertainment-scene probability in the joint probability distribution according to the respective weighting values.
By adopting the above technical solution, the application specifies that when the user's current state is determined to be a positive emotion, the work-scene probability weighting value is set to 0.1 and the entertainment-scene probability weighting value to 0.3; conversely, when the current emotional state is a negative emotion, the work-scene weighting value is 0.3 and the entertainment-scene weighting value is 0.1. The work-scene and entertainment-scene probabilities in the joint probability distribution are then adjusted according to these weighting values.
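The emotion-based adjustment can be sketched as follows. Adding the weighting value to each scene probability and renormalizing is an assumption: the application gives the weighting values but does not state exactly how they are combined with the distribution:

```python
# Weighting values taken from the text: a positive emotion favors
# entertainment, a negative emotion favors work.
EMOTION_WEIGHTS = {
    "positive": {"work": 0.1, "entertainment": 0.3},
    "negative": {"work": 0.3, "entertainment": 0.1},
}


def adjust_for_emotion(scene_probs: dict, emotion: str) -> dict:
    """Add the per-scene weighting value for the detected emotion,
    then renormalize so the adjusted values sum to 1."""
    weights = EMOTION_WEIGHTS[emotion]
    adjusted = {s: p + weights.get(s, 0.0) for s, p in scene_probs.items()}
    total = sum(adjusted.values())
    return {s: v / total for s, v in adjusted.items()}
```
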
In a second aspect, the present application provides a multi-system switching device based on scene recognition, the device comprising:
the shooting module is used for activating the camera to shoot the image of the use environment in real time when the computer is started;
the predicted scene type acquisition module is used for extracting scene characteristics in the image, inputting the scene characteristics into a preset environment scene recognition model and acquiring the predicted scene type of the use environment;
the user identity information acquisition module is used for extracting face features in the image, inputting the face features into a preset face recognition model and acquiring user identity information;
the current use scene determining module is used for determining a current use scene by combining the predicted scene type and the user identity information, wherein the current use scene comprises a working scene and an entertainment scene;
the first starting module is used for reading the guide information of the default working scene operating system and guiding and loading the starting of the working scene operating system if the current use scene is determined to belong to the working scene;
and the second starting module is used for reading the guide information of the default entertainment scene operating system and guiding and loading the entertainment scene operating system to start if the current use scene is determined to belong to the entertainment scene.
In a third aspect, the present application provides a PC host, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the above scene-recognition-based multi-system switching method when executing the computer program.
In summary, the present application includes the following beneficial technical effects:
according to the method and the device, the camera is activated to shoot the image of the use environment in real time when the computer is started, then the scene features and the face features in the image are extracted, so that scene recognition is performed according to the scene features, face recognition is performed according to the face features, the predicted scene type and user identity information are obtained, whether the current use scene belongs to a working scene or an entertainment scene for the user is determined according to the user identity information, and therefore the computer is controlled to read the starting of the operating system of the corresponding scene, the intelligent recognition of the starting of different operating systems selected by the current scene is achieved, the user does not need to manually select and guide the operating system, and the use of the user is facilitated.
Drawings
FIG. 1 is an exemplary flow chart of a scene recognition based multi-system handoff method in accordance with an embodiment of the present application;
FIG. 2 is an exemplary flow chart for adjusting joint probability distributions in accordance with an embodiment of the present application;
FIG. 3 is a schematic block diagram of a multi-system switching device based on scene recognition according to an embodiment of the present application;
fig. 4 is an internal structural diagram of a computer device according to an embodiment of the present application.
Description of the embodiments
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application is intended to encompass any and all possible combinations of one or more of the listed items.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
To keep work data secure, some technicians place work and entertainment in separate operating systems. During the boot process of an existing computer, the BIOS reads the default boot device and the default operating system and boots the computer directly into that default operating system (e.g., a Windows system); switching to a different operating system requires manual boot selection. The computer cannot intelligently recognize the current scene and select an operating system to start accordingly, which is inconvenient for the user.
The application provides a multi-system switching method and device based on scene recognition, and a PC host, which activate the camera at computer startup to capture an image of the usage environment in real time, extract the scene features and face features in the image, and, through the dual judgment of scene type and identity information, intelligently recognize the current scene and select the corresponding operating system to start.
Embodiments of the present application are described in further detail below with reference to the drawings attached hereto.
The embodiment of the application provides a multi-system switching method based on scene recognition that is executed by an electronic device. The electronic device may be a server or a terminal device; the server may be an independent physical server, a server cluster or distributed system composed of several physical servers, or a cloud server providing cloud computing services. In this embodiment, the terminal device is a PC host or a mini PC host, but is not limited thereto and may also be a smart tablet, a computer, or the like. The terminal device and the server may be connected directly or indirectly by wired or wireless communication, which the embodiment of the present application does not limit.
The mini PC host adopts an x86 architecture and is characterized by small size, low power consumption, and low price, while achieving performance comparable to a conventional computer. Because a mini PC host is compact, easy to carry and relocate, and well suited to shared offices, it is common in some office areas for one mini PC to be shared by different teams. To conveniently provide suitable interfaces and everyday software to users with different purposes, two operating systems are generally installed on the mini PC, each with different everyday software, to serve users' distinct needs for work or entertainment. The computer must then switch between work and entertainment scenes for different users, so the multi-system switching method based on scene recognition is needed to recognize the mini PC host's current usage environment and start the corresponding operating system.
The embodiment of the application discloses a multi-system switching method based on scene recognition. Referring to fig. 1, fig. 1 is an exemplary flowchart of a multi-system handover method based on scene recognition according to an embodiment of the present application.
A multi-system switching method based on scene recognition comprises the following steps:
s110, when the computer is started, the camera is activated to shoot the image of the use environment in real time.
The PC host is provided with a camera, an activation instruction of the camera is associated with a starting instruction of the PC host, and when the PC host is started, the camera is activated to shoot an image of a use environment in real time.
It will be appreciated that after the camera is activated, it may rotate through an angle to capture a wider view of the usage environment, and the captured result may comprise multiple images.
S120, extracting scene features in the image, inputting the scene features into a preset environment scene recognition model, and obtaining the predicted scene type of the use environment.
The training step of the environment scene recognition model comprises the following steps:
s121, preprocessing the image.
The preprocessing of the image comprises cropping, scaling, normalization, image-quality improvement, noise removal, and the like, making the image data suitable for input to a neural network for training and inference; such preprocessing is a common practice in computer vision.
S122, extracting low-level features of the image by using an image processing method, wherein the low-level features comprise color features, texture features and shape features.
Low-level features such as the color, texture, and shape features of the image reflect the objects present in the environment image, allowing a preliminary identification of the environment.
S123, extracting semantic features of the image by using a deep learning method, wherein feature mapping of a convolutional neural network layer can be adopted as scene features.
In computer vision, semantic features are also called high-level features: abstract features that require higher-level cognition to understand when perceiving the world, and that may vary with subjective cognition. In this embodiment, the image is classified by a deep learning method to determine the environmental scene it reflects, and the feature maps of a convolutional neural network may be used as the scene features.
For example, for an office environment, the low-level features are the colors, shapes, and so on of the desks and office equipment in the image, while the high-level features allow recognizing that the environment in the current image is an office.
S124, fusing the low-level features and the semantic features into image integral features, constructing a deep learning model, and training by using an image data set of the annotation scene category to obtain an environment scene recognition model.
The low-level features and the semantic features are fused into the overall features of the image, from which a deep learning model of the environmental scene is built. In this embodiment, a Places database dataset is used for training, finally yielding the environment scene recognition model.
In actual use, the image is preprocessed and the scene features extracted; the scene features are then input into the environment scene recognition model to obtain a scene-category probability distribution, for example, a 20% probability of a laboratory scene, a 30% probability of an office scene, and so on.
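A classifier of this kind typically produces such a category distribution by applying a softmax over its raw output scores; the sketch below shows the conversion (the logit values in the test are illustrative, not taken from the application):

```python
import math


def softmax(logits: list) -> list:
    """Convert raw classifier scores into a scene-category probability
    distribution (e.g. 20% laboratory, 30% office, ...)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```
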
S130, extracting face features in the image, inputting the face features into a preset face recognition model, and obtaining user identity information.
The user identity information includes occupation, gender, and age. A face database is preset, containing for each user an identifier, face information, and the corresponding occupation, gender, and age. After the face features are obtained, the face recognition model identifies the user corresponding to those features, obtains the user identifier, and retrieves the user identity information such as occupation, gender, and age according to that identifier.
Face recognition technology is mature and is not described in detail here. Notably, it is used only to determine the user's occupation and to obtain gender and age information: real-name information stays hidden while the key attributes the model needs, such as age and gender, are retained, which satisfies the privacy-protection requirements of shared-office users.
It can be understood that when the face features captured at PC host startup are unclear, the camera tracks the user in the frame in real time to obtain more face features. When several users appear in the frame, the camera tracks in real time the user who occupies the largest proportion of the frame.
In some optional embodiments, the PC host further includes several sound-receiving modules disposed in different directions on the PC host. When no user appears in the captured image and the sound-receiving modules pick up a sound, the direction of the sound source is determined from the times at which each module received the sound, and the camera is then controlled to capture an image in the direction of the sound source.
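A two-microphone simplification of this time-of-arrival comparison is sketched below (the full design uses several sound-receiving modules; the function is a hypothetical illustration, not the application's implementation):

```python
def sound_source_side(t_left: float, t_right: float) -> str:
    """Infer which side a sound came from by comparing the times at
    which two microphones received it: the earlier arrival is closer
    to the source."""
    if t_left < t_right:
        return "left"
    if t_right < t_left:
        return "right"
    return "center"
```
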
When sufficient face features cannot be captured, the PC host waits in standby until sufficient face features are captured.
In some alternative embodiments, when user identity information cannot be obtained from the face features, the system may pop up a prompt box stating that no registered user was detected and asking whether to temporarily add a new user. If so, a user registration interface opens and prompts the user to enter registration information, including necessary identity information such as name and employee number; face data of the new user is then collected, the generated face features are entered into the face recognition model, and the new user's information is stored in the user database. Face recognition is then performed again to determine the new user's identity and obtain the new user's identity information.
It can be appreciated that if no new user is added, the system can be configured to load only the default entertainment-scene operating system; and when users are added, an administrator can be designated to review registrations, preventing unauthorized users from registering.
In some embodiments, when user identity information cannot be obtained from the face features, a text message may be sent to a preset administrator of the host, enabling early discovery if the host is stolen.
S140, combining the predicted scene type and the user identity information to determine the current use scene.
Wherein the current usage scenario includes a work scenario or an entertainment scenario.
Step S140 includes the following steps:
s141, acquiring probability sets of each scene type in the predicted scene types.
Let the scene-type probabilities predicted from the image be P = [p1, p2, …, pn], where pi represents the probability of predicting scene i.
S142, acquiring an identity set of each identity in the user identity information.
Let U = [u1, u2, …, um], where uj represents user identity j.
S143, inputting the probability set and the identity set into the fusion model to obtain a model output result.
The joint probability distribution of scene types and identity features is calculated from the probability set and the identity set. Specifically, the joint probability distribution of scene i and user identity j is defined as P(Si, Uj) = P(Si|Uj) × P(Uj), and by Bayes' theorem the usage-scene probability of the user in a given scene is obtained as P(Scene|Si, Uj) = P(Si, Uj) / P(Si), where Scene represents the usage-scene type (work or entertainment), P(Si, Uj) represents the joint probability of Si and Uj, and P(Si) represents the probability of scene Si. All usage-scene probabilities are then sorted, and the usage scene with the largest probability is determined as the current usage scene, yielding the model output result.
The following example illustrates the calculation when only the user's occupational characteristics are considered. Assume the following scenes and user: scene 1 is an office area, scene 2 is a rest area, and scene 3 is an entertainment area; the scene recognition result is P = [0.7, 0.2, 0.1], meaning the probability of scene 1 is 0.7, of scene 2 is 0.2, and of scene 3 is 0.1.
The user identity information is U = [programmer, male, 30 years old].
A usage scene probability is then calculated for each scene:
P(work|scene 1, U) = 0.8 × 0.9 / 0.7 ≈ 1.03;
where 0.8 is the conditional probability that a programmer works in scene 1 (the office area); 0.9 is the programmer's user characteristic probability, i.e., the probability that the programmer group chooses work is 0.9 and chooses entertainment is 0.1; and 0.7 is the probability of scene 1.
Specifically, data must be accumulated before the method is applied. In the data accumulation stage, each time a user uses the PC host, the system identifies the user's occupation and the current usage environment, the user then selects which operating system to start, and the operating system the user finally starts is recorded to form statistical data. The statistical data comprise the user's occupation, the usage environment scene, and the operating system selected by the user, from which the conditional probability that a user of that occupation works, and the conditional probability that a user of that occupation entertains, in that usage environment scene are obtained. When a subsequent user uses the PC host, the user's occupation and environment scene are identified, and the conditional probability that a user of that occupation works or entertains in that environment scene is looked up.
It is understood that the user characteristic probability is the probability, considering only the user's occupation, that a user of that occupation selects work or entertainment when the PC host is started. Likewise, each time a user uses the PC host, the system identifies the user's occupation and records the operating system the user finally starts, forming statistical data and a lookup table mapping each occupation to the probability of each operating system being selected. For example, a programmer selects the work system with probability 0.9 and the entertainment system with probability 0.1. When a subsequent user uses the PC host, once the user's occupation is identified, the user characteristic probability of work or entertainment for that occupation is obtained.
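A minimal sketch of this data accumulation stage, under the assumption that each boot is logged as an (occupation, scene, chosen system) record; the record values are invented for illustration:

```python
# Estimate the user characteristic probability P(work | occupation) from
# logged boot records, as described above. The records are illustrative.
from collections import Counter

records = [
    ("programmer", "office area", "work"),
    ("programmer", "office area", "work"),
    ("programmer", "rest area", "work"),
    ("programmer", "entertainment area", "entertainment"),
]

def characteristic_prob(records, occupation):
    """Return (P(work), P(entertainment)) for the given occupation."""
    picks = Counter(chosen for occ, _scene, chosen in records if occ == occupation)
    total = sum(picks.values())
    return picks["work"] / total, picks["entertainment"] / total

p_work, p_play = characteristic_prob(records, "programmer")  # (0.75, 0.25)
```

The same counting, grouped additionally by the scene field, would yield the per-scene conditional probabilities used in the fusion step.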
P(entertainment|scene 1, U) = 0.2 × 0.1 / 0.7 ≈ 0.03;
P(work|scene 2, U) = 0.6 × 0.9 / 0.2 = 2.7;
P(entertainment|scene 2, U) = 0.4 × 0.1 / 0.2 = 0.2;
P(work|scene 3, U) = 0.3 × 0.9 / 0.1 = 2.7;
P(entertainment|scene 3, U) = 0.7 × 0.1 / 0.1 = 0.7.
By the same method, the probabilities that a user of this occupation selects the entertainment operating system or the work operating system in other usage environment scenes can likewise be obtained from the statistical data.
Finally, the usage scene probabilities of the respective scenes are compared to determine the current usage scene. In the example above, the largest usage scene probability, P(work|scene 2, U) = 2.7, corresponds to work, so the current usage scene is finally determined to be the work scene.
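The worked example above can be reproduced in a short script; the conditional probabilities are the assumed values from the example.

```python
# Recompute the six usage scene scores of the worked example and take the
# argmax. Both maximal scores (scene 2 and scene 3, each 2.7) favor "work",
# so the decision is the work scene either way.
P = {"scene1": 0.7, "scene2": 0.2, "scene3": 0.1}   # scene recognition result
prior = {"work": 0.9, "entertainment": 0.1}         # programmer's characteristic probability
cond = {                                            # assumed P(Si | occupation works/plays)
    ("work", "scene1"): 0.8, ("entertainment", "scene1"): 0.2,
    ("work", "scene2"): 0.6, ("entertainment", "scene2"): 0.4,
    ("work", "scene3"): 0.3, ("entertainment", "scene3"): 0.7,
}

scores = {key: cond[key] * prior[key[0]] / P[key[1]] for key in cond}
usage, scene = max(scores, key=scores.get)          # usage == "work"
```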
It should be noted that the above example uses only the user's occupational characteristics as the basis for calculating the joint probability distribution; in some embodiments the user's age and gender may be added, further refining the joint probability distribution according to the user characteristic probabilities of different ages and genders.
Specifically, in some embodiments, when compiling the statistics, the probabilities that historical users in different age groups finally select work or entertainment are counted to obtain each age group's tendency toward work and entertainment; a first weighting value is set according to this tendency, and the user characteristic probability of the user's occupation is weighted according to the first weighting value. Here, selecting work means finally booting the work scene operating system, and selecting entertainment means finally booting the entertainment scene operating system.
Similarly, in some embodiments, when compiling the statistics, the probabilities that historical users of different genders finally select work or entertainment are counted to obtain each gender's tendency toward work and entertainment; a second weighting value is set according to this tendency, and the user characteristic probability of the user's occupation is weighted according to the second weighting value.
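A sketch of how the first (age group) and second (gender) weighting values might be applied to the occupation-based user characteristic probability; the weight values and the renormalization are assumptions, since the embodiment does not fix them.

```python
# Weight the occupation-based probability of choosing work by an age-group
# tendency (first weighting value) and a gender tendency (second weighting
# value), then renormalize so work/entertainment still sum to 1.
def weighted_characteristic(p_work, age_weight, gender_weight):
    w = p_work * age_weight * gender_weight   # weighted work side
    e = 1.0 - p_work                          # entertainment side, unweighted in this sketch
    total = w + e
    return w / total, e / total

# e.g. an age group and a gender that both lean slightly toward work
p_work, p_play = weighted_characteristic(0.9, 1.1, 1.05)
```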
Through the above method, after data statistics, the conditional probability and user characteristic probability of work or entertainment for each occupation can be determined and a model can be built. When a user subsequently uses the PC host, even if the user has never used it before, the PC host can, through the model, directly determine from the user's identity features and the current environment scene which system the user most probably wants, and boot either the work scene operating system or the entertainment scene operating system, realizing intelligent startup.
S144, determining the current use scene according to the model output result.
Among them, the current usage scenario includes a work scenario and an entertainment scenario.
And S150, if the current use scene is determined to belong to the working scene, reading the guide information of the default working scene operating system, and starting the boot loading working scene operating system.
The work scene operating system refers to an operating system pre-installed with common software specially adapted to the user's work requirements.
And S160, if the current use scene is determined to belong to the entertainment scene, reading the guide information of the default entertainment scene operating system, and starting the guide loading entertainment scene operating system.
The entertainment scene operating system is an operating system pre-installed with common software specially adapted to the user's entertainment requirements. The work scene operating system and the entertainment scene operating system may both be Windows systems or both be Linux systems, or may be systems built on different underlying principles, for example one a Windows system and the other a Linux system.
In some embodiments, referring to fig. 2, fig. 2 is an exemplary flowchart of adjusting joint probability distribution according to an embodiment of the present application, where the method further includes:
s210, acquiring historical use habits of the user.
S220, adjusting the joint probability distribution according to the historical use habit.
For a user with a long usage history on the PC host, the user characteristic probability can be adjusted according to the user's historically counted work and entertainment probabilities, thereby adjusting the final joint probability distribution and obtaining a more accurate current usage scene.
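One plausible way to realize this adjustment is to blend the population-level characteristic probability with the individual's own history; the blend factor `alpha` and the function shape are assumptions for illustration.

```python
# Personalize the user characteristic probability: a frequent user's own
# work/entertainment history outweighs the population statistics.
def personalized_prior(population_p_work, user_work_count, user_total, alpha=0.7):
    if user_total == 0:
        return population_p_work              # new user: population statistics only
    user_rate = user_work_count / user_total  # this user's observed work rate
    return alpha * user_rate + (1 - alpha) * population_p_work

p = personalized_prior(0.9, 3, 10)  # a user who mostly entertains pulls the prior down
```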
S230, recognizing the current emotion state of the user according to the face features.
The user's current emotional state is classified as a positive emotion or a negative emotion; different weighting values are set for the work scene probability and the entertainment scene probability depending on the emotional state, so that the recognized current usage scene better matches the user's current mood.
S240, adjusting the joint probability distribution according to the current emotion state.
Specifically, in this embodiment, it is first determined whether the current emotional state is a positive emotion or a negative emotion; when the current emotion state is determined to be the positive emotion, determining that the probability weighting value of the working scene is 0.1, and determining that the probability weighting value of the entertainment scene is 0.3; when the current emotion state is determined to be a negative emotion, determining that the probability weighting value of the working scene is 0.3, and determining that the probability weighting value of the entertainment scene is 0.1; and adjusting the working scene probability and the entertainment scene probability in the joint probability distribution according to the working scene probability weighted value and the entertainment scene probability weighted value.
For example, when the user's current emotion is positive, P′(work scene) = P(work scene) × (1 + 0.1) and P′(entertainment scene) = P(entertainment scene) × (1 + 0.3). The probability of the corresponding scene is thus intuitively strengthened or weakened according to the emotion, integrating the influence of the user's emotional state into the system's decision.
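The emotion adjustment above, with the 0.1/0.3 weighting values of this embodiment, can be sketched as:

```python
# Scale the work and entertainment scene probabilities by (1 + weight),
# where a positive emotion boosts entertainment more and a negative emotion
# boosts work more, per the weighting values in this embodiment.
def adjust_for_emotion(p_work, p_play, emotion):
    if emotion == "positive":
        w_work, w_play = 0.1, 0.3
    else:  # negative emotion
        w_work, w_play = 0.3, 0.1
    return p_work * (1 + w_work), p_play * (1 + w_play)

pw, pp = adjust_for_emotion(0.6, 0.4, "positive")
```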
The implementation principle of the multi-system switching method based on scene recognition in the embodiment of the application is as follows. When the computer is started, the camera is activated to capture an image of the usage environment in real time; scene features and face features are then extracted from the image, scene recognition is performed on the scene features and face recognition on the face features, yielding the predicted scene type and the user identity information. From these it is determined whether the current usage scene is a work scene or an entertainment scene, and the computer is controlled to read and boot the operating system for that scene. The current scene is thus recognized intelligently and the corresponding operating system startup is selected.
In a second aspect, the present application provides a multi-system switching device based on scene recognition, and the multi-system switching device based on scene recognition of the present application is described below in conjunction with the above multi-system switching method based on scene recognition. Referring to fig. 3, fig. 3 is a schematic block diagram of a multi-system switching device based on scene recognition according to an embodiment of the present application.
A multi-system switching device based on scene recognition, the device comprising:
the shooting module 310 is used for activating the camera to shoot the image of the use environment in real time when the computer is started;
the predicted scene type obtaining module 320 is configured to extract scene features in the image, input the scene features into a preset environmental scene recognition model, and obtain a predicted scene type of the use environment;
the user identity information obtaining module 330 is configured to extract a face feature in an image, input the face feature into a preset face recognition model, and obtain user identity information;
the current usage scenario determining module 340 is configured to determine whether the current usage scenario belongs to a working scenario or an entertainment scenario in combination with the predicted scenario type and the user identity information;
the first starting module 350 is configured to, if it is determined that the current usage scenario belongs to a working scenario, read, by the computer, guide information of a default working scenario operating system, and guide the computer to load the working scenario operating system for starting;
the second starting module 360 is configured to, if it is determined that the current usage scenario belongs to an entertainment scenario, read, by the computer, guiding information of a default entertainment scenario operating system, and guide the computer to load the entertainment scenario operating system for starting.
In one embodiment, the present application provides a PC host, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the above-described scene recognition-based multi-system handoff method when executing the computer program.
In one embodiment, the present application provides a computer device, which may be a server, whose internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a multi-system switching method based on scene recognition.
Those skilled in the art will appreciate that the structure shown in FIG. 4 is a block diagram only and does not constitute a limitation on the computer device to which the present solution applies; a particular computer device may include more or fewer components than those shown, combine some of the components, or arrange the components differently.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration, and not limitation, RAM can take various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM).
The foregoing are all preferred embodiments of the present application, and are not intended to limit the scope of the present application in any way, therefore: all equivalent changes in structure, shape and principle of this application should be covered in the protection scope of this application.

Claims (10)

1. The multi-system switching method based on scene recognition is characterized by comprising the following steps:
when the computer is started, the camera is activated to shoot the image of the use environment in real time;
extracting scene characteristics in the image, inputting the scene characteristics into a preset environment scene recognition model, and obtaining a predicted scene type of a use environment;
extracting face features in the image, inputting the face features into a preset face recognition model, and obtaining user identity information;
determining a current use scene by combining the predicted scene type and the user identity information, wherein the current use scene comprises a work scene and an entertainment scene;
if the current use scene is determined to belong to a working scene, reading the guide information of a default working scene operating system, and starting the working scene operating system in a guide loading mode;
if the current use scene is determined to belong to the entertainment scene, the guiding information of a default entertainment scene operating system is read, and the entertainment scene operating system is guided and loaded for starting.
2. The scene recognition-based multi-system switching method according to claim 1, wherein determining whether the current usage scene belongs to a work scene or an entertainment scene by combining the predicted scene type and the user identity information comprises the following steps:
acquiring a probability set of each scene type in the predicted scene types;
acquiring an identity set of each identity in the user identity information;
inputting the probability set and the identity set into a fusion model to obtain a model output result;
and determining whether the current use scene belongs to a working scene or an entertainment scene according to the model output result.
3. The scene recognition-based multisystem switching method according to claim 2, wherein the process of inputting the probability set and the identity set into a fusion model to obtain a model output result comprises the following steps:
calculating joint probability distribution of scene types and identity features according to the probability set and the identity set;
calculating a usage scene probability of the user under each scene type according to the joint probability distribution, wherein the usage scene comprises a work scene or an entertainment scene;
and sequencing all the using scene probabilities, determining the using scene with the largest using scene probability as the current using scene, and obtaining a model output result.
4. A scene recognition based multisystem handover method as claimed in claim 3, wherein the joint probability distribution is calculated according to the following formula: p (Si, uj) =p (si|uj) ×p (Uj);
where Si represents the scene type, uj represents the identity, P (Si|Uj) represents the probability of the scene Si given Uj, and P (Uj) represents the probability of the identity Uj.
5. The scene recognition based multisystem handover method as claimed in claim 4, wherein the usage scene probability is calculated according to the following formula: P(Scene|Si, Uj) = P(Si, Uj) / P(Si);
where Scene represents the usage scene type, P(Si, Uj) represents the joint probability of Si and Uj, and P(Si) represents the probability of the scene Si.
6. A scene recognition based multisystem handover method as claimed in claim 3, wherein the method further comprises:
acquiring historical use habits of the user;
and adjusting the joint probability distribution according to the historical usage habit.
7. A scene recognition based multisystem handover method as claimed in claim 3, wherein the method further comprises:
identifying a current emotional state of the user according to the face features;
and adjusting the joint probability distribution according to the current emotion state.
8. The scene recognition based multisystem handover method as claimed in claim 7, wherein the process of adjusting the joint probability distribution according to the current emotional state comprises the steps of:
determining whether the current emotional state is a positive emotion or a negative emotion;
when the current emotion state is determined to be a positive emotion, determining that the working scene probability weighting value is 0.1, and the entertainment scene probability weighting value is 0.3;
when the current emotion state is determined to be a negative emotion, determining that a working scene probability weighting value is 0.3, and an entertainment scene probability weighting value is 0.1;
and adjusting the probability of the working scene and the probability of the entertainment scene in the joint probability distribution according to the weighted value of the probability of the working scene and the weighted value of the probability of the entertainment scene.
9. A multi-system switching device based on scene recognition, the device comprising:
the shooting module is used for activating the camera to shoot the image of the use environment in real time when the computer is started;
the predicted scene type acquisition module is used for extracting scene characteristics in the image, inputting the scene characteristics into a preset environment scene recognition model and acquiring the predicted scene type of the use environment;
the user identity information acquisition module is used for extracting face features in the image, inputting the face features into a preset face recognition model and acquiring user identity information;
the current use scene determining module is used for determining a current use scene by combining the predicted scene type and the user identity information, wherein the current use scene comprises a working scene and an entertainment scene;
the first starting module is used for reading the guide information of the default working scene operating system and guiding and loading the starting of the working scene operating system if the current use scene is determined to belong to the working scene;
and the second starting module is used for reading the guide information of the default entertainment scene operating system and guiding and loading the entertainment scene operating system to start if the current use scene is determined to belong to the entertainment scene.
10. A PC host including a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized by: the processor, when executing the computer program, implements the steps of the scene recognition based multisystem handover method of any of claims 1-8.
CN202311305750.5A 2023-10-10 2023-10-10 Multi-system switching method and device based on scene recognition and PC host Active CN117251219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311305750.5A CN117251219B (en) 2023-10-10 2023-10-10 Multi-system switching method and device based on scene recognition and PC host


Publications (2)

Publication Number Publication Date
CN117251219A true CN117251219A (en) 2023-12-19
CN117251219B CN117251219B (en) 2024-07-02

Family

ID=89132862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311305750.5A Active CN117251219B (en) 2023-10-10 2023-10-10 Multi-system switching method and device based on scene recognition and PC host

Country Status (1)

Country Link
CN (1) CN117251219B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107490154A (en) * 2017-09-19 2017-12-19 广东美的制冷设备有限公司 Air conditioner and its control method, device and computer-readable recording medium
CN107992336A (en) * 2017-11-28 2018-05-04 深圳市筑泰防务智能科技有限公司 A kind of dual system switching method of enterprises mobile terminal
CN108875341A (en) * 2018-05-24 2018-11-23 北京旷视科技有限公司 A kind of face unlocking method, device, system and computer storage medium
CN109783047A (en) * 2019-01-18 2019-05-21 三星电子(中国)研发中心 Intelligent volume control method and device in a kind of terminal
US20200053276A1 (en) * 2018-08-08 2020-02-13 Samsung Electronics Co., Ltd. Method for processing image based on scene recognition of image and electronic device therefor
CN111597955A (en) * 2020-05-12 2020-08-28 博康云信科技有限公司 Smart home control method and device based on expression emotion recognition of deep learning
CN112329580A (en) * 2020-10-29 2021-02-05 珠海市大悦科技有限公司 Identity authentication method and device based on face recognition
CN114153501A (en) * 2020-09-07 2022-03-08 中兴通讯股份有限公司 GEO positioning-based terminal multi-system switching method and device, and terminal
CN115086478A (en) * 2022-05-10 2022-09-20 广东以诺通讯有限公司 Terminal information confidentiality method and device, electronic equipment and storage medium
CN116456155A (en) * 2023-04-21 2023-07-18 山东浪潮超高清视频产业有限公司 Android TV intelligent switching method and system based on face recognition



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant