CN114065800A - Emotion detection method and device - Google Patents

Emotion detection method and device

Info

Publication number
CN114065800A
Authority
CN
China
Prior art keywords
dynamic
current
target object
information
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111175599.9A
Other languages
Chinese (zh)
Inventor
史培荣
白金蓬
黎清顾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN202111175599.9A
Publication of CN114065800A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an emotion detection method and device, wherein the method comprises the following steps: acquiring current dynamic category information of a target object; establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object; obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model; and prompting according to the current dynamic category result. According to the technical scheme, when an unknown dynamic category of the target object is detected, a more accurate result with a smaller error can be obtained, detection efficiency is improved, and the user experience is improved.

Description

Emotion detection method and device
Technical Field
The invention belongs to the technical field of intelligent equipment, and particularly relates to an emotion detection method and device.
Background
In modern life, the pace of life keeps accelerating and people's emotions become more and more sensitive; as the quality of life improves, people pay increasing attention to changes in emotion. For example, a baby's cry can leave its parents at a loss, never quite knowing what the baby is trying to express, and in their panic the parents can only try to soothe the baby by various methods so that its mood can be calmed.
At present, products on the market use a camera to capture the user's facial features and a sound-receiving device to collect the user's voice in order to infer the user's current emotion; such approaches suffer from inaccurate judgment results, large errors and other problems.
Disclosure of Invention
Aiming at the problems of inaccurate judgment results and large errors that arise when a camera is used to capture a user's facial features and a sound-receiving device is used to collect the user's voice, the invention provides an emotion detection method and an emotion detection device, thereby solving the technical problem that the emotion of a target object is judged inaccurately and with large error.
In a first aspect of the present invention, there is provided a method for emotion detection, the method comprising:
acquiring current dynamic category information of a target object;
establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object;
obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model;
and prompting according to the current dynamic category result.
In some embodiments, the current dynamic category information of the target object is acquired through the sensor and the camera, wherein the current dynamic category information comprises limb dynamic information and physiological information.
In some embodiments, establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object includes:
generating a limb dynamic vector set according to the current limb dynamic information of the target object, and mapping the limb dynamic vector set into a three-dimensional coordinate system to obtain the current limb dynamic vector three-dimensional coordinate data of the target object;
generating a physiological information vector set according to the current physiological information of the target object, and mapping the physiological information vector set to a three-dimensional coordinate system to obtain the current physiological information vector three-dimensional coordinate data of the target object;
and establishing a current three-dimensional dynamic portrait model of the target object according to the current limb dynamic vector three-dimensional coordinate data and the current physiological information vector three-dimensional coordinate data of the target object.
In some embodiments, the training process of the emotion detection model includes:
acquiring historical dynamic category information of a target object and a dynamic category result corresponding to the historical dynamic category information;
establishing a historical three-dimensional dynamic portrait model of the target object according to the historical dynamic category information of the target object;
and training the BP neural network according to the historical three-dimensional dynamic portrait model and the dynamic classification result to obtain an emotion detection model.
In some embodiments, the prompting according to the current dynamic category result includes:
selecting a corresponding indicator light color according to the current dynamic category result;
and prompting the user by displaying the color of the indicator light.
In a second aspect of the present invention, there is provided an emotion detection apparatus, including:
the acquisition module is used for acquiring the current dynamic category information of the target object;
the establishing module is used for establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object;
the detection module is used for obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model;
and the prompting module is used for prompting according to the current dynamic category result.
In some embodiments, the acquisition module is configured to acquire, through the sensor and the camera, current dynamic category information of the target object, where the current dynamic category information includes limb dynamic information and physiological information.
In some embodiments, the establishing module comprises:
the generation submodule is used for generating a limb dynamic vector set according to the current limb dynamic information of the target object, and mapping the limb dynamic vector set into a three-dimensional coordinate system to obtain the current limb dynamic vector three-dimensional coordinate data of the target object; and,
generating a physiological information vector set according to the current physiological information of the target object, and mapping the physiological information vector set to a three-dimensional coordinate system to obtain the current physiological information vector three-dimensional coordinate data of the target object;
and the establishing submodule is used for establishing a current three-dimensional dynamic portrait model of the target object according to the current limb dynamic vector three-dimensional coordinate data and the current physiological information vector three-dimensional coordinate data of the target object.
In some embodiments, the apparatus further comprises:
the training module is used for acquiring historical dynamic category information of the target object and a dynamic category result corresponding to the historical dynamic category information; establishing a historical three-dimensional dynamic portrait model of the target object according to the historical dynamic category information of the target object; and training the BP neural network according to the historical three-dimensional dynamic portrait model and the dynamic category result to obtain an emotion detection model.
In some embodiments, the hint module comprises:
the selecting subunit is used for selecting the corresponding color of the indicator light according to the current dynamic category result;
and the prompting subunit is used for prompting the user by displaying the color of the indicator lamp.
Compared with the prior art, the technical scheme of the application has the following advantages or beneficial effects:
A three-dimensional dynamic portrait model of the target object is established, and a BP neural network is trained according to data in the three-dimensional dynamic portrait model to obtain an emotion detection model. The trained model is closer to the real dynamic state of the target object, so that an unknown dynamic category of the target object is detected more accurately and with smaller error, the detection efficiency is improved, and the user experience is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention, in which:
fig. 1 is a schematic flowchart of an emotion detection method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of another emotion detection method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an emotion detection apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention will be described in detail below with reference to the accompanying drawings and examples, so that how to apply technical means to solve technical problems and achieve the corresponding technical effects can be fully understood and implemented. The embodiments of the present invention and the features of the embodiments can be combined with each other without conflict, and the formed technical solutions are within the scope of the present invention.
Example one
The embodiment provides an emotion detection method. Fig. 1 is a schematic flowchart of an emotion detection method provided in an embodiment of the present application, and as shown in fig. 1, the method of the present embodiment may include the following steps:
and S100, acquiring the current dynamic category information of the target object.
In some embodiments, the current dynamic category information of the target object is acquired through the sensor and the camera, wherein the current dynamic category information comprises limb dynamic information and physiological information.
It should be noted that the sensor may be a set of sensors, and the sensor and the camera may detect the limb dynamic information (e.g., sound intensity, limb movement, facial expression, etc.) and the physiological information (e.g., body temperature, heart rate, blood pressure, etc.) of the target object.
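As an illustration only, the following Python sketch shows one way the acquired current dynamic category information could be structured; the field names, types and units (decibels, degrees Celsius, beats per minute, mmHg) are assumptions made for the example and are not specified by this application.

```python
# A minimal sketch (assumptions only) of the current dynamic category information
# collected by the sensors and the camera: limb dynamic information plus
# physiological information for one target object.
from dataclasses import dataclass
from typing import List


@dataclass
class LimbDynamics:
    sound_intensity_db: float        # sound intensity picked up by a microphone
    limb_movement: List[float]       # e.g. joint displacements estimated from the camera
    facial_expression: List[float]   # e.g. facial landmark offsets


@dataclass
class Physiology:
    body_temperature_c: float
    heart_rate_bpm: float
    blood_pressure_mmhg: float


@dataclass
class DynamicCategoryInfo:
    limb: LimbDynamics
    physiology: Physiology


# Example: one snapshot of a calm infant (dummy values).
info = DynamicCategoryInfo(
    limb=LimbDynamics(sound_intensity_db=35.0,
                      limb_movement=[0.1, 0.0, 0.2],
                      facial_expression=[0.05, 0.02]),
    physiology=Physiology(body_temperature_c=36.8,
                          heart_rate_bpm=120.0,
                          blood_pressure_mmhg=70.0),
)
```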
S200, establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object.
In some embodiments, establishing the current three-dimensional dynamic portrait model of the target object may include the following sub-steps:
s201, generating a limb dynamic vector set according to the current limb dynamic information of the target object, and mapping the limb dynamic vector set to a three-dimensional coordinate system to obtain the current limb dynamic vector three-dimensional coordinate data of the target object.
When the limb dynamic vector set is generated, the current limb dynamic information of the target object is sampled to generate the current limb dynamic vector set of the target object.
S202, generating a physiological information vector set according to the current physiological information of the target object, and mapping the physiological information vector set to a three-dimensional coordinate system to obtain the current physiological information vector three-dimensional coordinate data of the target object.
Wherein, in generating the physiological information vector set, the current physiological information of the target object is sampled to generate the current physiological information vector set of the target object.
S203, establishing a current three-dimensional dynamic portrait model of the target object according to the current limb dynamic vector three-dimensional coordinate data and the current physiological information vector three-dimensional coordinate data.
And S300, obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and the pre-trained emotion detection model.
In some embodiments, before obtaining the current dynamic category result of the target object, the method further comprises: and training the BP neural network model according to the mapping relation between the dynamic portrait model and the dynamic classification result to obtain an emotion detection model. The training process of the emotion detection model can comprise the following steps:
s301, acquiring historical dynamic category information of a target object and a dynamic category result corresponding to the historical dynamic category information;
s302, establishing a historical three-dimensional dynamic portrait model of the target object according to the historical dynamic category information of the target object;
and S303, training the BP neural network according to the historical three-dimensional dynamic portrait model and the dynamic classification result to obtain an emotion detection model.
Wherein, the emotion detection model stores the mapping relation between the dynamic portrait model of the target object and the dynamic classification result.
Further, based on the current three-dimensional dynamic portrait model of the target object, the current dynamic category result of the target object is determined according to the mapping relation between the dynamic portrait model of the target object and the dynamic category result.
Further, after a dynamic category result corresponding to the current three-dimensional dynamic portrait model of the target object is determined according to the pre-trained emotion detection model, the emotion detection model can be trained by using a mapping relation between the dynamic portrait model and the dynamic category result to obtain an updated emotion detection model.
And S400, prompting according to the current dynamic category result.
In some embodiments, the prompting according to the current dynamic category result includes:
selecting a corresponding indicator light color according to the current dynamic category result;
and prompting the user by displaying the color of the indicator light.
The embodiment of the application discloses an emotion detection method, which comprises the following steps: acquiring current dynamic category information of a target object; establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object; obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model; and prompting according to the current dynamic category result. With this emotion detection method, the trained model is closer to the real dynamics of the target object, so that an unknown dynamic category of the target object is detected more accurately and with smaller error, and the detection efficiency is improved.
Example two
The embodiment of the invention introduces an emotion detection method in more detail. Fig. 2 is a schematic flowchart of another emotion detection method provided in an embodiment of the present application, and as shown in fig. 2, the method of the present embodiment may include the following steps:
firstly, obtaining the current dynamic category information of the target object through a sensor and a camera.
In some embodiments, current dynamic category information of the target object is acquired through the sensor and the camera, wherein the current dynamic category information comprises limb dynamic information and physiological information.
It should be noted that the sensor may be a set of sensors, and the sensor and the camera may detect the limb dynamic information (e.g., sound intensity, limb movement, facial expression, etc.) and the physiological information (e.g., body temperature, heart rate, blood pressure, etc.) of the target object.
And secondly, establishing a current three-dimensional dynamic portrait model of the target object.
In some embodiments, establishing the current three-dimensional dynamic portrait model of the target object may include the following sub-steps:
1. and generating a limb dynamic vector set according to the current limb dynamic information of the target object, and mapping the limb dynamic vector set to a three-dimensional coordinate system to obtain the current limb dynamic vector three-dimensional coordinate data of the target object.
When the limb dynamic vector set is generated, the current limb dynamic information of the target object is sampled to generate the current limb dynamic vector set of the target object.
Specifically, a limb dynamic vector set M[a_1, a_2, ..., a_n] is generated by sampling, where a_n denotes the nth limb dynamic vector representing the collected limb dynamics of the target object. This limb dynamic vector set M[a_1, a_2, ..., a_n] is mapped into a three-dimensional coordinate system to form the limb dynamic vector three-dimensional coordinate data (X_M, Y_M, Z_M).
2. And generating a physiological information vector set according to the current physiological information of the target object, and mapping the physiological information vector set to a three-dimensional coordinate system to obtain the current physiological information vector three-dimensional coordinate data of the target object.
Wherein, in generating the physiological information vector set, the current physiological information of the target object is sampled to generate the current physiological information vector set of the target object.
Specifically, a physiological information vector set K[b_1, b_2, ..., b_n] is generated by sampling, where b_n denotes the nth physiological information vector representing the acquired physiological information of the target object. This physiological information vector set K[b_1, b_2, ..., b_n] is mapped into the three-dimensional coordinate system to form the physiological information vector three-dimensional coordinate data (X_K, Y_K, Z_K).
3. And establishing a current three-dimensional dynamic portrait model of the target object according to the current limb dynamic vector three-dimensional coordinate data and the current physiological information vector three-dimensional coordinate data.
Specifically, the current three-dimensional dynamic portrait model L of the target object is established according to the target object's current limb dynamic vector three-dimensional coordinate data (X_M, Y_M, Z_M) and current physiological information vector three-dimensional coordinate data (X_K, Y_K, Z_K).
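As an illustration of the three sub-steps above, the following Python sketch samples the two vector sets and maps each to a single three-dimensional coordinate before combining them into the portrait model L. The application does not specify the mapping function, so the projection used here (three summary statistics per vector set) is an assumption made for the example only.

```python
# A minimal sketch (assumed projection) of building the current three-dimensional
# dynamic portrait model L from the limb dynamic vector set M and the
# physiological information vector set K.
import numpy as np


def to_3d_coordinates(vectors: np.ndarray) -> np.ndarray:
    """Map a sampled vector set (n x d) to one 3D point; the projection is an assumption."""
    flat = vectors.reshape(len(vectors), -1)
    return np.array([flat.mean(), flat.std(), flat.max()])


def build_portrait_model(limb_vectors: np.ndarray,
                         physio_vectors: np.ndarray) -> np.ndarray:
    """Combine (X_M, Y_M, Z_M) and (X_K, Y_K, Z_K) into the portrait model L."""
    m_xyz = to_3d_coordinates(limb_vectors)    # (X_M, Y_M, Z_M)
    k_xyz = to_3d_coordinates(physio_vectors)  # (X_K, Y_K, Z_K)
    return np.concatenate([m_xyz, k_xyz])      # 6-dimensional representation of L


# Example with dummy sampled data: 50 limb dynamic vectors a_1..a_50 and
# 50 physiological information vectors b_1..b_50.
M = np.random.rand(50, 4)
K = np.random.rand(50, 3)
L = build_portrait_model(M, K)
```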
And thirdly, obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model.
In some embodiments, before obtaining the current dynamic category result of the target object, the step further comprises: and training the BP neural network model according to the mapping relation between the dynamic portrait model and the dynamic classification result to obtain an emotion detection model. The training process of the emotion detection model can comprise the following steps:
1. acquiring historical dynamic category information of a target object and a dynamic category result corresponding to the historical dynamic category information;
2. establishing a historical three-dimensional dynamic portrait model of the target object according to the historical dynamic category information of the target object;
3. and training the BP neural network according to the historical three-dimensional dynamic portrait model and the dynamic classification result to obtain an emotion detection model.
Wherein, the emotion detection model stores the mapping relation between the dynamic portrait model of the target object and the dynamic classification result.
Further, based on the current three-dimensional dynamic portrait model of the target object, the current dynamic category result of the target object is determined according to the mapping relation between the dynamic portrait model of the target object and the dynamic category result.
Further, after a dynamic category result corresponding to the current three-dimensional dynamic portrait model of the target object is determined according to the pre-trained emotion detection model, the emotion detection model can be trained by using a mapping relation between the dynamic portrait model and the dynamic category result to obtain an updated emotion detection model.
Further, if the current dynamic category result of the target object cannot be determined according to the mapping relationship between the dynamic portrait model and the dynamic category result of the target object, the emotion detection model may repeatedly calculate the current dynamic category information of the target object until the current dynamic category result of the target object is determined.
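The following Python sketch illustrates the training, inference and updating flow described above, using scikit-learn's MLPClassifier as a stand-in for the BP neural network; the network size, the six-value feature layout taken from the portrait model L, and the category labels are all assumptions made for the example.

```python
# A minimal sketch (assumptions only) of training the emotion detection model by
# backpropagation, predicting the current dynamic category result, and updating
# the model with the newly determined mapping.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Historical portrait-model data: one row per sample (X_M, Y_M, Z_M, X_K, Y_K, Z_K)
# and the dynamic category result recorded for that sample.
history_L = np.random.rand(200, 6)
history_labels = np.random.choice(
    ["normal", "hungry", "frightened", "temperature_abnormal"], size=200)

# Train the emotion detection model on the historical mapping.
emotion_model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000)
emotion_model.fit(history_L, history_labels)

# Inference: obtain the current dynamic category result from the current model L.
current_L = np.random.rand(1, 6)
current_result = emotion_model.predict(current_L)[0]

# Update: append the newly determined mapping and retrain, so the stored mapping
# between portrait models and dynamic category results stays current.
history_L = np.vstack([history_L, current_L])
history_labels = np.append(history_labels, current_result)
emotion_model.fit(history_L, history_labels)
```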
And fourthly, prompting according to the current dynamic classification result.
In some embodiments, the prompting according to the current dynamic category result includes:
selecting a corresponding indicator light color according to the current dynamic category result;
and prompting the user by displaying the color of the indicator light.
For example, the current dynamic category information of the target object may be expressed by lighting indicator lamps of different colors. For example, the indicator light indicates a normal state when displaying green, a hungry state when displaying yellow, a frightened state when displaying purple, and an abnormal body temperature state when displaying red.
Specifically, if the current dynamic category result of the target object is a normal state, the indicator light is turned on and displayed as green; if the current dynamic category result of the target object is an abnormal state, the indicator light is turned on and displayed in another color, prompting the user to take targeted action according to the specific dynamic category result. For example, when the target object is an infant: if the current dynamic category result belongs to a normal state, the indicator light is displayed as green and no further processing is needed; if the current dynamic category result belongs to a hungry state, the indicator light is displayed as yellow and the parents need to prepare to feed the baby; if the current dynamic category result belongs to an abnormal body temperature state, the indicator light is displayed as red and the parents need to prepare to go to the hospital; if the current dynamic category result belongs to a frightened state, the indicator light is displayed as purple and the parents need to soothe the baby by holding or comforting it.
It should be noted that the normal states include: happy, playful, sound asleep, quiet and the like; the abnormal states include: angry, frightened, hungry, abnormal body temperature, abnormal heart rate, crying and the like.
In some embodiments, if the current dynamic category result of the target object is an abnormal state, the current emotion of the target object can be further identified by combining the number of times the indicator light flashes with the color it displays.
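A minimal Python sketch of the prompting step follows, mapping the current dynamic category result to an indicator-light color and, for abnormal states, a flash count; the color assignments follow the examples above, while the flash counts themselves are assumed values for illustration.

```python
# A minimal sketch (assumed flash counts) of selecting the indicator-light color
# and flash count from the current dynamic category result.
INDICATOR_MAP = {
    "normal":               ("green",  0),  # steady green, no action needed
    "hungry":               ("yellow", 2),
    "frightened":           ("purple", 3),
    "temperature_abnormal": ("red",    4),
}


def prompt(current_result: str) -> None:
    color, flashes = INDICATOR_MAP.get(current_result, ("red", 5))
    if flashes == 0:
        print(f"Indicator light: steady {color}")
    else:
        print(f"Indicator light: {color}, flashing {flashes} times")


prompt("hungry")  # -> Indicator light: yellow, flashing 2 times
```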
In some embodiments, if the current dynamic category of the target object is in an abnormal state, the emotion of the target object can be relieved by playing some soothing music.
In some embodiments, if the current dynamic category result of the target object is in an abnormal state, a reminding message may be sent to the intelligent terminal to show the current state information of the target object, so that the current dynamic category information of the target object can be remotely grasped.
Furthermore, when the current dynamic category result displayed on the intelligent terminal is in an abnormal state, the emotion of the target object can be relieved in a voice/video call mode with the target object.
The embodiment of the application discloses another emotion detection method, which comprises the following steps: acquiring current dynamic category information of a target object; establishing a current three-dimensional dynamic portrait model of the target object; obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model; and prompting according to the current dynamic category result. With this emotion detection method, the trained model is closer to the real dynamics of the target object, so that an unknown dynamic category of the target object is detected more accurately and with smaller error, and the detection efficiency is improved.
EXAMPLE III
The embodiment of the invention provides an emotion detection device. Fig. 3 is a schematic structural diagram of an emotion detection apparatus provided in an embodiment of the present application, and as shown in fig. 3, the emotion detection apparatus of the present embodiment may include:
an obtaining module 301, configured to obtain current dynamic category information of the target object.
In some embodiments, the current dynamic category information of the target object is acquired through the sensor and the camera, wherein the current dynamic category information comprises limb dynamic information and physiological information.
It should be noted that the sensor may be a set of sensors, and the sensor and the camera may detect the limb dynamic information (e.g., sound intensity, limb movement, facial expression, etc.) and the physiological information (e.g., body temperature, heart rate, blood pressure, etc.) of the target object.
The establishing module 302 is configured to establish a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object.
In some embodiments, establishing the current three-dimensional dynamic portrait model of the target object may include the following sub-steps:
1. and generating a limb dynamic vector set according to the current limb dynamic information of the target object, and mapping the limb dynamic vector set to a three-dimensional coordinate system to obtain the current limb dynamic vector three-dimensional coordinate data of the target object.
When the limb dynamic vector set is generated, the current limb dynamic information of the target object is sampled to generate the current limb dynamic vector set of the target object.
Specifically, a limb dynamic vector set M[a_1, a_2, ..., a_n] is generated by sampling, where a_n denotes the nth limb dynamic vector representing the collected limb dynamics of the target object. This limb dynamic vector set M[a_1, a_2, ..., a_n] is mapped into a three-dimensional coordinate system to form the limb dynamic vector three-dimensional coordinate data (X_M, Y_M, Z_M).
2. And generating a physiological information vector set according to the current physiological information of the target object, and mapping the physiological information vector set to a three-dimensional coordinate system to obtain the current physiological information vector three-dimensional coordinate data of the target object.
Wherein, in generating the physiological information vector set, the current physiological information of the target object is sampled to generate the current physiological information vector set of the target object.
Specifically, a physiological information vector set K[b_1, b_2, ..., b_n] is generated by sampling, where b_n denotes the nth physiological information vector representing the acquired physiological information of the target object. This physiological information vector set K[b_1, b_2, ..., b_n] is mapped into the three-dimensional coordinate system to form the physiological information vector three-dimensional coordinate data (X_K, Y_K, Z_K).
3. And establishing a current three-dimensional dynamic portrait model of the target object according to the current limb dynamic vector three-dimensional coordinate data and the current physiological information vector three-dimensional coordinate data.
Specifically, the current three-dimensional dynamic portrait model L of the target object is established according to the target object's current limb dynamic vector three-dimensional coordinate data (X_M, Y_M, Z_M) and current physiological information vector three-dimensional coordinate data (X_K, Y_K, Z_K).
And the detection module 303 is configured to obtain a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and the pre-trained emotion detection model.
In some embodiments, the apparatus may further include a training module configured to acquire historical dynamic category information of the target object and a dynamic category result corresponding to the historical dynamic category information; establish a historical three-dimensional dynamic portrait model of the target object according to the historical dynamic category information of the target object; and train the BP neural network according to the historical three-dimensional dynamic portrait model and the dynamic category result to obtain an emotion detection model.
Wherein, the emotion detection model stores the mapping relation between the dynamic portrait model of the target object and the dynamic classification result.
Further, before the current dynamic category result of the target object is obtained, an emotion detection model is obtained through a training module.
Further, based on the current three-dimensional dynamic portrait model of the target object, the current dynamic category result of the target object is determined according to the mapping relation between the dynamic portrait model of the target object and the dynamic category result.
Further, after a dynamic category result corresponding to the current three-dimensional dynamic portrait model of the target object is determined according to the pre-trained emotion detection model, the emotion detection model can be trained by using a mapping relation between the dynamic portrait model and the dynamic category result to obtain an updated emotion detection model.
Further, if the current dynamic category result of the target object cannot be determined according to the mapping relationship between the dynamic portrait model and the dynamic category result of the target object, the emotion detection model may repeatedly calculate the current dynamic category information of the target object until the current dynamic category result of the target object is determined.
And the prompting module 304 is used for prompting according to the current dynamic category result.
In some embodiments, the prompting according to the current dynamic category result includes:
the selecting subunit is used for selecting the corresponding color of the indicator light according to the current dynamic category result;
and the prompting subunit is used for prompting the user by displaying the color of the indicator light.
In some embodiments, the current dynamic category of the target object may be identified by selectively lighting indicator lamps of different colors to prompt the user. For example, the indicator light indicates a normal state when displayed in green, a hungry state when displayed in yellow, a frightened state when displayed in purple, and an abnormal body temperature state when displayed in red.
Specifically, if the current dynamic category result of the target object is a normal state, the indicator light is turned on and displayed as green; if the current dynamic category result of the target object is an abnormal state, the indicator light is turned on and displayed in another color, prompting the user to take targeted action according to the specific dynamic category result. For example, when the target object is an infant: if the current dynamic category result belongs to a normal state, the indicator light is displayed as green and no further processing is needed; if the current dynamic category result belongs to a hungry state, the indicator light is displayed as yellow and the parents need to prepare to feed the baby; if the current dynamic category result belongs to an abnormal body temperature state, the indicator light is displayed as red and the parents need to prepare to go to the hospital; if the current dynamic category result belongs to a frightened state, the indicator light is displayed as purple and the parents need to soothe the baby by holding or comforting it.
It should be noted that the normal states include: happy, playful, sound asleep, quiet and the like; the abnormal states include: angry, frightened, hungry, abnormal body temperature, abnormal heart rate, crying and the like.
In some embodiments, if the current dynamic category result of the target object is in an abnormal state, the current emotion of the target object can be identified by combining the number of times of flashing of the indicator light while displaying different colors.
In some embodiments, the prompt module further includes an audio/video playing subunit, and if the current dynamic category result of the target object is in an abnormal state, the emotion of the target object can be relieved by playing some soothing music.
In some embodiments, the prompt module further includes a communication subunit, and if the current dynamic category result of the target object is in an abnormal state, a prompt message may be sent to the intelligent terminal to show the current state information of the target object, so that the current dynamic category information of the target object can be remotely grasped.
Furthermore, when the current dynamic category result displayed on the intelligent terminal is in an abnormal state, the emotion of the target object can be relieved by carrying out voice/video call with the target object.
The embodiment of the application discloses an emotion detection device, comprising: the acquisition module, used for acquiring the current dynamic category information of the target object; the establishing module, used for establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object; the detection module, used for obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model; and the prompting module, used for prompting according to the current dynamic category result. The emotion detection device provided by the invention improves the accuracy of judging the current dynamic category information of the target object, improves the detection efficiency, and improves the user experience.
In summary, according to the emotion detection method and the emotion detection device provided by the present invention, the accuracy of determining the current dynamic category information of the target object is improved, the complexity of operation is reduced, and the user experience is improved.
In some embodiments provided by the present invention, the emotion detection device can be used to intelligently detect emotion, so the beneficial effects it achieves can be referred to the beneficial effects of the corresponding methods provided above and are not repeated here.
It should be further understood that the methods disclosed in the several embodiments of the present invention may be implemented in other ways. The method embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and apparatus according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, a segment, or a portion of a computer program, which comprises one or more computer programs for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures, or indeed, may be executed substantially concurrently, or in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer programs.
In the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, apparatus or device comprising the element; if the description to "first", "second", etc. is used for descriptive purposes only, it is not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated or implicitly indicating the precedence of the technical features indicated; in the description of the present invention, unless explicitly defined otherwise, the terms "normal state", "abnormal state", "BP neural network", etc. should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meaning of the terms in the present invention by combining the specific contents of the technical solutions; in the description of the present invention, the terms "plurality" and "plurality" mean at least two unless otherwise specified.
Finally, it is noted that throughout the description of the present specification, references to the description of "one embodiment," "some embodiments," "an example" or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it is to be understood that the above embodiments are exemplary and that the present invention is illustrative only and is not to be construed as limited thereto. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method of emotion detection, the method comprising:
acquiring current dynamic category information of a target object;
establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object;
obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model;
and prompting according to the current dynamic category result.
2. The method of claim 1, wherein the obtaining current dynamic category information of the target object comprises:
and acquiring current dynamic category information of the target object through a sensor and a camera, wherein the current dynamic category information comprises limb dynamic information and physiological information.
3. The method of claim 2, wherein the establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object comprises:
generating a limb dynamic vector set according to the current limb dynamic information of the target object, and mapping the limb dynamic vector set into a three-dimensional coordinate system to obtain the current limb dynamic vector three-dimensional coordinate data of the target object;
generating a physiological information vector set according to the current physiological information of the target object, and mapping the physiological information vector set to a three-dimensional coordinate system to obtain the current physiological information vector three-dimensional coordinate data of the target object;
and establishing a current three-dimensional dynamic portrait model of the target object according to the current limb dynamic vector three-dimensional coordinate data and the current physiological information vector three-dimensional coordinate data.
4. The method of claim 1, wherein the training process of the emotion detection model comprises:
acquiring historical dynamic category information of a target object and a dynamic category result corresponding to the historical dynamic category information;
establishing a historical three-dimensional dynamic portrait model of the target object according to the historical dynamic category information of the target object;
and training a BP neural network according to the historical three-dimensional dynamic portrait model and the dynamic classification result to obtain the emotion detection model.
5. The method of claim 1, wherein the prompting based on the current dynamic category result comprises:
selecting a corresponding indicator light color according to the current dynamic category result;
and prompting the user by displaying the color of the indicator light.
6. An emotion detection apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the current dynamic category information of the target object;
the establishing module is used for establishing a current three-dimensional dynamic portrait model of the target object according to the current dynamic category information of the target object;
the detection module is used for obtaining a current dynamic category result of the target object according to the current three-dimensional dynamic portrait model and a pre-trained emotion detection model;
and the prompting module is used for prompting according to the current dynamic category result.
7. The apparatus of claim 6, wherein the obtaining module comprises:
and acquiring the current dynamic category information of the target object through a sensor and a camera, wherein the current dynamic category information comprises limb dynamic information and physiological information.
8. The apparatus of claim 7, wherein the establishing module comprises:
the generation submodule is used for generating a limb dynamic vector set according to the current limb dynamic information of the target object, and mapping the limb dynamic vector set into a three-dimensional coordinate system to obtain the current limb dynamic vector three-dimensional coordinate data of the target object; and,
generating a physiological information vector set according to the current physiological information of the target object, and mapping the physiological information vector set to a three-dimensional coordinate system to obtain the current physiological information vector three-dimensional coordinate data of the target object;
and the establishing submodule is used for establishing a current three-dimensional dynamic portrait model of the target object according to the current limb dynamic vector three-dimensional coordinate data and the current physiological information vector three-dimensional coordinate data.
9. The apparatus of claim 6, further comprising:
the training module is used for acquiring historical dynamic category information of the target object and a dynamic category result corresponding to the historical dynamic category information; establishing a historical three-dimensional dynamic portrait model of the target object according to the historical dynamic category information of the target object; and training a BP neural network according to the historical three-dimensional dynamic portrait model and the dynamic classification result to obtain the emotion detection model.
10. The apparatus of claim 6, wherein the prompt module comprises:
the selecting subunit is used for selecting the corresponding color of the indicator light according to the current dynamic category result;
and the prompting subunit is used for prompting the user by displaying the color of the indicator light.
CN202111175599.9A 2021-10-09 2021-10-09 Emotion detection method and device Pending CN114065800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111175599.9A CN114065800A (en) 2021-10-09 2021-10-09 Emotion detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111175599.9A CN114065800A (en) 2021-10-09 2021-10-09 Emotion detection method and device

Publications (1)

Publication Number Publication Date
CN114065800A true CN114065800A (en) 2022-02-18

Family

ID=80234263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111175599.9A Pending CN114065800A (en) 2021-10-09 2021-10-09 Emotion detection method and device

Country Status (1)

Country Link
CN (1) CN114065800A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114883014A (en) * 2022-04-07 2022-08-09 南方医科大学口腔医院 Patient emotion feedback device and method based on biological recognition and treatment couch

Similar Documents

Publication Publication Date Title
CN103561652B (en) Method and system for assisting patients
JP3968522B2 (en) Recording apparatus and recording method
US10311303B2 (en) Information processing apparatus, information processing method, and program
US10504031B2 (en) Method and apparatus for determining probabilistic context awareness of a mobile device user using a single sensor and/or multi-sensor data fusion
CN109480868B (en) Intelligent infant monitoring system
CN110291489A (en) The efficient mankind identify intelligent assistant's computer in calculating
KR20180137490A (en) Personal emotion-based computer-readable cognitive memory and cognitive insights for memory and decision making
JP7285589B2 (en) INTERACTIVE HEALTH CONDITION EVALUATION METHOD AND SYSTEM THEREOF
CN110706449A (en) Infant monitoring method and device, camera equipment and storage medium
JP6900058B2 (en) Personal assistant control system
CN108670196B (en) Method and device for monitoring sleep state of infant
US20240205584A1 (en) Ear-wearable devices for control of other devices and related methods
CN112069949A (en) Artificial intelligence-based infant sleep monitoring system and monitoring method
US20220401033A1 (en) Monitoring system for a feeding bottle
CN114065800A (en) Emotion detection method and device
WO2017143951A1 (en) Expression feedback method and smart robot
KR20140032651A (en) Method for emotion feedback service and smart device using the same
KR101927373B1 (en) Method, apparatus and system for monitering resident
CN113764099A (en) Psychological state analysis method, device, equipment and medium based on artificial intelligence
JP2008076904A (en) Feeling discrimination method, feeling discrimination device, and atmosphere information communication terminal
CN107203259B (en) Method and apparatus for determining probabilistic content awareness for mobile device users using single and/or multi-sensor data fusion
CN209220258U (en) Intelligent necklace, terminal and its system
JP6900089B2 (en) Personal assistant control system
CN110353637A (en) Data processing method, device and temperature check system
CN109124656A (en) Information processing unit, terminal, system and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination