CN111967380A - Content recommendation method and system - Google Patents


Info

Publication number
CN111967380A
Authority
CN
China
Prior art keywords
facial expression
image information
scene
expression
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010822238.8A
Other languages
Chinese (zh)
Inventor
高扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Intelligent Technology Co Ltd
Xiamen Yunzhixin Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Xiamen Yunzhixin Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd and Xiamen Yunzhixin Intelligent Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a content recommendation method and system. The content recommendation method comprises the following steps: acquiring voice information of a speaker; acquiring image information of the speaker, wherein the image information comprises facial image information and surrounding-environment image information; processing the facial image information to obtain facial expression features; matching the facial expression features against a preset expression template to obtain scores for the facial expression features; determining the current expression type according to the scores; matching the surrounding-environment image information against scene models to determine the current scene; and recommending content according to the current expression type, the current scene and the voice information of the speaker. User experience is thereby improved.

Description

Content recommendation method and system
Technical Field
The invention relates to the technical field of data processing, in particular to a content recommendation method and system.
Background
In the prior art, content recommendation can be performed using a user's search behavior and hotspot data. However, recommendation that relies only on hotspot data and user search behavior data lacks diversity and accuracy: it does not take the user's state and scene information into account, so the recommended content may fail to meet the user's needs or may not suit the user's current state.
A content recommendation method that adapts to multiple scenes and satisfies user needs is therefore urgently required.
Disclosure of Invention
Embodiments of the invention aim to provide a content recommendation method and system, so as to solve the prior-art problem that recommendation is made only according to a user's search behavior or hotspot data, without considering the user's scene and expression.
In a first aspect, the present invention provides a content recommendation method, including:
acquiring voice information of a speaker;
acquiring image information of a speaker; the image information comprises face image information and surrounding environment image information;
processing the facial image information to obtain facial expression characteristics;
matching the facial expression features with a preset facial expression template to obtain scores of the facial expression features;
determining the current expression type according to the scores of the facial expression features;
matching the surrounding environment image information with a scene model to determine a current scene;
and recommending contents according to the current expression type, the current scene and the voice information of the speaker.
In one possible implementation, before the above steps, the method further includes:
obtaining a plurality of test expression types;
training a model on the test expression types, and determining the facial expression features corresponding to each expression type in the trained model;
setting a weight for each facial expression feature corresponding to each expression type in the model;
and obtaining the expression templates of the model from the facial expression features and their weights.
In a possible implementation manner, matching the facial expression features with a preset facial expression template to obtain scores of the facial expression features specifically includes:
matching the facial expression features with the facial expression features in the model, and determining the matching degree between each facial feature's expression and the corresponding feature in the model;
and obtaining the scores of the facial expression features under each expression type according to these matching degrees and the weight of each facial feature.
In a possible implementation manner, determining the current expression type according to the scores of the facial expression features specifically includes:
judging whether each score is larger than a preset score threshold;
when at least one score is larger than the preset score threshold, determining the expression type with the highest score as the current expression type;
and when no score is larger than the preset score threshold, determining that there is no current expression type.
In a possible implementation manner, matching the surrounding-environment image information with the scene models to determine the current scene specifically includes:
processing the surrounding image information, and extracting a plurality of elements in the surrounding image information;
and matching the elements with a scene model, and determining the scene with the maximum number of matched elements in the scene model as the current scene.
In one possible implementation, the scene models include a KTV/karaoke-bar scene, a home scene and an outdoor scene.
In a possible implementation manner, after acquiring the voice information of the speaker, the method further includes:
when a plurality of speakers are provided, determining a target speaker according to the volume of the speakers;
setting the target speaker as a following object;
and controlling the camera according to the following object to acquire the image information of the following object.
In one possible implementation, after the above steps, the method further includes:
acquiring search behavior data of a user;
and recommending content according to the search behavior data, the current expression type, the current scene and the voice information of the speaker.
In a second aspect, the present invention provides a content recommendation system, including:
the acquisition unit is used for acquiring voice information of a speaker;
the acquisition unit is further used for acquiring image information of a speaker; the image information comprises face image information and surrounding environment image information;
the processing unit is used for processing the facial image information to obtain facial expression characteristics;
the matching unit is used for matching the facial expression features with a preset expression template to obtain scores of the facial expression features;
the determining unit is used for determining the current expression type according to the scores of the facial expression features;
the determining unit is further configured to match the ambient image information with a scene model to determine a current scene;
and the recommending unit is used for recommending contents according to the current expression type, the current scene and the voice information of the speaker.
In a third aspect, the invention provides an apparatus comprising a memory for storing a program and a processor for performing the method of any of the first aspects.
In a fourth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of the first aspect.
In a fifth aspect, the invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
By applying the content recommendation method provided by the embodiment of the invention, the existing hotspot data and user search behavior data can be matched for recommendation by utilizing the camera image detection and voice recognition technology, so that the content recommendation accuracy is improved, the diversification of recommended content is realized, and the user experience is improved.
Drawings
Fig. 1 is a schematic flow chart of a content recommendation method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a content recommendation system according to a second embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic flow chart of a content recommendation method according to an embodiment of the present invention. The execution subject of the content recommendation method is a device with computing capability, such as a processor or a terminal. Before content recommendation is carried out, a model needs to be trained; scores of the facial expression features can be obtained through this model.
The model can be obtained by:
First, a plurality of test expression types is obtained. Second, a model is trained on the test expression types, and the facial expression features corresponding to each expression type are determined in the trained model. Third, a weight is set for each facial expression feature of each expression type in the model. Finally, the expression templates of the model are obtained from the facial expression features and their weights.
Specifically, a large number of pictures showing user expressions, including angry, happy, sad and the like, can be collected and used for training to obtain the facial expression features corresponding to each expression type. For example, for the expression type "happy", the relevant facial features include the mouth and the eyes: the corners of the mouth are slightly raised and the eyes are crescent-shaped. The mouth weight may be set to 60% and the eye weight to 40%. For the expression type "puzzled", the relevant features include the eyes and eyebrows: the gaze is unfocused and the brows are furrowed; the eye weight is set to 40% and the eyebrow weight to 60%. For the expression type "angry", the features are a pouting mouth, glaring eyes and flared nostrils; the mouth weight is set to 20%, the eye weight to 40% and the nose weight to 40%.
Here, the five facial features (the five sense organs) are the ears, eyebrows, eyes, nose and lips. Each expression type involves a corresponding set of these features, and the features of each expression type carry different weights.
Thus, from each facial expression feature under each expression type and its corresponding weight, a template for that expression type, called an expression template, is obtained. A user's score can then be computed against each expression template in the model.
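As a loose illustration (not the patent's actual implementation), the expression templates described above can be represented as per-feature weight maps. The feature names and weights below are assumptions taken from the worked examples in this description, not trained values:

```python
# Hypothetical expression templates sketched from the examples above.
# Feature names and weights are illustrative assumptions, not trained values.
EXPRESSION_TEMPLATES = {
    "happy":   {"mouth": 0.6, "eyes": 0.4},
    "puzzled": {"eyes": 0.4, "eyebrows": 0.6},
    "angry":   {"mouth": 0.2, "eyes": 0.4, "nose": 0.4},
}

# Each template's weights sum to 1 so weighted scores fall on a 0-100 scale.
for name, weights in EXPRESSION_TEMPLATES.items():
    assert abs(sum(weights.values()) - 1.0) < 1e-9, name
```

Keeping the weights normalized per template is what lets scores from different templates be compared against a single threshold later.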
The content recommendation method of the present application is specifically described below with reference to fig. 1:
step 110, obtaining the voice information of the speaker.
Specifically, the voice information of the speaker can be acquired through a microphone. When there are several speakers, the target speaker can be determined according to volume; the target speaker is set as the following object, and the camera is controlled to follow that object.
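The loudest-speaker selection just described can be sketched as follows; the function name and the assumption that per-speaker volumes have already been measured are illustrative:

```python
def pick_target_speaker(speaker_volumes):
    """Return the id of the loudest speaker, who becomes the camera's
    following object. `speaker_volumes` maps speaker id -> measured volume."""
    return max(speaker_volumes, key=speaker_volumes.get)

# With two detected speakers, the louder one is chosen as the following object.
target = pick_target_speaker({"speaker_a": 0.35, "speaker_b": 0.80})
```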
For example, the voice message may be "I want to listen to a song".
Step 120, acquiring image information of a speaker; the image information includes face image information and surrounding image information.
Specifically, after the following object is set, the image information of the following object can be acquired as the camera rotates.
The image information may include face image information and surrounding image information.
Step 130, processing the facial image information to obtain the facial expression features.
Specifically, feature extraction is performed on the facial image information to obtain the facial expression features; for example, feature images of the ears, eyebrows, eyes, nose and lips can be extracted respectively.
Step 140, matching the facial expression features with the preset expression templates to obtain scores of the facial expression features.
The facial expression features can be matched with the facial expression features in the model to determine the matching degree between each facial feature's expression and the corresponding feature in the model; the scores of the facial expression features under each expression type are then obtained from these matching degrees and the weight of each feature.
Specifically, the extracted ears, eyebrows, eyes, nose and lips may each be matched against the different expression templates. For example, if the eye features match the "happy" template's eye features with a degree of 95%, and the mouth features match with a degree of 90%, then with an eye weight of 40% and a mouth weight of 60%, the score against the "happy" expression type is (95% × 40% + 90% × 60%) × 100 = 92 points. In this way, the speaker's facial expression features are scored against every expression template.
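The weighted scoring in the example above can be sketched as a small helper; the function and parameter names are assumptions for illustration:

```python
def expression_score(match_degrees, feature_weights):
    """Weighted sum of per-feature matching degrees (each 0-1),
    scaled to a 0-100 score."""
    return 100 * sum(
        match_degrees.get(feature, 0.0) * weight
        for feature, weight in feature_weights.items()
    )

# Worked example from the text: eyes match 95% (weight 40%),
# mouth matches 90% (weight 60%) -> (0.95*0.4 + 0.90*0.6)*100 = 92 points.
happy_score = expression_score({"eyes": 0.95, "mouth": 0.90},
                               {"eyes": 0.40, "mouth": 0.60})
```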
Step 150, determining the current expression type according to the scores of the facial expression features.
Specifically, it is judged whether each score is larger than a preset score threshold. When at least one score exceeds the threshold, the expression type with the highest score is determined as the current expression type; when no score exceeds the threshold, no current expression type is determined.
For example, if the scores of the speaker's facial expression features against the happy, sad and angry expression templates are 92, 32 and 40 respectively, only 92 meets the requirement, so the speaker's expression type can be determined to be happy.
That is, if a template's matching score exceeds 80 points, the expression of the highest-scoring template is taken as the user's expression; if no template scores above 80, the expression dimension of the user does not participate in content recommendation.
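The threshold rule above can be sketched like this; it is an illustrative helper, with `None` standing in for the "expression dimension not used" case:

```python
def current_expression(template_scores, threshold=80):
    """Pick the highest-scoring expression type if it beats the threshold;
    otherwise return None so the expression dimension is skipped."""
    if not template_scores:
        return None
    best = max(template_scores, key=template_scores.get)
    return best if template_scores[best] > threshold else None

# From the example: happy=92, sad=32, angry=40 -> only 92 passes the bar.
```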
Step 160, matching the surrounding image information with the scene model to determine the current scene.
The surrounding-environment image information is processed to extract a plurality of elements, which are matched against the scene models; the scene with the largest number of matched elements is determined as the current scene. The scene models include a KTV/karaoke-bar scene, a home scene and an outdoor scene.
The extracted elements include, but are not limited to, a song-request list, stage lighting, a bed, a sofa, a refrigerator, a wardrobe, a tree, a fountain, a building and the like. The extracted elements are matched against the scene models, and if, for example, three or more of them match elements of a certain scene, that scene is determined as the current scene.
For example, extracted elements including a sofa, a refrigerator, a wardrobe and a bed match the home scene.
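A rough sketch of the element-overlap scene matching follows. The scene names come from the text, but the element lists and the minimum-match count of three are illustrative assumptions:

```python
# Hypothetical scene element sets drawn from the examples in the text.
SCENE_ELEMENTS = {
    "ktv": {"song-request list", "stage lighting", "microphone"},
    "home": {"sofa", "refrigerator", "wardrobe", "bed"},
    "outdoor": {"tree", "fountain", "building"},
}

def match_scene(extracted_elements, scenes=SCENE_ELEMENTS, min_matches=3):
    """Return the scene sharing the most elements with the extracted set,
    or None when even the best overlap is below the minimum."""
    best_scene, overlap = max(
        ((name, len(elements & extracted_elements))
         for name, elements in scenes.items()),
        key=lambda pair: pair[1],
    )
    return best_scene if overlap >= min_matches else None
```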
Step 170, recommending content according to the current expression type, the current scene and the voice information of the speaker.
Specifically, after the speaker's expression type, scene and voice information are extracted, the three can be combined with the user's search behavior data or with hotspot data to perform content recommendation.
For example, the user says "I want to listen to music" and is recognized as happy in a KTV or karaoke-bar scene, so a cheerful song from the KTV hot list is recommended. At home, an angry user is recommended quiet, soothing songs; in an outdoor square scene, square-dance music is recommended. When the user looks puzzled, uplifting songs that the user has searched for most often are recommended. By combining the user's expression, scene and voice information in this way, recommendation becomes intelligent and user experience is improved.
Furthermore, when the expression type is absent (for example, the camera has rotated 360 degrees without finding the speaker's face), the scene is matched from the collected surrounding-environment images alone, and intelligent recommendation is performed in combination with the user's search behavior.
Further, if no scene is matched either, recommendation is performed directly according to the user's search behavior data; and if no search behavior data is found, it degrades further to recommending hotspot data.
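The degradation chain described in the last paragraphs can be sketched as follows; the signal shapes and return values are purely illustrative:

```python
def recommend(expression, scene, search_history, hotspot_data):
    """Fall back through the available signals: expression+scene first,
    then scene alone, then search history, and finally hotspot data."""
    if expression is not None and scene is not None:
        return ("expression+scene", expression, scene)
    if scene is not None:
        return ("scene+search", scene)
    if search_history:
        return ("search", search_history[0])
    return ("hotspot", hotspot_data[0])
```

Each branch corresponds to one of the fallback levels in the text, so removing a signal (no face found, no scene matched, no history) naturally shifts the recommendation one level down.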
By applying the content recommendation method provided by the embodiment of the invention, the existing hotspot data and user search behavior data can be matched for recommendation by utilizing the camera image detection and voice recognition technology, so that the content recommendation accuracy is improved, the diversification of recommended content is realized, and the user experience is improved.
Fig. 2 is a schematic structural diagram of a content recommendation system according to a second embodiment of the present invention; the content recommendation system is applied in the content recommendation method of the first embodiment. As shown in fig. 2, the content recommendation system includes: an obtaining unit 210, a processing unit 220, a matching unit 230, a determining unit 240 and a recommending unit 250.
The obtaining unit 210 is configured to obtain voice information of a speaker;
the obtaining unit 210 is further configured to obtain image information of a speaker; the image information comprises face image information and surrounding environment image information;
the processing unit 220 is configured to process the facial image information to obtain facial expression features;
the matching unit 230 is used for matching the facial expression features with a preset expression template to obtain scores of the facial expression features;
the determining unit 240 is configured to determine the current expression type according to the scores of the facial expression features;
the determining unit 240 is further configured to match the image information of the surrounding environment with the scene model, and determine a current scene;
the recommending unit 250 is configured to recommend content according to the current expression type, the current scene, and the voice information of the speaker.
Further, the obtaining unit 210 is further configured to obtain a plurality of test expression types;
the determining unit 240 is further configured to train the test expression types, and determine facial expression features corresponding to each expression type in the trained model;
the processing unit 220 is further configured to set a weight for the facial expression feature corresponding to each expression type in the model; and obtaining an expression template of the model according to the facial expression characteristics of the five sense organs and the weights of the facial expression characteristics of the five sense organs.
Further, the matching unit 230 is specifically configured to:
matching the facial expression features with the facial expression features in the model, and determining the matching degree between each facial feature's expression and the corresponding feature in the model;
and obtaining the scores of the facial expression features under each expression type from these matching degrees and the weight of each feature.
Further, the determining unit 240 is specifically configured to: judging whether the score is larger than a preset score threshold value or not;
when at least one score is larger than the preset score threshold, determining the expression type with the highest score as the current expression type;
and when no score is larger than the preset score threshold, determining that there is no current expression type.
Further, the determining unit 240 is further configured to process the surrounding image information to extract a plurality of elements in the surrounding image information;
and matching the elements with the scene model, and determining the scene with the maximum number of matched elements in the scene model as the current scene.
The scene models comprise a KTV/karaoke-bar scene, a home scene and an outdoor scene.
Further, the determining unit 240 is further configured to determine the target speaker according to the volume of the speaker when there are multiple speakers; setting a target speaker as a following object; and controlling the camera according to the following object to acquire the image information of the following object.
By applying the content recommendation system provided by the embodiment of the invention, the existing hotspot data and user search behavior data can be matched for recommendation by utilizing the camera image detection and voice recognition technology, so that the content recommendation accuracy is improved, the diversification of recommended content is realized, and the user experience is improved.
The third embodiment of the invention provides equipment, which comprises a memory and a processor, wherein the memory is used for storing programs, and the memory can be connected with the processor through a bus. The memory may be a non-volatile memory such as a hard disk drive and a flash memory, in which a software program and a device driver are stored. The software program is capable of performing various functions of the above-described methods provided by embodiments of the present invention; the device drivers may be network and interface drivers. The processor is used for executing a software program, and the software program can realize the method provided by the first embodiment of the invention when being executed.
A fourth embodiment of the present invention provides a computer program product including instructions, which, when the computer program product runs on a computer, causes the computer to execute the method provided in the first embodiment of the present invention.
The fifth embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method provided in the first embodiment of the present invention is implemented.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A content recommendation method, characterized in that the content recommendation method comprises:
acquiring voice information of a speaker;
acquiring image information of a speaker; the image information comprises face image information and surrounding environment image information;
processing the facial image information to obtain facial expression characteristics;
matching the facial expression features with a preset facial expression template to obtain scores of the facial expression features;
determining the current expression type according to the scores of the facial expression features;
matching the surrounding environment image information with a scene model to determine a current scene;
and recommending content according to the current expression type, the current scene, and the voice information of the speaker.
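The overall flow of claim 1 (fusing the recognized expression, the detected scene, and the speech content into one recommendation decision) can be sketched as follows. The tag-overlap ranking, the catalog structure, and all names are illustrative assumptions; the patent does not specify a fusion strategy.

```python
# Hypothetical sketch of the claim-1 flow: the expression type, current
# scene, and spoken keywords are fused into a context set, and catalog
# items are ranked by tag overlap. All tags and titles are invented.

def recommend(expression, scene, keywords, catalog):
    """Return the catalog title whose tags best overlap the fused context."""
    context = {expression, scene, *keywords}
    return max(catalog, key=lambda item: len(context & set(item["tags"])))["title"]

catalog = [
    {"title": "upbeat pop playlist", "tags": ["happy", "ktv", "sing"]},
    {"title": "calm piano album", "tags": ["sad", "family", "relax"]},
]
choice = recommend("happy", "ktv", ["sing"], catalog)
```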
2. The method of claim 1, further comprising, prior to performing the method:
obtaining a plurality of test expression types;
training a model on the test expression types, and determining the facial expression features corresponding to each expression type in the trained model;
setting weights for the facial expression features corresponding to each expression type in the model;
and obtaining an expression template of the model according to the facial expression features and their weights.
3. The method according to claim 2, wherein matching the facial expression features with a preset facial expression template to obtain scores of the facial expression features specifically comprises:
matching the facial expression features with the facial expression features in the model, and determining the matching degree between the expression of each facial feature and the corresponding feature in the model;
and obtaining the score of the facial expression features for each expression type according to the matching degree of each facial feature's expression with that expression type's features in the model and the weight of each facial feature.
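The weighted scoring of claims 2–3 (each facial feature's matching degree multiplied by that feature's trained weight, then summed per expression type) can be sketched as below. The feature names, weights, and matching degrees are assumed values, not from the patent.

```python
# Illustrative weighted scoring from claims 2-3: per-feature matching
# degrees against an expression template are combined with trained
# per-feature weights. All numbers below are invented examples.

def expression_score(match_degrees, weights):
    """Weighted sum of per-feature matching degrees (dicts keyed by feature)."""
    return sum(match_degrees[f] * weights[f] for f in weights)

happy_weights = {"eyes": 0.3, "mouth": 0.5, "brows": 0.2}   # template weights
degrees = {"eyes": 0.8, "mouth": 0.9, "brows": 0.4}          # per-feature match
score = expression_score(degrees, happy_weights)             # approx. 0.77
```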
4. The method according to claim 1, wherein determining a current expression type according to the scores of the facial expression features specifically comprises:
judging whether each score is larger than a preset score threshold value;
when a score is larger than the preset score threshold value, determining the expression type with the highest score as the current expression type;
and when no score is larger than the preset score threshold value, determining that the current expression type is zero (i.e., none).
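Claim 4's threshold decision can be sketched as follows: the highest-scoring expression type is selected only if it clears a preset threshold, otherwise no expression type ("zero") is reported. The 0.6 threshold is an assumed value.

```python
# Hypothetical threshold logic from claim 4. Returns None for the
# claim's "zero" (no expression type) case; the threshold is assumed.

def current_expression(scores, threshold=0.6):
    """Return the best-scoring expression type above the threshold, else None."""
    if not scores:
        return None
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None
```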
5. The method of claim 1, wherein matching the ambient image information with a scene model to determine the current scene specifically comprises:
processing the surrounding environment image information, and extracting a plurality of elements from the surrounding environment image information;
and matching the elements with a scene model, and determining the scene with the maximum number of matched elements in the scene model as the current scene.
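Claim 5's scene matching (pick the scene model sharing the most elements with the surrounding-environment image) can be sketched as below. The element names and scene sets are invented examples.

```python
# Sketch of claim 5: elements extracted from the surrounding-environment
# image are matched against per-scene element sets; the scene with the
# largest overlap wins. All element and scene names are illustrative.

def match_scene(elements, scene_models):
    """Return the scene whose element set overlaps the detected elements most."""
    detected = set(elements)
    return max(scene_models, key=lambda s: len(detected & scene_models[s]))

scene_models = {
    "ktv": {"microphone", "stage_lights", "screen"},
    "family": {"sofa", "television", "dining_table"},
    "outdoor": {"tree", "sky", "road"},
}
```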
6. The method of claim 1, wherein the scene models include a KTV/karaoke bar scene, a family scene, and an outdoor scene.
7. The method of claim 1, wherein after obtaining the voice information of the speaker, the method further comprises:
when there are a plurality of speakers, determining a target speaker according to the speakers' volumes;
setting the target speaker as a following object;
and controlling the camera according to the following object to acquire the image information of the following object.
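Claim 7's speaker-following step (when several people speak at once, the loudest becomes the target the camera follows) can be sketched as below. The speaker records and volume figures are illustrative.

```python
# Sketch of claim 7: among concurrent speakers, the loudest one is set
# as the following object for the camera. Ids and volumes are invented.

def pick_following_target(speakers):
    """Return the id of the loudest speaker."""
    return max(speakers, key=lambda s: s["volume"])["id"]

speakers = [{"id": "A", "volume": 52.0}, {"id": "B", "volume": 67.5}]
target = pick_following_target(speakers)  # camera follows speaker "B"
```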
8. The method of claim 1, further comprising, after the method is performed:
acquiring search behavior data of a user;
and recommending content according to the search behavior data, the current expression type, the current scene, and the voice information of the speaker.
9. A content recommendation system, characterized in that the content recommendation system comprises:
an acquisition unit, configured to acquire voice information of a speaker;
the acquisition unit is further configured to acquire image information of the speaker, the image information comprising facial image information and surrounding environment image information;
a processing unit, configured to process the facial image information to obtain facial expression features;
a matching unit, configured to match the facial expression features with a preset expression template to obtain scores of the facial expression features;
a determining unit, configured to determine a current expression type according to the scores of the facial expression features;
the determining unit is further configured to match the surrounding environment image information with a scene model to determine a current scene;
and a recommending unit, configured to recommend content according to the current expression type, the current scene, and the voice information of the speaker.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method of any one of claims 1-8.
CN202010822238.8A 2020-08-16 2020-08-16 Content recommendation method and system Pending CN111967380A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010822238.8A CN111967380A (en) 2020-08-16 2020-08-16 Content recommendation method and system


Publications (1)

Publication Number Publication Date
CN111967380A true CN111967380A (en) 2020-11-20

Family

ID=73388182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010822238.8A Pending CN111967380A (en) 2020-08-16 2020-08-16 Content recommendation method and system

Country Status (1)

Country Link
CN (1) CN111967380A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140032359A1 (en) * 2012-07-30 2014-01-30 Infosys Limited System and method for providing intelligent recommendations
CN108509660A (en) * 2018-05-29 2018-09-07 维沃移动通信有限公司 A kind of broadcasting object recommendation method and terminal device
CN108920585A (en) * 2018-06-26 2018-11-30 深圳市赛亿科技开发有限公司 The method and device of music recommendation, computer readable storage medium
CN109919001A (en) * 2019-01-23 2019-06-21 深圳壹账通智能科技有限公司 Customer service monitoring method, device, equipment and storage medium based on Emotion identification
CN110113646A (en) * 2019-03-27 2019-08-09 深圳康佳电子科技有限公司 Intelligent interaction processing method, system and storage medium based on AI voice


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114855416A (en) * 2022-04-25 2022-08-05 青岛海尔科技有限公司 Recommendation method and device of washing program, storage medium and electronic device
CN114855416B (en) * 2022-04-25 2024-03-22 青岛海尔科技有限公司 Method and device for recommending washing program, storage medium and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination