CN106886770A - Video communication sentiment analysis auxiliary method - Google Patents
Video communication sentiment analysis auxiliary method
- Publication number
- CN106886770A (application number CN201710130178.1A)
- Authority
- CN
- China
- Prior art keywords
- analysis
- image
- face
- people
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/142—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a video communication sentiment analysis auxiliary method. The method connects to video chat software, captures video images from the chat window, uses the assistance of a network server to process and analyze the images with big data, and builds a face model to analyze the subject's micro-expression changes and behavioral dynamics, thereby inferring changes in the chat partner's emotional state. The method provided by the present invention has the advantages of assisting in the analysis of the chat partner's emotional changes and helping to improve communication efficiency.
Description
Technical field
The invention belongs to the technical field of information communication, and in particular relates to a video communication sentiment analysis auxiliary method.
Background art
With the development of network technology, video communication has become increasingly widespread, and it is used in more and more fields, including Internet chat, video conferencing, and remote teaching. Video communication shortens the distance between people; if more of the other party's emotions and psychological activity could be perceived during video communication, misunderstandings would be greatly reduced and interpersonal communication would be promoted.
Face recognition is a biometric identification technology that identifies a person based on facial feature information. A video camera or webcam collects images or a video stream containing faces, the faces are automatically detected and tracked in the images, and a series of recognition operations is then performed on the detected faces; this family of techniques is also commonly called image recognition or facial recognition.
A micro-expression is a term from psychology. People convey their inner feelings to others through facial expressions, but between different expressions, or within a single expression, the face can "leak" additional information. A micro-expression may last as little as 1/25 of a second; although such a subconscious expression may only flash by, it easily betrays the underlying mood. While the face is holding a particular expression, these extremely brief expressions can suddenly flash across it, sometimes conveying the opposite emotion. Micro-expressions pass so quickly that even an alert observer usually cannot detect them; in experiments, only about 10% of people noticed them. Compared with deliberately made expressions, micro-expressions better reveal a person's true feelings and motivations.
Performing expression analysis on top of face recognition and combining it with micro-expression theory, while exploiting a computer's high-speed capture and computing capability, makes it possible to recognize and analyze a person's micro-expression changes, and thus to judge the mood of the analyzed subject, for example whether they are happy, sad, pained, disappointed, or excited.
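As a purely illustrative sketch (not the patent's disclosed method), the idea of mapping facial feature-point changes to a coarse mood label can be approximated with a naive rule-based classifier over landmark displacements between two frames. All landmark names and thresholds below are assumptions for illustration.

```python
def classify_mood(prev_pts, curr_pts):
    """Naive rule-based mood guess from landmark displacement.

    prev_pts/curr_pts: dicts mapping landmark name -> (x, y).
    Landmark names and thresholds are illustrative assumptions,
    not values taken from the patent.
    """
    # Vertical movement of the mouth corners (negative dy = upward in
    # image coordinates, i.e. a smile-like motion).
    dy_left = curr_pts["mouth_left"][1] - prev_pts["mouth_left"][1]
    dy_right = curr_pts["mouth_right"][1] - prev_pts["mouth_right"][1]
    mouth_lift = -(dy_left + dy_right) / 2.0

    # Eyebrow movement (downward = frown-like, upward = surprise-like).
    dy_brow = curr_pts["brow_center"][1] - prev_pts["brow_center"][1]

    if mouth_lift > 2.0:
        return "happy"
    if dy_brow > 2.0:        # brows moved down
        return "displeased"
    if dy_brow < -2.0:       # brows moved up
        return "surprised"
    return "neutral"

prev = {"mouth_left": (40, 80), "mouth_right": (60, 80), "brow_center": (50, 40)}
curr = {"mouth_left": (40, 75), "mouth_right": (60, 75), "brow_center": (50, 40)}
print(classify_mood(prev, curr))  # upward mouth corners -> "happy"
```

A real system would of course use a trained model over many landmarks; this only shows the shape of the frame-to-frame comparison the description alludes to.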
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a video communication sentiment analysis auxiliary method. The method connects to video chat software, captures video images from the chat window, uses the assistance of a network server to process and analyze the images with big data, and builds a face model to analyze the subject's micro-expression changes and behavioral dynamics, thereby inferring changes in the chat partner's emotional state. The method provided by the present invention has the advantages of assisting in the analysis of the chat partner's emotional changes and helping to improve communication efficiency.
A video communication sentiment analysis auxiliary method, comprising the following steps:
Step S10, image capture: capture video frames from ordinary video communication software, and adjust the size and position of the captured region at any time according to feedback from the color analysis module;
Step S20, image color analysis: analyze the captured image information, analyze the color changes of the image, distinguish the face region from the background region, determine the position of the face, and adjust the captured region of the video frame according to the face position so that the face is centered in the captured frame;
Step S30, pixel statistics analysis: pixelate the image, then perform pixel statistics analysis on the face region of the image to identify the face in detail;
Step S40, feature point marking: combine the detailed face recognition result with a comparison against biological information, and mark the facial feature points in the image;
Step S50, face model building: build a face model of the analyzed subject from the feature points and face information, and simulate the analysis of facial feature-point changes;
Step S60, mood analysis: compare the facial changes against biological emotional-behavior information and the information in the subject's personal analysis profile to derive the subject's instantaneous mood, feed the mood analysis result back into the communication frame, and display the analysis result in the frame;
Step S70, friend profiling: call the friend list information of the communication software, combine it with the face analysis results, and create an analysis profile for each chat partner or update their personal analysis profile.
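Steps S10 and S20 above (capture a frame, separate the face region from the background by color, and re-center the capture window on the face) can be sketched as follows. The patent does not specify a color model or thresholds, so the crude RGB skin-color rule below is an assumption for illustration only.

```python
import numpy as np

def skin_mask(rgb):
    """Very crude RGB skin-color mask (illustrative thresholds only)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def face_centered_crop(rgb, crop_h, crop_w):
    """Locate the face region by color and return a crop centered on it (step S20)."""
    mask = skin_mask(rgb)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:                        # no face-colored pixels found
        cy, cx = rgb.shape[0] // 2, rgb.shape[1] // 2
    else:
        cy, cx = int(ys.mean()), int(xs.mean())
    # Clamp the crop window so it stays inside the frame.
    y0 = min(max(cy - crop_h // 2, 0), rgb.shape[0] - crop_h)
    x0 = min(max(cx - crop_w // 2, 0), rgb.shape[1] - crop_w)
    return rgb[y0:y0 + crop_h, x0:x0 + crop_w], (cy, cx)

# Synthetic 100x100 frame: blue background with a skin-toned 20x20 patch.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[..., 2] = 200                          # blue background
frame[30:50, 60:80] = (200, 120, 90)         # skin-toned "face"
crop, center = face_centered_crop(frame, 40, 40)
print(center)   # centroid of the skin patch: (39, 69)
```

The feedback loop of step S10 would then repeatedly call `face_centered_crop` on new frames, adjusting the capture window as the face moves.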
The image acquisition of step S10 can be divided into step S11, static image acquisition, and step S12, dynamic image acquisition. After step S11, step S20 color analysis is performed and the shooting angle is adjusted; after step S12, step S30 is performed and pixel statistics analysis is carried out on the face region.
The biological emotional-behavior information in step S60 includes facial micro-expression information and human action information.
The pixel statistics analysis of step S30 further includes step S31: connecting to an Internet server to assist in the pixel statistics computation on the image.
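The patent leaves the exact form of step S30's pixel statistics unspecified (step S31 merely offloads the computation to a server). As a stand-in, a normalized intensity histogram of the face region is one plausible form such statistics could take:

```python
import numpy as np

def pixel_statistics(gray_face, bins=8):
    """Pixel statistics over a grayscale face region, as in step S30.

    Returns a normalized intensity histogram. The choice of a histogram
    is an assumption; the patent does not name a specific statistic.
    """
    hist, _ = np.histogram(gray_face, bins=bins, range=(0, 256))
    return hist / hist.sum()

face = np.full((10, 10), 100, dtype=np.uint8)   # uniform dummy face region
stats = pixel_statistics(face)
print(stats)  # all mass falls in the bin covering intensity 96-128
```

In the server-assisted variant (S31), the face-region array would be sent to the server and the same statistics computed remotely; the interface for that is not disclosed in the patent.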
The analysis method further includes step S80, big data analysis and update maintenance: after step S60 is performed, the analysis result is combined with Internet data for big data analysis to reconfirm the analysis result, and the biological information database is updated and maintained according to the analysis result.
In step S60, the analysis result is displayed in the communication frame either as a bullet-screen (danmaku) overlay or in a newly created window.
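The display choice in step S60 amounts to routing the same result string to one of two sinks. A trivial sketch, with the message format and mode names chosen for illustration (the patent only names the two modes, not their implementation):

```python
def format_mood_message(name, mood, confidence):
    """Format a mood-analysis result as a short overlay message."""
    return f"[{name}] mood: {mood} ({confidence:.0%})"

def display(result, mode="barrage"):
    """Route the result string to the chosen display mode (step S60).

    'barrage' (bullet-screen overlay) and 'window' are the two modes
    named in the patent; the actual rendering is stubbed out here.
    """
    msg = format_mood_message(**result)
    if mode == "barrage":
        return ("overlay", msg)       # would scroll across the video frame
    elif mode == "window":
        return ("new_window", msg)    # would open a separate result window
    raise ValueError(f"unknown display mode: {mode}")

out = display({"name": "Alice", "mood": "happy", "confidence": 0.87})
print(out)  # ('overlay', '[Alice] mood: happy (87%)')
```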
Specific embodiments
The invention is further described below with reference to a specific embodiment.
A video communication sentiment analysis auxiliary method, comprising the following steps:
Step S10, image capture: capture video frames from ordinary video communication software, and adjust the size and position of the captured region at any time according to feedback from the color analysis module;
Step S20, image color analysis: analyze the captured image information, analyze the color changes of the image, distinguish the face region from the background region, determine the position of the face, and adjust the captured region of the video frame according to the face position so that the face is centered in the captured frame;
Step S30, pixel statistics analysis: pixelate the image, then perform pixel statistics analysis on the face region of the image to identify the face in detail;
Step S40, feature point marking: combine the detailed face recognition result with a comparison against biological information, and mark the facial feature points in the image;
Step S50, face model building: build a face model of the analyzed subject from the feature points and face information, and simulate the analysis of facial feature-point changes;
Step S60, mood analysis: compare the facial changes against biological emotional-behavior information and the information in the subject's personal analysis profile to derive the subject's instantaneous mood, feed the mood analysis result back into the communication frame, and display the analysis result in the frame;
Step S70, friend profiling: call the friend list information of the communication software, combine it with the face analysis results, and create an analysis profile for each chat partner or update their personal analysis profile.
In a preferred embodiment, the image acquisition of step S10 can be divided into step S11, static image acquisition, and step S12, dynamic image acquisition. After step S11, step S20 color analysis is performed and the shooting angle is adjusted; after step S12, step S30 is performed and pixel statistics analysis is carried out on the face region.
In a preferred embodiment, the biological emotional-behavior information in step S60 includes facial micro-expression information and human action information.
In a preferred embodiment, the pixel statistics analysis of step S30 further includes step S31: connecting to an Internet server to assist in the pixel statistics computation on the image.
In a preferred embodiment, the analysis method further includes step S80, big data analysis and update maintenance: after step S60 is performed, the analysis result is combined with Internet data for big data analysis to reconfirm the analysis result, and the biological information database is updated and maintained according to the analysis result.
In a preferred embodiment, in step S60 the analysis result is displayed in the communication frame either as a bullet-screen overlay or in a newly created window.
The embodiment described above expresses only one implementation of the present invention, and although its description is relatively specific and detailed, it should not therefore be construed as limiting the scope of the patent claims. It should be noted that a person of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (6)
1. A video communication sentiment analysis auxiliary method, characterized in that it comprises the following steps:
Step S10, image capture: capture video frames from ordinary video communication software, and adjust the size and position of the captured region at any time according to feedback from the color analysis module;
Step S20, image color analysis: analyze the captured image information, analyze the color changes of the image, distinguish the face region from the background region, determine the position of the face, and adjust the captured region of the video frame according to the face position so that the face is centered in the captured frame;
Step S30, pixel statistics analysis: pixelate the image, then perform pixel statistics analysis on the face region of the image to identify the face in detail;
Step S40, feature point marking: combine the detailed face recognition result with a comparison against biological information, and mark the facial feature points in the image;
Step S50, face model building: build a face model of the analyzed subject from the feature points and face information, and simulate the analysis of facial feature-point changes;
Step S60, mood analysis: compare the facial changes against biological emotional-behavior information and the information in the subject's personal analysis profile to derive the subject's instantaneous mood, feed the mood analysis result back into the communication frame, and display the analysis result in the frame;
Step S70, friend profiling: call the friend list information of the communication software, combine it with the face analysis results, and create an analysis profile for each chat partner or update their personal analysis profile.
2. The video communication sentiment analysis auxiliary method according to claim 1, characterized in that the image acquisition of step S10 can be divided into step S11, static image acquisition, and step S12, dynamic image acquisition; step S20 color analysis is performed after step S11 and the shooting angle is adjusted; step S30 is performed after step S12, and pixel statistics analysis is carried out on the face region.
3. The video communication sentiment analysis auxiliary method according to claim 1, characterized in that the biological emotional-behavior information in step S60 includes facial micro-expression information and human action information.
4. The video communication sentiment analysis auxiliary method according to claim 1, characterized in that the pixel statistics analysis of step S30 further includes step S31: connecting to an Internet server to assist in the pixel statistics computation on the image.
5. The video communication sentiment analysis auxiliary method according to claim 1, characterized in that the analysis method further includes step S80, big data analysis and update maintenance: after step S60 is performed, the analysis result is combined with Internet data for big data analysis to reconfirm the analysis result, and the biological information database is updated and maintained according to the analysis result.
6. The video communication sentiment analysis auxiliary method according to claim 1, characterized in that in step S60 the analysis result is displayed in the communication frame either as a bullet-screen overlay or in a newly created window.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710130178.1A CN106886770A (en) | 2017-03-07 | 2017-03-07 | Video communication sentiment analysis auxiliary method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106886770A true CN106886770A (en) | 2017-06-23 |
Family
ID=59179235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710130178.1A Pending CN106886770A (en) | 2017-03-07 | 2017-03-07 | Video communication sentiment analysis auxiliary method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106886770A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109446907A (en) * | 2018-09-26 | 2019-03-08 | 百度在线网络技术(北京)有限公司 | Video chat method, apparatus, device and computer storage medium |
CN112487904A (en) * | 2020-11-23 | 2021-03-12 | 成都尽知致远科技有限公司 | Video image processing method and system based on big data analysis |
WO2022156084A1 (en) * | 2021-01-22 | 2022-07-28 | 平安科技(深圳)有限公司 | Method for predicting behavior of target object on the basis of face and interactive text, and related device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103365867A (en) * | 2012-03-29 | 2013-10-23 | 腾讯科技(深圳)有限公司 | Method and device for emotion analysis of user evaluation |
CN103488293A (en) * | 2013-09-12 | 2014-01-01 | 北京航空航天大学 | Man-machine motion interaction system and method based on expression recognition |
CN104601870A (en) * | 2015-02-15 | 2015-05-06 | 广东欧珀移动通信有限公司 | Rotating camera shooting method and mobile terminal |
CN105847735A (en) * | 2016-03-30 | 2016-08-10 | 宁波三博电子科技有限公司 | Face recognition-based instant pop-up screen video communication method and system |
CN105959612A (en) * | 2016-04-22 | 2016-09-21 | 惠州Tcl移动通信有限公司 | Method and system for automatically correcting frame angle in mobile terminal video communication |
CN106331890A (en) * | 2015-06-24 | 2017-01-11 | 中兴通讯股份有限公司 | Processing method and device for video communication image |
- 2017-03-07: Application CN201710130178.1A filed in China; published as CN106886770A (en); status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107066983A (en) | Identity authentication method and device | |
CN106909907A (en) | Video communication sentiment analysis auxiliary system | |
CN108805009A (en) | Classroom learning state monitoring method and system based on multimodal information fusion | |
US20140341442A1 (en) | Image masks for face-related selection and processing in images | |
Lüsi et al. | Joint challenge on dominant and complementary emotion recognition using micro emotion features and head-pose estimation: Databases | |
JP2019530041A (en) | Combining the face of the source image with the target image based on the search query | |
CN111523473A (en) | Mask wearing identification method, device, equipment and readable storage medium | |
CN106886770A (en) | Video communication sentiment analysis auxiliary method | |
DE112012000853T5 (en) | Discovery, recognition and bookmarking of faces in videos | |
CN110674664A (en) | Visual attention recognition method and system, storage medium and processor | |
US20220262163A1 (en) | Method of face anti-spoofing, device, and storage medium | |
CN111523476A (en) | Mask wearing identification method, device, equipment and readable storage medium | |
CN110135282A (en) | Examinee cheating detection method based on a deep convolutional neural network model | |
CN109544523A (en) | Face image quality evaluation method and device based on multi-attribute face comparison | |
Jongerius et al. | Eye-tracking glasses in face-to-face interactions: Manual versus automated assessment of areas-of-interest | |
Mancini et al. | Computing and evaluating the body laughter index | |
CN106033539A (en) | Meeting guiding method and system based on video face recognition | |
CN106920074A (en) | Remote interview method with psychological auxiliary judgment | |
CN111444389A (en) | Conference video analysis method and system based on target detection | |
CN111666829A (en) | Multi-scene multi-subject identity behavior emotion recognition analysis method and intelligent supervision system | |
Chen et al. | A region group adaptive attention model for subtle expression recognition | |
CN106875534A (en) | Intelligent hospital self-service registration system | |
JP2018077766A5 (en) | ||
US11699162B2 (en) | System and method for generating a modified design creative | |
CN106919924A (en) | Mood analysis system based on face recognition | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170623 |