CN104239304B - Data processing method, apparatus and device - Google Patents

Data processing method, apparatus and device

Info

Publication number
CN104239304B
CN104239304B CN201310226296.4A CN201310226296A
Authority
CN
China
Prior art keywords
communicatee
audio
value
expression
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310226296.4A
Other languages
Chinese (zh)
Other versions
CN104239304A (en)
Inventor
周洪凯
朱建平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201310226296.4A priority Critical patent/CN104239304B/en
Publication of CN104239304A publication Critical patent/CN104239304A/en
Application granted granted Critical
Publication of CN104239304B publication Critical patent/CN104239304B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/63 - Querying
    • G06F 16/635 - Filtering based on additional data, e.g. user or group profiles
    • G06F 16/636 - Filtering based on additional data, e.g. user or group profiles by using biological or physiological data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G06F 16/5838 - Retrieval characterised by using metadata automatically derived from the content, using colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Physiology (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a data processing method, apparatus and device. The method includes: collecting the performance information of a communicatee in an exchange scene; matching the collected performance information against all of the preset performance information pre-stored in a database to obtain the preset performance information that matches the collected performance information; obtaining the preset psychological condition type corresponding to the matched preset performance information, and thereby the psychological condition type of the communicatee; and outputting the psychological condition type of the communicatee. In this way, the invention enables the user to adjust communication measures in time according to the psychological condition of the communicatee, so as to improve the communicatee's satisfaction during the communication process.

Description

Data processing method, apparatus and device
Technical field
The present invention relates to a data processing method, apparatus and device.
Background technology
With the development of communication technology, the quality of voice and video communication has improved considerably. At present most enterprises, especially service enterprises, set up remote service centers and provide remote services to customers by voice or video.
Currently, during remote service the service staff judge the other party's mood manually and adjust the service strategy according to that mood. Among human emotions, a micro-expression is an emotion that is difficult to notice by human attention alone; in general a micro-expression flashes by so quickly that neither the person making the expression nor the observer is consciously aware of it. However, micro-expressions better reflect a person's true feelings and motivations. If the service staff could perceive the other party's micro-expressions in time and appease the other party once an unpleasant mood arises, inadequate service could be prevented.
Moreover, having to pay constant attention to the other party's emotional changes while providing the service is a heavy burden for the service staff. When serving several people at the same time, or when the service lasts a long time, the service staff easily become fatigued, overlook changes in the other party's mood and fail to adjust the service strategy in time, which leads to inadequate service and affects service quality.
Invention content
The technical problem mainly solved by the present invention is to provide a data processing method, apparatus and device which allow communication measures to be adjusted in time, so as to improve the satisfaction of the communicatee.
A first aspect of the present invention provides a data processing method. The method includes: collecting the performance information of a communicatee in an exchange scene, the performance information including mood action information and/or voice information of the communicatee during the communication process; matching the collected performance information against all of the preset performance information pre-stored in a database to obtain the preset performance information that matches the collected performance information, where the database also stores, for each piece of preset performance information, a preset psychological condition type predefined when that preset performance information was stored, the preset psychological condition type being a type of psychological condition; obtaining the preset psychological condition type corresponding to the matched preset performance information, thereby obtaining the psychological condition type of the communicatee; and outputting the psychological condition type of the communicatee.
In a first possible implementation of the first aspect, the step of matching the collected performance information against all of the preset performance information pre-stored in the database to obtain the preset performance information that matches the collected performance information includes: comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds a set threshold, thereby obtaining the preset performance information that matches the collected performance information.
With reference to the first possible implementation of the first aspect, in a second possible implementation the step of collecting the performance information of the communicatee in the exchange scene includes: collecting image data of the communicatee in the exchange scene to obtain the mood action information of the communicatee during the communication process, thereby obtaining the performance information of the communicatee. The step of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds the set threshold includes: obtaining feature data of predetermined facial points of the communicatee from the image data of the communicatee; obtaining a facial expression image of the communicatee from the feature data of the predetermined facial points, and comparing the facial expression image of the communicatee with all of the predetermined facial expression images pre-stored in the database to obtain the predetermined facial expression image whose similarity to the facial expression image of the communicatee exceeds the set threshold, thereby obtaining the predetermined facial expression image that matches the facial expression image of the communicatee. The step of obtaining the preset psychological condition type corresponding to the matched preset performance information includes: obtaining the preset expression type that was defined for the predetermined facial expression image when the predetermined facial expression image matching the facial expression image of the communicatee was stored in the database, thereby obtaining the psychological condition type of the communicatee.
With reference to the second possible implementation of the first aspect, in a third possible implementation the image data is current image data of the communicatee, and the preset expression type corresponds to the current preset expression type of the communicatee.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation, after the step of querying the predefined expression value table pre-stored in the database to obtain the current expression value corresponding to the current preset expression type and the history expression values corresponding to the history preset expression types, the method further includes: obtaining an expression value change curve of the communicatee within the set period according to the current expression value and the history expression values. The step of outputting the psychological condition type of the communicatee includes: in addition to outputting the psychological condition type of the communicatee, also outputting the expression value change curve of the communicatee.
With reference to the first possible implementation of the first aspect, in a sixth possible implementation the step of collecting the performance information of the communicatee in the exchange scene includes: collecting the voice information of the communicatee in the exchange scene, thereby obtaining the performance information of the communicatee.
With reference to the sixth possible implementation of the first aspect, in a seventh possible implementation the step of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds the set threshold includes: obtaining an audio waveform diagram of the communicatee from the voice information of the communicatee; and comparing the audio waveform diagram of the communicatee with all of the predetermined audio waveform diagrams pre-stored in the database to obtain the predetermined audio waveform diagram whose similarity to the audio waveform diagram of the communicatee exceeds the set threshold, thereby obtaining the predetermined audio waveform diagram that matches the audio waveform diagram of the communicatee. The step of obtaining the preset psychological condition type corresponding to the matched preset performance information includes: obtaining the preset audio type that was defined for the predetermined audio waveform diagram when the predetermined audio waveform diagram matching the audio waveform diagram of the communicatee was stored in the database, thereby obtaining the psychological condition type of the communicatee.
The 7th kind of 6th kind of possible realization method of the possible realization method of with reference to first aspect the first may be realized In mode, in the 8th kind of possible realization method, the voice messaging is the current speech information of communicatee, described default Audio types correspond to the current preset audio types of communicatee.
The 7th kind of 6th kind of possible realization method of the possible realization method of with reference to first aspect the first may be realized In mode, in the 9th kind of possible realization method, the voice messaging includes the current speech information and at least one of communicatee A history voice messaging, the preset audio type include the current preset audio types and institute corresponding to the current speech information State the history preset audio type corresponding to history voice messaging;The step of the voice messaging of communicatee in acquisition exchange scene Suddenly include:At least the two of communicatee in at least two time points acquisition exchange scene of interval setting duration within the setting period A voice messaging, no more than the duration in the setting period, at least two time point includes at least described the setting duration The end time for setting the period is described work as in the voice messaging of the collected communicatee of end time in the setting period Preceding voice messaging is the history voice messaging in the voice messaging of remaining time point collected communicatee;Institute State the predetermined audio oscillogram for obtaining match in the storage voice messaging with the communicatee in the database When to defined in the predetermined audio oscillogram the step of preset audio type after, including:In the database in advance Inquiry obtains present video value corresponding with the current preset audio types, Yi Jiyu in the predefined audio value table of storage The corresponding history audio value of history preset audio type;According to formula: Calculate the general audio value within the setting period, wherein the K is the general audio value, and L is in the setting period Interior all time points, Sc are the present video value, ShiFor in i-th of time point collected history voice I-th of history audio value corresponding to information, b are the present video value in two kinds of the present video value and history audio value Shared proportion in type audio value;Inquiry obtains corresponding with the general audio value in the predefined audio value table Preset audio type, and then obtain the psychological condition type of the communicatee.
The 7th kind of 6th kind of possible realization method of the possible realization method of with reference to first aspect the first may be realized It is described to deposit in advance in the database in the tenth kind of possible realization method in 9th kind of possible realization method of mode Inquiry obtains present video value corresponding with the current preset audio types in the predefined audio value table of storage, and with institute After the step of stating history preset audio type corresponding history audio value, including:According to the present video value and history Audio value obtains the audio value change curve of the communicatee within the setting period;The psychology of the output communicatee The step of Status Type includes:Other than exporting the psychological condition type of the communicatee, the communicatee is also exported Audio value change curve.
With reference to the sixth possible implementation of the first aspect, in an eleventh possible implementation the step of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds the set threshold includes: extracting the key words in the communicatee's speech from the voice information by means of speech recognition; and comparing the key words in the communicatee's speech with all of the predetermined key words pre-stored in the database to obtain the predetermined key words whose similarity to the key words in the communicatee's speech exceeds the set threshold, thereby obtaining the predetermined key words that match the key words in the communicatee's speech. The step of obtaining the preset psychological condition type corresponding to the matched preset performance information includes: obtaining the word-meaning types that were defined for the predetermined key words when the predetermined key words matching the key words in the communicatee's speech were stored in the database, and counting, for each obtained word-meaning type, the number of matched predetermined key words; querying the predefined word-score table pre-stored in the database for the word score corresponding to each obtained word-meaning type; calculating a comprehensive word score from the word scores obtained by the query and the number of matched predetermined key words counted for each obtained word-meaning type; and querying the predefined word-score table for the word-meaning type corresponding to the comprehensive word score, thereby obtaining the psychological condition type of the communicatee.
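This eleventh implementation can be read as: recognise the speech, pick out the key words, bucket them by word-meaning type, and combine per-type word scores into a single comprehensive word score. The sketch below illustrates that bookkeeping under the assumption that the comprehensive score is a count-weighted average of the per-type scores; the patent text does not spell out the exact combination rule, so that choice, the example tables and the names used are assumptions.

from collections import Counter
from typing import Dict, Iterable

def comprehensive_word_score(matched_keywords: Iterable[str],
                             meaning_type_of: Dict[str, str],
                             word_score_of: Dict[str, float]) -> float:
    """Count matched predetermined key words per word-meaning type and
    combine the per-type word scores, weighted by those counts."""
    counts = Counter(meaning_type_of[w] for w in matched_keywords
                     if w in meaning_type_of)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(word_score_of[t] * n for t, n in counts.items()) / total

# Example with assumed tables: "slow" belongs to a "dissatisfied" meaning type.
meaning_type_of = {"great": "satisfied", "thanks": "satisfied", "slow": "dissatisfied"}
word_score_of = {"satisfied": 8.0, "dissatisfied": 2.0}
score = comprehensive_word_score(["thanks", "slow", "slow"], meaning_type_of, word_score_of)

The word-meaning type corresponding to the comprehensive score would then be obtained from the same predefined word-score table, for example by a nearest-value lookup as in the other branches.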
A second aspect of the present invention provides a data processing apparatus, including: an acquisition module, configured to collect the performance information of a communicatee in an exchange scene, the performance information including mood action information and/or voice information of the communicatee during the communication process; a first acquisition module, configured to match the collected performance information against all of the preset performance information pre-stored in a database to obtain the preset performance information that matches the collected performance information, where the database also stores, for each piece of preset performance information, a preset psychological condition type predefined when that preset performance information was stored, the preset psychological condition type being a type of psychological condition; a second acquisition module, configured to obtain the preset psychological condition type corresponding to the matched preset performance information, so as to obtain the psychological condition type of the communicatee; and an output module, configured to output the psychological condition type of the communicatee.
In conjunction with the first possible realization method of second aspect, first acquisition module is specifically used for collect The performance information compared with pre-stored all default performance informations in database, to obtain and described acquire The similarity of the performance information arrived is more than the default performance information of given threshold, and then obtains and believe with the performance collected The matched default performance information of manner of breathing.
In conjunction with the first possible realization method of second aspect, in second of possible realization method, the acquisition mould Block includes image acquisition units, and described image collecting unit is used to acquire the image data of communicatee in exchange scene to obtain Mood action message of the communicatee in communication process, and then obtain the performance information of the communicatee;Described One acquisition module includes:First acquisition unit, the face for obtaining communicatee from the image data of the communicatee The characteristic of predetermined point;Second acquisition unit, for obtaining communicatee's according to the characteristic of the facial predetermined point Facial expression image, and by pre-stored all predetermined facial expression images in the facial expression image of the communicatee and the database into Row comparison, to obtain the default facial expression image for being more than given threshold with the similarity of the facial expression image of the communicatee, in turn Obtain the predetermined facial expression image to match with the facial expression image of the communicatee;Second acquisition module includes that third obtains Unit, the third acquiring unit are used to obtain in the database in the storage facial expression image with the communicatee To presetting expression type defined in the predetermined facial expression image when the predetermined facial expression image to match, and then obtain the exchange The psychological condition type of object.
In conjunction with second of possible realization method of the first possible realization method of second aspect, in the third possible reality In existing mode, the described image data that described image collecting unit is acquired are the current image date of communicatee, described the The default expression type acquired in two acquisition modules corresponds to the current preset expression type of communicatee.
The 4th kind in conjunction with second of possible realization method of the first possible realization method of second aspect may realize Further include third acquisition module in the 5th kind of possible realization method in mode, for according to the current expression value and going through History expression value obtains the expression value change curve of the communicatee within the setting period;Wherein, the output module is in addition to defeated Go out except the psychological condition type of the communicatee, also exports the expression value change curve of the communicatee.
In conjunction with the first possible realization method of second aspect, in the 6th kind of possible realization method, the acquisition mould Block includes voice collecting unit, and the voice collecting unit is used to acquire the voice messaging of communicatee in exchange scene, in turn Obtain the performance information of the communicatee.
In conjunction with the 6th kind of possible realization method of the first possible realization method of second aspect, in the 7th kind of possible reality In existing mode, first acquisition module includes:6th acquiring unit, for being obtained according to the voice messaging of the communicatee The audio volume control figure of communicatee;7th acquiring unit is used for the audio volume control figure of the communicatee and the database In pre-stored all predetermined audio oscillograms compared, with obtain it is similar to the audio volume control figure of the communicatee Degree is more than the predetermined audio oscillogram of given threshold, and then obtains and make a reservation for what the audio volume control figure of the communicatee matched Audio volume control figure;Second acquisition module includes the 8th acquiring unit, and the 8th acquiring unit is used in the database It is middle to obtain in the predetermined audio oscillogram that the storage audio volume control figure with the communicatee matches to described predetermined Preset audio type defined in audio volume control figure, and then obtain the psychological condition type of the communicatee.
The 7th kind in conjunction with the 6th kind of possible realization method of the first possible realization method of second aspect may realize In mode, in the 8th kind of possible realization method, the voice messaging acquired in the voice collecting unit is exchange pair The current speech information of elephant, the preset audio type acquired in the 8th acquiring unit correspond to the current of communicatee Preset audio type.
With reference to the seventh possible implementation of the second aspect, in a ninth possible implementation the voice information collected by the voice collecting unit includes current voice information of the communicatee and at least one piece of history voice information, and the preset audio type acquired by the eighth acquiring unit includes the current preset audio type corresponding to the current voice information and the history preset audio types corresponding to the history voice information. The voice collecting unit is specifically configured to collect at least two pieces of voice information of the communicatee in the exchange scene at at least two time points separated by a set duration within a set period, the set duration being no longer than the set period, the at least two time points including at least the end time of the set period; the voice information of the communicatee collected at the end time of the set period is the current voice information, and the voice information of the communicatee collected at the remaining time points is the history voice information. The second acquisition module further includes: a ninth acquiring unit, configured to query the predefined audio value table pre-stored in the database to obtain the current audio value corresponding to the current preset audio type and the history audio values corresponding to the history preset audio types; a second computing unit, configured to calculate the comprehensive audio value within the set period according to the formula
K = Sc · b + (1 / (L − 1)) · Σ(i=1..L−1) Shi · (1 − b),
where K is the comprehensive audio value, L is the number of time points within the set period, Sc is the current audio value, Shi is the i-th history audio value corresponding to the history voice information collected at the i-th time point, and b is the proportion of the current audio value in the two kinds of audio value (current audio value and history audio values); and a tenth acquiring unit, configured to query the predefined audio value table for the preset audio type corresponding to the comprehensive audio value, thereby obtaining the psychological condition type of the communicatee.
The 7th kind in conjunction with the 6th kind of possible realization method of the first possible realization method of second aspect may realize 9th kind of possible realization method of mode further include in the tenth kind of possible realization method:4th acquisition module is used for root The audio value change curve of the communicatee within the setting period is obtained according to the present video value and history audio value;Its In, the output module also exports the sound of the communicatee other than exporting the psychological condition type of the communicatee Frequency value change curve.
It is a kind of possible the tenth in conjunction with the 6th kind of possible realization method of the first possible realization method of second aspect In realization method, first acquisition module includes:11st acquiring unit, for utilizing speech recognition technology from the voice The crucial words in communicatee's language is extracted in information;12nd acquiring unit, being used for will be in communicatee's language Crucial words is compared with pre-stored all predetermined keyword words in the database, to obtain and the communicatee The similarity of crucial words in language is more than the predetermined keyword word of given threshold, and then obtains and communicatee's language In the predetermined keyword word that matches of crucial words;Second acquisition module includes:13rd acquiring unit, in institute It states and is obtained in database in the predetermined keyword word that the storage crucial words with communicatee's language matches To meaning of a word type defined in the predetermined keyword word, and count the matching corresponding to acquired each described meaning of a word type The quantity of the obtained predetermined keyword word;14th acquiring unit, for pre-stored predetermined in the database Adopted word accounts in score table inquiry word corresponding with each acquired described meaning of a word type and accounts for score value;15th obtains list Member, the matching that the word for being obtained according to inquiry accounts for corresponding to score value and each acquired described meaning of a word type obtain The predetermined keyword word quantity, calculate comprehensive word and account for score value;16th acquiring unit is accounted in the predefined word Inquiry accounts for the corresponding meaning of a word type of score value, and then the psychological shape of the acquisition communicatee with the comprehensive word in score table State type.
A third aspect of the present invention provides a data processing device, including a memory, a processor and an output device, the memory and the output device each being connected to the processor through a bus. The memory is configured to store the data of the data processing device. The processor is configured to collect the performance information of a communicatee in an exchange scene, the performance information including mood action information and/or voice information of the communicatee during the communication process, and to match the collected performance information against all of the preset performance information pre-stored in a database to obtain the preset performance information that matches the collected performance information, where the database also stores, for each piece of preset performance information, a preset psychological condition type predefined when that preset performance information was stored, the preset psychological condition type being a type of psychological condition. The processor is further configured to obtain the preset psychological condition type corresponding to the matched preset performance information, so as to obtain the psychological condition type of the communicatee. The output device is configured to output the psychological condition type of the communicatee.
The beneficial effects of the invention are as follows: the invention collects the performance information of the communicatee in the exchange scene, matches the collected performance information against all of the preset performance information pre-stored in the database to obtain the preset performance information that matches the collected performance information, then obtains the preset psychological condition type corresponding to the matched preset performance information and thereby the psychological condition type of the communicatee, and outputs the psychological condition type of the communicatee, so that the user can learn the psychological condition of the communicatee during the communication process, adjust communication measures in time according to that condition, and improve as far as possible the satisfaction of the communicatee during the communication process.
Description of the drawings
Fig. 1 is a flowchart of an embodiment of the data processing method of the present invention;
Fig. 2 is a flowchart, in another embodiment of the data processing method of the present invention, of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds the set threshold;
Fig. 3 is a schematic diagram of the correspondence between the expression libraries in the database and the expression types in another embodiment of the data processing method of the present invention;
Fig. 4 is a flowchart of the steps that follow the step of obtaining, in the database, the preset expression type defined for the predetermined facial expression image matching the facial expression image of the communicatee, in a further embodiment of the data processing method of the present invention;
Fig. 5 is a schematic diagram of an embodiment of the expression value change curve of the communicatee obtained in the data processing method of Fig. 4;
Fig. 6 is a flowchart, in a further embodiment of the data processing method of the present invention, of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds the set threshold;
Fig. 7 is a flowchart of the steps that follow the step of obtaining, in the database, the preset audio type defined for the predetermined audio waveform diagram matching the voice information of the communicatee, in a further embodiment of the data processing method of the present invention;
Fig. 8 is a schematic diagram of an embodiment of the audio value change curve of the communicatee obtained in the data processing method of Fig. 7;
Fig. 9 is a flowchart, in a further embodiment of the data processing method of the present invention, of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds the set threshold;
Fig. 10 is a flowchart of obtaining the preset psychological condition type corresponding to the matched preset performance information in a further embodiment of the data processing method of the present invention;
Fig. 11 is a schematic diagram of the correspondence between the word-meaning libraries in the database and the word-meaning types in another embodiment of the data processing method of the present invention;
Fig. 12 is a schematic structural diagram of an embodiment of the data processing apparatus of the present invention;
Fig. 13 is a schematic structural diagram of another embodiment of the data processing apparatus of the present invention;
Fig. 14 is a schematic structural diagram of a further embodiment of the data processing apparatus of the present invention;
Fig. 15 is a schematic structural diagram of a further embodiment of the data processing apparatus of the present invention;
Fig. 16 is a schematic structural diagram of a further embodiment of the data processing apparatus of the present invention;
Fig. 17 is a schematic structural diagram of a further embodiment of the data processing apparatus of the present invention;
Fig. 18 is a schematic structural diagram of an embodiment of the data processing device of the present invention.
Specific implementation mode
The present invention is mainly applied in exchange scenes. Various performance information of the communicatee during the communication process is obtained automatically, the emotional changes of the communicatee are analysed from this performance information to obtain the psychological condition of the communicatee, and the psychological condition of the communicatee is presented to the user, so that the user can adjust communication measures in time according to the emotional changes of the communicatee, which effectively improves the quality of the communication service.
The present invention is described in detail below with reference to the drawings and embodiments.
Referring to Fig. 1, an embodiment of the data processing method of the present invention includes the following steps:
Step S101: collect the performance information of a communicatee in an exchange scene, the performance information including mood action information and/or voice information of the communicatee during the communication process.
In this embodiment, the performance information of the communicatee in the exchange scene is collected by a remote server. The user and the communicatee may communicate in a variety of ways, for example by remote video, by remote voice, or face to face, and different communication modes correspond to different exchange scenes. The performance information of the communicatee is the mood action information and/or the voice information of the communicatee during the communication process. Mood action information refers to the action information of the various parts of the face, such as facial expression movements; most of a person's psychological activity is revealed through facial expressions. In addition, the voice is also a medium that reflects a person's inner activity; for example, the pitch of the intonation and the content of the speech can directly show a person's psychological condition. This embodiment therefore collects the mood action information and/or the voice information of the communicatee in order to obtain the psychological condition of the communicatee. Different communication modes allow different performance information of the communicatee to be collected: for example, during a voice call only the voice information of the communicatee can be obtained, whereas during a video call or a face-to-face conversation both the facial expression information and the voice information of the communicatee can be obtained, so the remote server collects the corresponding performance information according to the specific exchange scene.
Step S102: match the collected performance information against all of the preset performance information pre-stored in a database to obtain the preset performance information that matches the collected performance information; the database also stores, for each piece of preset performance information, a preset psychological condition type predefined when that preset performance information was stored, the preset psychological condition type being a type of psychological condition.
Various kinds of preset performance information are pre-stored in the database. After the remote server has collected the performance information of the communicatee, it matches the collected performance information against all of the preset performance information in the database. The matching process compares the collected performance information with all of the preset performance information in the database in order to find, among all of the preset performance information, the preset performance information whose similarity to the collected performance information exceeds a set threshold, thereby obtaining the preset performance information that matches the collected performance information. The set threshold may, for example, be 98%, i.e. any preset performance information whose similarity to the collected performance information exceeds 98% is regarded as matching the collected performance information.
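As an illustration of how steps S102 and S103 could be organised, the following minimal Python sketch matches collected performance information against stored records and resolves the matched record to its predefined psychological condition type. The record layout, the similarity function and the 98% threshold value are assumptions for illustration, not part of the patent text.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class PresetRecord:
    performance_info: object          # e.g. an expression image or audio waveform
    psychological_type: str           # predefined when the record was stored

def match_preset(collected: object,
                 database: List[PresetRecord],
                 similarity: Callable[[object, object], float],
                 threshold: float = 0.98) -> Optional[PresetRecord]:
    """Return the stored record whose similarity to the collected
    performance information exceeds the set threshold (best match)."""
    best, best_sim = None, threshold
    for record in database:
        sim = similarity(collected, record.performance_info)
        if sim > best_sim:
            best, best_sim = record, sim
    return best

def psychological_type_of(collected, database, similarity) -> Optional[str]:
    record = match_preset(collected, database, similarity)
    return record.psychological_type if record else None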
Step S103: obtain the preset psychological condition type corresponding to the matched preset performance information, so as to obtain the psychological condition type of the communicatee.
When the various kinds of preset performance information are pre-stored in the database, a corresponding psychological condition type is predefined for each piece of preset performance information according to the psychological condition it reflects. A psychological condition type is a type of a person's psychological condition, such as happiness, anger or sadness. For example, if a certain kind of voice information reflects happiness, that voice information is defined as corresponding to the happiness psychological condition type. The preset psychological condition type defined for each piece of preset performance information is stored in the database. Each preset psychological condition type may correspond to several different pieces of preset performance information, whereas each piece of preset performance information corresponds to exactly one preset psychological condition type. Using this correspondence, after the remote server has obtained the preset performance information that matches the performance information of the communicatee, it obtains the preset psychological condition type corresponding to the matched preset performance information, and thereby the psychological condition type of the communicatee.
Step S104: output the psychological condition type of the communicatee.
Different psychological condition types call for different communication measures. For the service industry or for business negotiations in particular, failing to adopt the appropriate communication measures according to the customer's psychological condition during the communication process may cause the customer's satisfaction to drop and may even lead to the loss of the customer. After obtaining the psychological condition type of the communicatee, the remote server outputs it to the user, so that the user can adjust communication measures in time and improve the satisfaction of the communicatee during the communication process as much as possible.
In this embodiment, various kinds of preset performance information are pre-stored in the database and a corresponding preset psychological condition type is predefined for each of them, so that the remote server can collect the performance information of the communicatee, match it against the preset performance information in the database, obtain the psychological condition type of the communicatee and output it to the user; the user can then adjust communication measures in time and improve the satisfaction of the communicatee during the communication process as much as possible.
In another embodiment of the data processing method of the present invention, the performance information of the communicatee collected by the remote server is the mood action information of the communicatee, specifically the facial expression information of the communicatee. Changes in a person's psychological condition are largely reflected in facial expressions, so obtaining the facial expression of the communicatee gives a more accurate picture of the psychological condition type of the communicatee.
Specifically, the step in which the remote server collects the performance information of the communicatee in the exchange scene is: collect image data of the communicatee in the exchange scene to obtain the mood action information of the communicatee during the communication process, and thereby the performance information of the communicatee. The image data collected by the remote server is the current image data of the communicatee.
The exchange scene of this embodiment may be an exchange scene in video communication mode. The user holds an online video call with the communicatee through a client such as a computer, a tablet or a mobile phone. Taking a computer as an example, in video call mode the user's computer receives the video data of the communicatee in real time over the Internet, so that the user can see the video image of the communicatee on the computer and obtain the effect of a face-to-face conversation. Video data actually consists of image frames; the remote server collects the current video data of the communicatee received by the computer and thereby obtains the current image data of the communicatee.
At this point, referring to Fig. 2, the step of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds the set threshold includes:
Step S201: obtain the feature data of the current predetermined facial points of the communicatee from the current image data of the communicatee.
After the remote server has collected the current image data of the communicatee, it uses face detection technology to extract the feature data of the current predetermined facial points of the communicatee from the current image data. Face detection is a computer technique for finding the position and size of faces in an arbitrary digital image; it can detect facial features and even fine facial details. After receiving the current image data of the communicatee, the remote server first uses face detection to check whether a facial image is present in the current image data; when a facial image is detected, the features of the predetermined points in the facial image are analysed and extracted to obtain data reflecting the features of those predetermined facial points. In this way the remote server can extract the feature data of the predetermined facial points of the communicatee from the image data.
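A rough sketch of step S201 is given below, assuming OpenCV's standard Haar-cascade face detector for the detection step. The landmark extraction is left as a hypothetical placeholder, since the patent does not specify which predetermined facial points are used or how their features are encoded.

import cv2
import numpy as np

# Standard OpenCV frontal-face Haar cascade (ships with opencv-python).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_region(current_frame: np.ndarray):
    """Step S201 (first half): detect whether a facial image is present
    in the current image data and return the face region, or None."""
    gray = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]

def extract_predetermined_point_features(face_region: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for the feature data of the predetermined
    facial points (eyes, brows, mouth corners, ...); the patent does not
    fix a concrete representation, so any landmark extractor could be plugged in."""
    raise NotImplementedError("landmark extractor is implementation-specific")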
Step S202: obtain the current facial expression image of the communicatee from the feature data of the current predetermined facial points, compare the current facial expression image with all of the predetermined facial expression images pre-stored in the database to obtain the predetermined facial expression image whose similarity to the current facial expression image of the communicatee exceeds the set threshold, and thereby obtain the predetermined facial expression image that matches the current facial expression image.
The predetermined facial points chosen by the remote server are points that can reflect the facial expression; the feature data of these predetermined facial points of the communicatee is therefore extracted, and the facial expression of the communicatee is analysed from that feature data to obtain the current facial expression image of the communicatee. In this embodiment the collected performance information of the communicatee is the image information of the communicatee, from which the corresponding facial expression image of the communicatee is obtained; the preset performance information pre-stored in the database corresponds to predetermined facial expression images, and by comparison the predetermined facial expression images whose similarity to the current facial expression image of the communicatee exceeds the set threshold are obtained.
When the various predetermined facial expression images are stored in the database, a preset expression type is predefined for each predetermined facial expression image according to the psychological condition it reflects, and the preset expression type corresponding to each predetermined facial expression image is also stored in the database. Each predetermined facial expression image corresponds to one preset expression type, and each preset expression type may correspond to several predetermined facial expression images. Specifically, referring to Fig. 3, an expression library is established in the database for each preset expression type; the various predetermined facial expression images are classified and stored in different expression libraries, which defines the expression type corresponding to each predetermined facial expression image. Each expression library corresponds to one preset expression type, and all of the predetermined facial expression images belonging to the same preset expression type are stored in the expression library of that type, i.e. all of the predetermined facial expression images stored in the expression library of a given preset expression type are defined as facial expression images of that type. For example, the expression libraries in the database may include a happiness expression library, an anger expression library, a helplessness expression library and an indifference expression library. The happiness expression library corresponds to the happiness expression type and stores the various predetermined facial expression images that reflect a happy psychological condition; all of the predetermined facial expression images stored in the happiness expression library are defined as facial expression images of the happiness expression type. The anger expression library corresponds to the anger expression type and stores the various predetermined facial expression images that reflect an angry psychological condition; all of the predetermined facial expression images stored in the anger expression library are defined as facial expression images of the anger expression type, and so on for the other types of expression library.
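To make the organisation of Fig. 3 concrete, the sketch below models the expression libraries as a simple mapping from preset expression type to the predetermined facial expression images stored under it, with a helper that resolves a matched image back to its type as described above. The in-memory container is an assumption; the patent only requires that each image be stored under exactly one type.

from typing import Dict, List, Optional

# One expression library per preset expression type (assumed in-memory layout).
ExpressionLibraries = Dict[str, List[object]]

expression_libraries: ExpressionLibraries = {
    "happiness":    [],   # predetermined images reflecting a happy state
    "anger":        [],
    "helplessness": [],
    "indifference": [],
}

def expression_type_of(matched_image, libraries: ExpressionLibraries) -> Optional[str]:
    """Return the preset expression type defined for a matched predetermined
    facial expression image, i.e. the library in which it is stored."""
    for expression_type, images in libraries.items():
        if matched_image in images:
            return expression_type
    return None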
After obtaining a predetermined facial expression image, the remote server can obtain the preset expression type corresponding to that image from the expression library in which it is stored. Therefore, after the remote server has obtained the predetermined facial expression image that matches the current facial expression image of the communicatee, it uses the expression library storing the matched predetermined facial expression image to obtain the corresponding preset expression type, and thereby the psychological condition type of the communicatee. For example, after the remote server obtains the predetermined facial expression image that matches the current facial expression image of the communicatee, it finds that this predetermined facial expression image is stored in the happiness expression library of the database, i.e. the preset expression type defined for the matched predetermined facial expression image is the happiness expression type; the current psychological condition type of the communicatee is thus the happiness type, and the adjective describing the psychological condition type, "happy", is output to the user, so that the user knows that the current psychological condition of the communicatee is happiness and can take the corresponding communication measures according to that happy psychological condition type, improving the quality of the communication.
Further, when the remote server obtains more than one predetermined facial expression image that matches the current facial expression image of the communicatee, i.e. the comparison in the database yields several qualifying predetermined facial expression images, the preset expression type corresponding to each matched predetermined facial expression image is obtained from the expression library in which it is stored. The several predetermined facial expression images may all correspond to the same preset expression type, or they may correspond to several different preset expression types; in the latter case several psychological condition types of the communicatee may be obtained, and all of the obtained psychological condition types of the communicatee are output to the user, so as to provide the user with the psychological condition information of the communicatee.
In this way, this embodiment collects the current image data of the communicatee as the performance information, obtains the current facial expression image of the communicatee from the current image data, finds among all of the predetermined facial expression images in the database the one whose similarity to the current facial expression image exceeds the set threshold, and obtains in the database the current preset expression type corresponding to the obtained predetermined facial expression image, which corresponds to the current psychological condition type of the communicatee. The psychological condition type of the communicatee is thereby obtained and output to the user, so that the user can adjust the corresponding communication measures in time according to the psychological condition of the communicatee and improve the satisfaction of the communicatee during the communication process.
In the above embodiment the remote server obtains the performance data of the communicatee by collecting only the current image data of the communicatee, and thereby the current psychological condition type of the communicatee. In another embodiment the remote server also collects history image data of the communicatee in addition to the current image data, and thereby obtains the performance information of the communicatee. Specifically, the remote server periodically collects the video data of the communicatee received by the computer in order to obtain the image data of the communicatee. Within a set period the remote server collects at least two items of image data of the communicatee in the exchange scene at at least two time points separated by a set duration, the set duration being no longer than the set period and the at least two time points including at least the end time of the set period. The image data collected by the remote server at the end time of the set period is the current image data of the communicatee, and the image data collected at the other time points is the history image data of the communicatee. At least one item of the at least two collected items of image data is current image data.
For example, suppose the set period is 10 minutes long and the start time of a certain period is 12:00, so that the end time of the period is 12:10. During these 10 minutes the remote server collects one item of image data every minute, i.e. it collects an item of image data at each of 11 time points including the start time and the end time of the period. The image data collected at the end time of the period, 12:10, is the current image data of the communicatee, and the image data collected at the ten time points 12:00, 12:01, ..., 12:09 is the history image data of the communicatee. The remote server then obtains in the database the current preset expression type corresponding to the current image data, and the history preset expression types corresponding to the history image data; the specific procedure is as described in the above embodiment and is not repeated here.
In this embodiment, referring to Fig. 4, after the step of obtaining, in the database, the preset expression type defined for the predetermined facial expression image matching the facial expression image of the communicatee, the method further includes the following steps:
Step S401: query the predefined expression value table pre-stored in the database to obtain the current expression value corresponding to the current preset expression type, and the history expression values corresponding to the history preset expression types.
An expression value is defined for each expression type. For example, the expression value of the excited expression type is defined as 10, the expression value of the happy expression type as 8, the expression value of the calm expression type as 6, and so on. A predefined expression value table is established in advance in the database to record the expression value corresponding to each expression type, with one expression value per expression type. Each time the remote server obtains an item of image data at a time point within the set period, it analyses and matches that image data to obtain the corresponding expression type, and then queries the predefined expression value table to obtain the corresponding expression value.
Step S402: Calculate the comprehensive expression value of the setting period from the current expression value and the history expression values.
After the remote server has obtained the expression value corresponding to each image data collected at every time point within the setting period, it calculates the comprehensive expression value of the setting period, and uses this comprehensive expression value to obtain the current psychological condition type of the communicatee. Specifically, the remote server uses the formula:
M = Vc × d + (1 − d) × (Vh1 + Vh2 + ... + Vh(N−1)) / (N − 1)    (1.0)

to calculate the comprehensive expression value of the setting period. Here M is the comprehensive expression value of the setting period, N is the number of time points in the setting period, Vc is the current expression value, Vhi is the i-th history expression value corresponding to the history image data collected at the i-th time point, and d is the proportion of the current expression value among the two types of expression values (current and history).
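The sketch below computes the comprehensive expression value under the weighted-average reading of formula (1.0), which is an assumption inferred from the definitions above rather than a verbatim reproduction of the patent's formula; the history values in the example are arbitrary, chosen only so that the result matches the M = 8.64 used later in the text.

```python
def comprehensive_expression_value(current_value: float,
                                   history_values: list[float],
                                   current_weight: float = 0.6) -> float:
    """Weighted combination of the current expression value and the mean of
    the history expression values collected in one setting period."""
    history_mean = sum(history_values) / len(history_values)
    return current_weight * current_value + (1.0 - current_weight) * history_mean

# Arbitrary example: current expression value 8 with ten history values from one period.
M = comprehensive_expression_value(8, [10, 10, 10, 10, 10, 10, 8, 10, 10, 8])
print(round(M, 2))  # 8.64
```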
For example, the expression values defined for the various expression types in the predefined expression value table are: excited (10), happy (8), flat (6), indifferent (4), angry (2), sad (1), where "excited" is the expression type and "(10)" is the expression value defined for it, and likewise for the others. Referring to Table 1, within a 10-minute setting period the remote server obtained one image data every minute and, from the collected image data, obtained the 11 expression values corresponding to the 11 time points:
Table 1: Expression values obtained at each time point within the 10-minute setting period
There are N = 11 time points within the 10-minute setting period. Among these 11 time points, the expression value obtained at the end time of the period, 12:10, is the current expression value, and the expression values obtained at the remaining time points are history expression values. In this embodiment, the proportion d of the current expression value among the two types of expression values (current and history) is 60%, so the proportion of the history expression values is 40%. Of course, in other embodiments the proportion of the current expression value may be a different ratio; no limitation is imposed here. In this embodiment, the comprehensive expression value obtained from formula (1.0) is M = 8.64.
Step S403: Query the predefined expression value table to obtain the preset expression type corresponding to the comprehensive expression value, and thereby obtain the current psychological condition type of the communicatee.
In the predefined expression value table, expression types and expression values correspond one to one, so given either of the two, the other can be obtained by querying the table. After obtaining the comprehensive expression value, the remote server matches it against all expression values in the predefined expression value table; the matching process queries the table for the predefined expression value that is identical or closest to the calculated comprehensive expression value, so that the predefined expression value matching the comprehensive expression value is selected from all predefined expression values, and the expression type corresponding to that matched value, i.e. the preset expression type corresponding to the comprehensive expression value, is obtained. For example, if the comprehensive expression value obtained is M = 8.64, the remote server finds that the predefined expression value closest to 8.64 in the stored predefined expression value table is 8, and the expression type corresponding to the predefined expression value 8 is the happy type, so the expression type corresponding to the comprehensive expression value is the happy expression type. In this embodiment, the expression type matched from the comprehensive expression value is taken as the current expression type of the communicatee, and since each expression type corresponds to one psychological condition type, the current psychological condition type of the communicatee is obtained. The remote server then outputs the psychological condition type of the communicatee to the user.
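A small sketch of this nearest-value matching step, reusing the illustrative table from the earlier sketch; the tie-breaking behaviour is an assumption, since the text only requires the closest or identical predefined value.

```python
EXPRESSION_VALUE_TABLE = {
    "excited": 10, "happy": 8, "flat": 6, "indifferent": 4, "angry": 2, "sad": 1,
}

def match_expression_type(comprehensive_value: float) -> str:
    """Return the preset expression type whose predefined expression value
    is closest to the given comprehensive expression value."""
    return min(EXPRESSION_VALUE_TABLE,
               key=lambda t: abs(EXPRESSION_VALUE_TABLE[t] - comprehensive_value))

print(match_expression_type(8.64))  # 'happy' (the closest predefined value is 8)
```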
A person's psychological condition usually changes gradually over time. Obtaining the current psychological condition type of the communicatee from the image data collected over a period of time therefore reflects the change in the communicatee's psychological condition more accurately, so calculating the comprehensive expression value of the setting period to obtain the current psychological condition type of the communicatee improves accuracy.
In this embodiment, after obtaining the current expression value and the history expression values of the communicatee, the remote server also obtains an expression value change curve of the communicatee from the current expression value and the history expression values. Specifically, the remote server obtains the expression value change curve from the setting period, the time points at which image data were collected, and the expression values obtained at those time points (i.e. the current expression value and the history expression values). Taking the embodiment shown in Table 1 as an example, and referring to Fig. 5, within the 10-minute setting period the remote server collects image data of the communicatee at time points one minute apart, obtains the corresponding preset expression type for each collected image data from the database, and then obtains the expression value corresponding to each time point from the predefined expression value table. From the time points and the expression values, the remote server obtains the expression value change curve of the communicatee over the 10 minutes. In this case, besides outputting the psychological condition type of the communicatee to the user, the remote server also outputs the expression value change curve to the user. The expression value change curve lets the user see more intuitively how the communicatee's psychological condition changed during the 10 minutes, so that better exchange measures can be taken to communicate with the communicatee and the communicatee's satisfaction can be improved as much as possible.
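As a rough illustration only, an expression value change curve of the kind shown in Fig. 5 can be drawn from the per-minute expression values, for instance with matplotlib; the values below are placeholders, not the ones in Table 1.

```python
import matplotlib.pyplot as plt

minutes = list(range(11))                      # 12:00 ... 12:10, one point per minute
values = [6, 6, 8, 8, 10, 8, 6, 8, 10, 8, 8]   # placeholder expression values

plt.plot(minutes, values, marker="o")
plt.xlabel("Minutes since the start of the setting period")
plt.ylabel("Expression value")
plt.title("Expression value change curve of the communicatee")
plt.show()
```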
In each of the above embodiments, the exchange scene is an exchange scene in the online video exchange mode: the remote server collects the video data of the communicatee received by the computer, i.e. the image data of the communicatee, obtains the performance information of the communicatee from it, obtains the corresponding psychological condition type of the communicatee according to that performance information, and outputs the psychological condition type of the communicatee to the user so that the user can adjust the exchange measures in time. Of course, in other embodiments the exchange scene may also be a face-to-face conversation scene; in that case the image data of the communicatee in the exchange scene can be obtained by an imaging device such as a camera, and the remote server collects and analyses the image data obtained by the camera to arrive at the corresponding exchange countermeasures.
In addition, in another embodiment of the data processing method of the present invention, the exchange scene is an exchange scene in the remote voice exchange mode, in which the user and the communicatee communicate by voice via a fixed-line telephone, a network telephone or the like. In this case, the remote server obtains the performance information of the communicatee by collecting the voice information of the communicatee. While the user and the communicatee are on a voice call, the fixed-line telephone or network telephone on the user's side continuously receives the voice content of the communicatee, and the remote server collects the voice content received by the fixed-line telephone or network telephone to obtain the voice information of the communicatee. In this embodiment, the voice information collected by the remote server is the current voice information of the communicatee. In this case, referring to Fig. 6, the step of comparing the collected performance information with all the preset performance information pre-stored in the database, so as to obtain the preset performance information whose similarity with the collected performance information exceeds the set threshold, includes:
Step S601: Obtain the current audio waveform figure of the communicatee from the communicatee's current voice information.
An audio waveform figure reflects how high or low the audio is. Voice information contains many kinds of content, such as the communicatee's pitch, speaking speed and wording, and each of them can directly or indirectly reflect the communicatee's psychological condition; for example, a fast speaking speed suggests that the communicatee may be anxious. This embodiment takes audio height (i.e. pitch) as the example and obtains the psychological condition of the communicatee from the communicatee's audio information: a higher audio suggests that the communicatee may be excited, while a flatter, more monotonous audio suggests that the communicatee may be uninterested in the topic. After collecting the current voice information of the communicatee, the remote server obtains the current audio waveform figure from the current voice information, so as to determine the audio situation of the communicatee from the current audio waveform figure.
Step S602: Compare the current audio waveform figure of the communicatee with all the predetermined audio waveform figures stored in the database, so as to obtain the predetermined audio waveform figures whose similarity with the current audio waveform figure of the communicatee exceeds the set threshold, and thereby obtain the current predetermined audio waveform figure that matches the current audio waveform figure of the communicatee.
In this embodiment, various predetermined audio waveform figures are stored in the database in advance. The remote server compares the current audio waveform figure of the communicatee with all the predetermined audio waveform figures in the database one by one and, from the comparison results, obtains the predetermined audio waveform figure whose similarity with the current audio waveform figure of the communicatee exceeds the set threshold; the predetermined audio waveform figure so obtained is the current predetermined audio waveform figure that matches the current audio waveform figure of the communicatee.
After the remote server obtains the current predetermined audio waveform figure that matches the current audio waveform figure of the communicatee, it obtains from the database the preset audio type defined for that current predetermined audio waveform figure when it was stored, and thereby obtains the psychological condition type of the communicatee. When the various predetermined audio waveform figures are pre-stored in the database, an audio type is predefined for each predetermined audio waveform figure according to the audio it embodies. Specifically, all predetermined audio waveform figures are stored by category in different audio libraries, which defines the audio type of each figure. Various types of audio libraries are established in the database in advance, each library type corresponding to one predetermined audio type; that is, each type of audio library stores the predetermined audio waveform figures belonging to the same predetermined audio type, and all the predetermined audio waveform figures stored in a library of that type are defined as audio waveform figures of that predetermined audio type. For example, the audio libraries in the database include a high audio library, a medium-high audio library, a medium audio library and so on; the high audio library stores the predetermined audio waveform figures that embody high audio, so all the predetermined audio waveform figures stored in the high audio library are defined as audio waveform figures of the high audio type, and likewise for the other libraries. Each predetermined audio waveform figure is stored in only one type of audio library, while a library of one type can store many predetermined audio waveform figures, so each predetermined audio waveform figure corresponds to one predetermined audio type, and one predetermined audio type corresponds to many predetermined audio waveform figures. From the predetermined audio type defined for each predetermined audio waveform figure, the remote server can obtain from the database the predetermined audio type corresponding to the current predetermined audio waveform figure it has obtained, and thereby obtain the psychological condition type of the communicatee.
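A rough sketch, with made-up data structures, of matching a collected waveform against typed libraries of predetermined waveforms; cosine similarity merely stands in for whatever similarity measure an actual implementation would use.

```python
import numpy as np

# Hypothetical audio libraries: each library type maps to the predetermined
# waveforms (fixed-length arrays) defined as belonging to that audio type.
AUDIO_LIBRARIES = {
    "high audio":   [np.sin(np.linspace(0, 40, 200))],
    "medium audio": [np.sin(np.linspace(0, 20, 200))],
    "low audio":    [np.sin(np.linspace(0, 8, 200))],
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_audio_type(current_waveform: np.ndarray, threshold: float = 0.8):
    """Audio type of the most similar predetermined waveform, or None when
    no similarity exceeds the set threshold."""
    best_type, best_score = None, threshold
    for audio_type, waveforms in AUDIO_LIBRARIES.items():
        for waveform in waveforms:
            score = cosine_similarity(current_waveform, waveform)
            if score > best_score:
                best_type, best_score = audio_type, score
    return best_type

print(match_audio_type(np.sin(np.linspace(0, 40, 200))))  # 'high audio'
```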
For example, after the remote server obtains the current predetermined audio waveform figure that matches the current audio waveform figure of the communicatee, it finds by querying the database that this current predetermined audio waveform figure is stored in the high audio library, i.e. the audio type defined for the matched current predetermined audio waveform figure is the high audio type. This audio type also reflects the current psychological condition type of the communicatee: it indicates that the communicatee's pitch is high and the communicatee's mood may be rather agitated. In this embodiment, therefore, the remote server obtains the current audio type that matches the current audio waveform figure of the communicatee in order to obtain the current psychological condition type of the communicatee, and then outputs to the user the word that embodies the audio type, e.g. "high audio", so that the user knows that the communicatee's current audio is high audio, that the communicatee is currently speaking in a high pitch and may be rather agitated, and thereby knows the communicatee's current psychological condition type and can adjust the exchange measures accordingly in time, so as to improve the communicatee's satisfaction in the exchange as much as possible.
In the above embodiment, the remote server obtains the current performance information of the communicatee by directly collecting the communicatee's current voice information. In another embodiment, the voice data includes not only the current voice information of the communicatee but also history voice information of the communicatee. Specifically, the remote server periodically collects the voice information received by the telephone or network telephone. Within a setting period, the remote server collects at least two voice information of the communicatee in the exchange scene at no fewer than two time points separated by a setting duration; the setting duration is not longer than the setting period, and the time points include at least the end time of the setting period. The voice information collected at the end time of the setting period is the current voice information of the communicatee, and the voice information collected at the other time points within the setting period is the history voice information of the communicatee. At least one of the collected voice information is therefore current voice information.
For example, suppose the setting period is 10 minutes and some period starts at 12:00, so that the period ends at 12:10. During these 10 minutes the remote server collects one voice information every minute, i.e. it collects voice information at 11 time points including the start time and the end time of the period. The voice information collected at the end time 12:10 is the current voice information of the communicatee, while the voice information collected at the ten time points 12:00, 12:01, ..., 12:09 is the history voice information of the communicatee. The remote server then obtains the current preset audio type corresponding to the current voice information from the current voice information, and the history preset audio type corresponding to each history voice information from the history voice information; the specific process can follow the above embodiment and is not repeated here.
In this embodiment, referring to Fig. 7, after the step of obtaining the preset audio type defined for the predetermined audio waveform figure that matches the voice information of the communicatee stored in the database, the method further includes the following steps:
Step S701: Query the predefined audio value table pre-stored in the database to obtain the current audio value corresponding to the current preset audio type, and the history audio value corresponding to each history preset audio type.
One audio value is defined for each preset audio type. For example, the audio value of the high audio type is defined as 10, that of the medium-high audio type as 8, that of the medium audio type as 7, and so on. A predefined audio value table is established in the database in advance; it records the audio value corresponding to each preset audio type, one audio value per preset audio type. Each time the remote server obtains one voice information at a time point within the setting period, it analyses and matches that voice information to obtain the corresponding preset audio type, and then queries the predefined audio value table with the obtained preset audio type to obtain the corresponding audio value.
Step S702: Calculate the comprehensive audio value of the setting period from the current audio value and the history audio values.
After the remote server has obtained the audio value corresponding to each voice information collected at every time point within the setting period, it calculates the comprehensive audio value of the setting period, and uses this comprehensive audio value to obtain the current psychological condition type of the communicatee. Specifically, the remote server uses the formula:
K = Sc × b + (1 − b) × (Sh1 + Sh2 + ... + Sh(L−1)) / (L − 1)    (2.0)

to calculate the comprehensive audio value of the setting period. Here K is the comprehensive audio value of the setting period, L is the number of time points in the setting period, Sc is the current audio value, Shi is the i-th history audio value corresponding to the history voice information collected at the i-th time point, and b is the proportion of the current audio value among the two types of audio values (current and history).
For example, the audio values defined for the various audio types in the predefined audio value table are: high audio (10), medium-high audio (8), medium audio (7), low audio (4), none (2), where "high audio" is the audio type and "(10)" is the audio value defined for the high audio type, and likewise for the others. Referring to Table 2, within a 10-minute setting period the remote server obtained one voice information every minute and, from the collected voice information, obtained the 11 audio values corresponding to the 11 time points:
Table 2: Audio values obtained at each time point within the 10-minute setting period
There are L = 11 time points within the 10-minute setting period. Among these 11 time points, the audio value obtained at the end time of the period, 12:10, is the current audio value, and the audio values obtained at the remaining time points are history audio values. In this embodiment, the proportion b of the current audio value among the two types of audio values (current and history) is 60%, so the proportion of the history audio values is 40%. Of course, in other embodiments the proportion of the current audio value may be a different ratio; no limitation is imposed here. In this embodiment, the comprehensive audio value obtained from formula (2.0) is K = 8.2.
Step S703: Query the predefined audio value table to obtain the preset audio type corresponding to the comprehensive audio value, and thereby obtain the psychological condition type of the communicatee.
In the predefined audio value table, preset audio types and audio values correspond one to one, so given either of the two, the other can be obtained by querying the table. After obtaining the comprehensive audio value, the remote server matches it against all audio values in the predefined audio value table; the matching process queries the table for the predefined audio value that is identical or closest to the calculated comprehensive audio value, so that the predefined audio value corresponding to the comprehensive audio value is selected from all predefined audio values, and the preset audio type corresponding to that matched value, i.e. the preset audio type corresponding to the comprehensive audio value, is obtained. For example, if the comprehensive audio value obtained is K = 8.2, the remote server finds that the predefined audio value closest to 8.2 in the stored predefined audio value table is 8, and the preset audio type corresponding to the predefined audio value 8 is medium-high audio, so the preset audio type corresponding to the comprehensive audio value is medium-high audio. In this embodiment, the preset audio type matched from the comprehensive audio value is taken as the current preset audio type of the communicatee, and the current psychological condition type of the communicatee is obtained from the communicatee's current preset audio type. The remote server then outputs this psychological condition type to the user. Obtaining the current psychological condition type of the communicatee from the voice information collected over a period of time reflects the communicatee's psychological condition more accurately.
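Purely as an illustration, and assuming table contents that mirror the example above, the sketch below chains steps S701 to S703: per-time-point preset audio types are turned into audio values, combined into the comprehensive audio value under the same weighted-average reading as formula (2.0), and matched back to the nearest predefined audio value; the per-minute types are invented so that K reproduces the 8.2 of the example.

```python
AUDIO_VALUE_TABLE = {
    "high audio": 10, "medium-high audio": 8, "medium audio": 7, "low audio": 4, "none": 2,
}

def comprehensive_audio_value(current: float, history: list[float], b: float = 0.6) -> float:
    """Weighted combination of the current audio value and the mean of the history audio values."""
    return b * current + (1.0 - b) * sum(history) / len(history)

def nearest_audio_type(value: float) -> str:
    """Preset audio type whose predefined audio value is closest to the given value."""
    return min(AUDIO_VALUE_TABLE, key=lambda t: abs(AUDIO_VALUE_TABLE[t] - value))

# Invented per-minute preset audio types for one 10-minute setting period.
history_types = ["high audio"] * 5 + ["medium audio"] * 5
current_type = "medium-high audio"
K = round(comprehensive_audio_value(AUDIO_VALUE_TABLE[current_type],
                                    [AUDIO_VALUE_TABLE[t] for t in history_types]), 2)
print(K, nearest_audio_type(K))  # 8.2 medium-high audio
```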
In this embodiment, after obtaining the current audio value and the history audio values of the communicatee, the remote server also obtains an audio value change curve of the communicatee from the current audio value and the history audio values. Specifically, the remote server obtains the audio value change curve from the setting period, the time points at which voice information was collected, and the audio values obtained at those time points (i.e. the current audio value and the history audio values). Taking the embodiment shown in Table 2 as an example, and referring to Fig. 8, within the 10-minute setting period the remote server collects voice information of the communicatee at time points one minute apart, queries the database with each collected voice information to obtain the corresponding preset audio type, and thereby obtains the audio value corresponding to each time point. From the time points and the audio values, the remote server obtains the audio value change curve of the communicatee over the 10 minutes. In this case, besides outputting the psychological condition type of the communicatee to the user, the remote server also outputs the audio value change curve to the user. The audio value change curve lets the user see more intuitively how the communicatee's psychological condition changed during the 10 minutes, so that better exchange measures can be taken to communicate with the communicatee.
In another embodiment of the data processing of the present invention, the remote server obtains the voice information of the communicatee in order to obtain the performance information of the communicatee. In this embodiment, the remote server still periodically collects the voice information received by the telephone or network telephone. Within a setting period, the remote server collects at least two voice information of the communicatee in the exchange scene at no fewer than two time points separated by a setting duration; the collected voice information includes the current voice information of the communicatee and at least one history voice information. The specific collection process can follow the above embodiment and is not repeated here.
The remote server obtains, from the collected current voice information and history voice information, the word meanings in the communicatee's speech, so as to obtain the psychological condition type of the communicatee. Specifically, referring to Fig. 9, the step of comparing the collected performance information with all the preset performance information pre-stored in the database, so as to obtain the preset performance information whose similarity with the collected performance information exceeds the set threshold, specifically includes:
Step S901: Use speech recognition technology to extract the key words in the communicatee's speech from the voice information.
Speech recognition technology (Automatic Speech Recognition, ASR) can recognise the vocabulary content in speech. After the remote server collects a voice information at a time point within the setting period, it uses speech recognition to extract from the collected voice information the key words in the communicatee's speech; the extracted key words are words related to specific emotions, i.e. key words that reflect a specific psychological condition.
Step S902: Compare the key words in the communicatee's speech with all the predetermined key words pre-stored in the database, so as to obtain the predetermined key words whose similarity with the key words in the communicatee's speech exceeds the set threshold, and thereby obtain the predetermined key words that match the key words in the communicatee's speech.
In this embodiment, various common predetermined key words are stored in the database in advance. After obtaining the key words in the communicatee's speech, the remote server compares them with the predetermined key words stored in the database and, from the comparison results, obtains from all the predetermined key words those whose similarity with the key words in the communicatee's speech exceeds the set threshold; the predetermined key words so obtained are the predetermined key words that match the key words in the communicatee's speech.
In this case, referring to Fig. 10, the step in which the remote server obtains the preset psychological condition type corresponding to the matched preset performance information includes:
Step S1001: Obtain from the database the word-meaning type defined for each predetermined key word that matches the key words in the communicatee's speech when it was stored, and count the number of matched predetermined key words corresponding to each obtained word-meaning type.
When the various predetermined key words are stored in the database, they are stored by category, and a word-meaning type is predefined for each predetermined key word according to the psychological condition it embodies; each word-meaning type embodies one psychological condition type. Specifically, referring to Fig. 11, various types of word-meaning libraries are established in the database, each library type corresponding to one word-meaning type; each type of word-meaning library stores the various predetermined key words of that word-meaning type, i.e. all the predetermined key words stored in a library of that type are defined as key words of that word-meaning type. For example, the database includes a happy-word library, an angry-word library, a polite-word library and so on. The happy-word library stores the various common predetermined key words that embody a happy psychological condition, i.e. all the predetermined key words stored in the happy-word library are defined as predetermined key words of the happy word-meaning type; the angry-word library stores the various common predetermined key words that embody an angry psychological condition, i.e. all the predetermined key words stored in the angry-word library are defined as predetermined key words of the angry word-meaning type; and likewise for the others. Each predetermined key word corresponds to one word-meaning type, and each word-meaning type can correspond to many predetermined key words. From the word-meaning type defined for each predetermined key word, the remote server, after obtaining the predetermined key words that match the key words in the communicatee's speech, can obtain from the database the word-meaning type corresponding to each matched predetermined key word. In addition, when the remote server extracts the key words in the communicatee's speech from the communicatee's voice information, several key words may be extracted, and different key words may correspond to the same word-meaning type; therefore, after obtaining the word-meaning type corresponding to each predetermined key word, the remote server also counts the number of matched predetermined key words corresponding to each obtained word-meaning type.
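An illustrative sketch, with made-up word lists, of matching extracted key words against typed word-meaning libraries and counting the matches per word-meaning type; simple exact matching stands in for the similarity comparison described above.

```python
from collections import Counter

# Hypothetical word-meaning libraries: word-meaning type -> predetermined key words.
WORD_MEANING_LIBRARIES = {
    "polite": {"please", "thanks", "appreciate", "sorry"},
    "angry": {"ridiculous", "unacceptable", "terrible"},
    "happy": {"great", "wonderful", "glad"},
}

def count_matches_per_type(extracted_keywords: list[str]) -> Counter:
    """Count, for each word-meaning type, how many extracted key words match
    a predetermined key word stored under that type."""
    counts = Counter()
    for word in extracted_keywords:
        for meaning_type, keywords in WORD_MEANING_LIBRARIES.items():
            if word in keywords:
                counts[meaning_type] += 1
    return counts

print(count_matches_per_type(["thanks", "please", "ridiculous", "glad"]))
# Counter({'polite': 2, 'angry': 1, 'happy': 1})
```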
Step S1002: Query the predefined word score table pre-stored in the database to obtain the word score corresponding to each obtained word-meaning type.
One word score is defined for each word-meaning type; for example, the word score defined for the polite word-meaning type is 5, and the word score defined for the angry word-meaning type is −5. A predefined word score table is established in the database in advance; it records the word score corresponding to each word-meaning type. Each word-meaning type corresponds to one word score, so either of the two can be obtained from the other by querying the predefined word score table.
Step S1003: Calculate the comprehensive word score from the word scores obtained by the query and the number of matched predetermined key words corresponding to each obtained word-meaning type.
When the remote server extracts several key words of the communicatee, more than one word-meaning type may be matched, and each word-meaning type may correspond to several key words. The remote server obtains the psychological condition type of the communicatee by calculating the comprehensive word score. Specifically, the remote server uses the formula:

Y = Q1 × P1 × c1 + Q2 × P2 × c2 + ... + QX × PX × cX

to calculate the comprehensive word score. Here Y is the comprehensive word score, X is the number of word-meaning types, Qi is the word score corresponding to the i-th word-meaning type, Pi is the number of matched predetermined key words corresponding to the i-th word-meaning type, and ci is the proportion of the i-th word-meaning type among all the word-meaning types.
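A minimal sketch of the comprehensive word score under the sum-of-products reading above; it reproduces the worked example in the next paragraph (7 polite words at score 5 and 1 angry word at score −5, each type weighted 50%, giving 15).

```python
def comprehensive_word_score(per_type: list[tuple[float, int, float]]) -> float:
    """Sum of (word score Q_i) * (matched key word count P_i) * (type proportion c_i)."""
    return sum(q * p * c for q, p, c in per_type)

# One (word score, matched key word count, proportion) tuple per word-meaning type.
print(comprehensive_word_score([(5, 7, 0.5), (-5, 1, 0.5)]))  # 15.0
```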
For example, the remote server extracts 8 key words from the communicatee's speech, obtains by comparison the predetermined key words corresponding to those 8 key words, and then obtains from the database the word-meaning type corresponding to each matched predetermined key word; the word-meaning types obtained are the polite word-meaning type and the angry word-meaning type. Of the matched predetermined key words, 7 belong to the polite word-meaning type and 1 belongs to the angry word-meaning type. Assuming the proportion of each word-meaning type is 50%, the remote server queries the predefined word score table and finds that the word score corresponding to the polite word-meaning type is 5 and the word score corresponding to the angry word-meaning type is −5, so the comprehensive word score calculated by the remote server is Y = 5 × 7 × 50% + (−5) × 1 × 50% = 15.
Step S1004: Query the predefined word score table to obtain the word-meaning type corresponding to the comprehensive word score, and thereby obtain the psychological condition type of the communicatee.
After calculating the comprehensive word score, the remote server queries the word-meaning type corresponding to the comprehensive word score so as to obtain the psychological condition type of the communicatee, and outputs the psychological condition type of the communicatee to the user, so that the user can take corresponding exchange measures according to the communicatee's psychological condition and improve the communicatee's satisfaction in the exchange. For example, the remote server queries the predefined word score table for the word-meaning type corresponding to the comprehensive word score of 15, finds that it is the happy word-meaning type, and so the obtained psychological condition type is a happy state of mind. The remote server then outputs to the user the adjective that indicates the psychological condition type, e.g. "happy", so that the user knows from this information that the psychological condition type of the communicatee is "happy" and can take corresponding exchange measures.
In another embodiment of the data processing of the present invention, when the exchange mode between the user and the communicatee is the video exchange mode or the face-to-face conversation mode, the remote server can also obtain the image data and the voice information of the communicatee at the same time, so as to obtain the performance information of the communicatee. In this case the remote server obtains the comprehensive expression value corresponding to the image data and the comprehensive audio value corresponding to the voice information, and combines the comprehensive expression value and the comprehensive audio value in a certain proportion to obtain a comprehensive mood value, which indicates the comprehensive emotional state of the communicatee. For example, if the proportion of the comprehensive expression value among the two is 60% and the proportion of the comprehensive audio value is 40%, the comprehensive mood value is calculated as: comprehensive mood value = comprehensive expression value × 60% + comprehensive audio value × 40%. A comprehensive mood value table is established in the database in advance; it records the correspondence between comprehensive mood values and comprehensive mood types, one comprehensive mood value being predefined for each comprehensive mood type. The comprehensive mood types in the comprehensive mood value table are, for example, a happy mood, an angry mood and so on, and each comprehensive mood type corresponds to one psychological condition type of the communicatee. Therefore, after obtaining the comprehensive mood value, the remote server queries the comprehensive mood value table for the comprehensive mood type corresponding to that comprehensive mood value, thereby obtains the psychological condition type of the communicatee, and outputs the obtained psychological condition type of the communicatee to the user.
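A compact sketch of this combination step, assuming the 60/40 split mentioned above and a nearest-value lookup in an entirely hypothetical comprehensive mood value table.

```python
COMPREHENSIVE_MOOD_TABLE = {   # hypothetical comprehensive mood value table
    "happy mood": 8,
    "calm mood": 6,
    "angry mood": 2,
}

def comprehensive_mood_value(expression_value: float, audio_value: float,
                             expression_weight: float = 0.6) -> float:
    """Combine the comprehensive expression value and the comprehensive audio value."""
    return expression_weight * expression_value + (1.0 - expression_weight) * audio_value

def mood_type(value: float) -> str:
    """Comprehensive mood type whose predefined value is closest to the given value."""
    return min(COMPREHENSIVE_MOOD_TABLE,
               key=lambda t: abs(COMPREHENSIVE_MOOD_TABLE[t] - value))

v = comprehensive_mood_value(8.64, 8.2)
print(round(v, 2), mood_type(v))  # 8.46 happy mood
```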
Of course, the remote server can also obtain the comprehensive expression value from the image data and both the comprehensive audio value and the comprehensive word score from the voice information, then combine the comprehensive expression value, the comprehensive audio value and the comprehensive word score in a certain proportion to obtain the comprehensive mood value, match the comprehensive mood value to a comprehensive mood type, and thereby obtain the psychological condition type of the communicatee. The specific process is similar to the above embodiment and is not repeated here.
In addition, in other implementations, when the exchange mode between the user and the communicatee is the remote voice exchange mode, the remote server cannot obtain the image data of the communicatee. In that case the remote server can collect the voice information of the communicatee, obtain from it both the comprehensive audio value and the comprehensive word score of the communicatee, combine the two in a certain proportion to obtain the comprehensive mood value, query for the comprehensive mood type matching the comprehensive mood value, and thereby obtain the psychological condition type of the communicatee.
Referring to Fig. 12, in an embodiment of the data processing device of the present invention, the data processing device may be a remote server and includes an acquisition module 121, a first acquisition module 122, a second acquisition module 123 and an output module 124. The acquisition module 121 collects the performance information of the communicatee in the exchange scene; the performance information includes the communicatee's emotional action information and/or voice information during the exchange. The first acquisition module 122 matches the performance information collected by the acquisition module 121 against all the preset performance information pre-stored in the database, so as to obtain the preset performance information that matches the collected performance information. Specifically, the first acquisition module 122 compares the collected performance information with all the preset performance information in the database, so as to obtain the preset performance information whose similarity with the collected performance information exceeds the set threshold, i.e. the preset performance information that matches the collected performance information. The database also stores, for each preset performance information, the preset psychological condition type defined for it. The second acquisition module 123 obtains the preset psychological condition type corresponding to the matched preset performance information, so as to obtain the psychological condition type of the communicatee, and the output module 124 outputs the psychological condition type of the communicatee obtained by the second acquisition module 123.
The performance information of the communicatee is the communicatee's emotional action information and/or voice information during the exchange. Emotional action information refers to the action information of each part of the face, such as facial expression actions; a person's psychological activity is largely displayed through facial expressions. Sound is likewise a medium that reflects a person's inner activity: pitch and speech content can intuitively show a person's psychological condition. The exchange modes between the user and the communicatee are varied, e.g. the remote video exchange mode, the remote voice exchange mode or the face-to-face conversation mode, and different exchange modes correspond to different exchange scenes. For different exchange scenes, the performance information that the acquisition module 121 can collect is not exactly the same: in the video call mode the acquisition module 121 can collect performance information such as the communicatee's expression and voice, while in the pure voice call mode it can only collect the communicatee's voice information; the performance information collected by the acquisition module 121 therefore depends on the specific exchange scene. Various predetermined performance information is pre-stored in the database, and the first acquisition module 122 obtains from the database the predetermined performance information that matches the performance information obtained by the acquisition module 121. In addition, when the various predetermined performance information is stored in the database, a preset psychological condition type is defined for each predetermined performance information, so the second acquisition module 123 can obtain from the database the preset psychological condition type corresponding to the predetermined performance information obtained by the first acquisition module 122. The output module 124 outputs the psychological condition type obtained by the second acquisition module 123 to the user, so that the user knows the psychological condition type of the communicatee and can take corresponding exchange measures to improve the communicatee's satisfaction in the exchange.
Referring to Fig. 13, in another embodiment of the data processing device of the present invention, the acquisition module 131 includes an image acquisition unit 1311. The image acquisition unit 1311 collects the image data of the communicatee in the exchange scene to obtain the communicatee's emotional action information during the exchange, and thereby obtains the performance information of the communicatee. In this embodiment the exchange scene may be an exchange scene in the remote video call mode: the video communication device on the user's side receives the video data of the communicatee, and the image acquisition unit 1311 collects the video data received by the video communication device to obtain the image data of the communicatee. The exchange scene may also be an exchange scene in the face-to-face conversation mode, in which case the image data of the communicatee can be obtained by a camera device, and the image acquisition unit 1311 collects the image data obtained by the camera device to obtain the image data of the communicatee. The image data collected by the image acquisition unit 1311 is the current image data of the communicatee, and the data processing device obtains the current psychological condition type of the communicatee from the communicatee's current image data.
Specifically, the first acquisition module 132 includes a first acquiring unit 1321 and a second acquiring unit 1322. The first acquiring unit 1321 obtains the feature data of the current facial predetermined points of the communicatee from the communicatee's current image data. The second acquiring unit 1322 obtains the communicatee's current facial expression image from the feature data of the current facial predetermined points obtained by the first acquiring unit 1321, compares the obtained current facial expression image with all the predetermined facial expression images pre-stored in the database, so as to obtain the preset facial expression images whose similarity with the communicatee's current facial expression image exceeds the set threshold, and thereby obtains the current predetermined facial expression image that matches the communicatee's current facial expression image. The second acquisition module 133 includes a third acquiring unit 1331, which obtains from the database the current preset expression type defined for the current preset facial expression image obtained by the second acquiring unit 1322 when it was stored, and thereby obtains the current psychological condition type of the communicatee. Various expression libraries are established in the database; when the various predetermined facial expression images are stored, they are classified into the various expression libraries, which defines a preset expression type for each predetermined facial expression image, i.e. all the predetermined facial expression images stored under one type of expression library are defined as facial expression images of that type; each expression library type corresponds to one preset expression type, and each preset expression type corresponds to one psychological condition type. The third acquiring unit 1331 queries the database to find which type of expression library the current predetermined facial expression image obtained by the second acquiring unit 1322 is stored in; for example, if it is stored in the happy expression library, the preset expression type defined for that predetermined facial expression image is the happy expression type, so the current psychological condition type of the communicatee is the happy psychological condition type, and the psychological condition type of the communicatee output to the user by the output module 134 is "happy".
The data processing device of this embodiment obtains the current image data of the communicatee, obtains the current psychological condition type of the communicatee from the current image data, and outputs the communicatee's current psychological condition type to the user, so that the user adjusts the exchange measures in time according to the communicatee's psychological condition and the communicatee's satisfaction in the exchange is improved as far as possible.
Referring to Fig. 14, in another embodiment of the data processing device of the present invention, the image data collected by the image acquisition unit 1411 includes the current image data of the communicatee and at least one history image data. Specifically, within a setting period, the image acquisition unit 1411 collects at least two image data of the communicatee in the exchange scene at no fewer than two time points separated by a setting duration, where the setting duration is not longer than the setting period and the time points include at least the end time of the setting period. The image data of the communicatee collected by the image acquisition unit 1411 at the end time of the setting period is the current image data, and the image data of the communicatee collected at the remaining time points is history image data. In this case, the preset expression types obtained by the third acquiring unit 1431 include the current preset expression type corresponding to the current image data and the history preset expression types corresponding to the history image data.
Referring to Fig. 15, in another embodiment of the data processing device of the present invention, the acquisition module 151 includes a voice acquisition unit 1511. The voice acquisition unit 1511 collects the voice information of the communicatee in the exchange scene and thereby obtains the performance information of the communicatee. In this embodiment, the voice information collected by the voice acquisition unit 1511 is the current voice information of the communicatee.
The first acquisition module 152 includes a sixth acquiring unit 1521 and a seventh acquiring unit 1522. The sixth acquiring unit 1521 obtains the current audio waveform figure of the communicatee from the communicatee's current voice information obtained by the voice acquisition unit 1511. The seventh acquiring unit 1522 compares the communicatee's current audio waveform figure with all the predetermined audio waveform figures pre-stored in the database, so as to obtain the predetermined audio waveform figures whose similarity with the communicatee's current audio waveform figure exceeds the set threshold, and thereby obtains the predetermined audio waveform figure that matches the communicatee's current audio waveform figure. The second acquisition module 153 includes an eighth acquiring unit 1531, which obtains from the database the preset audio type defined for the predetermined audio waveform figure obtained by the seventh acquiring unit 1522 when it was stored, and thereby obtains the psychological condition type of the communicatee.
Referring to Fig. 16, in another embodiment of the data processing device of the present invention, the voice information collected by the voice acquisition unit 1611 of the acquisition module 161 includes the current voice information of the communicatee and at least one history voice information. Specifically, within a setting period, the voice acquisition unit 1611 collects at least two voice information of the communicatee in the exchange scene at no fewer than two time points separated by a setting duration; the setting duration is not longer than the setting period, and the time points include at least the end time of the setting period. The voice information collected by the voice acquisition unit 1611 at the end time of the setting period is the current voice information of the communicatee, and the voice information collected at the other time points is the history voice information of the communicatee. In this case, the preset audio types obtained by the eighth acquiring unit 1631 include the current preset audio type corresponding to the current voice information and the history preset audio types corresponding to the history voice information. Besides the eighth acquiring unit 1631, the second acquisition module 163 further includes a ninth acquiring unit 1632, a second computing unit 1633 and a tenth acquiring unit 1634. The ninth acquiring unit 1632 queries the predefined audio value table pre-stored in the database to obtain the current audio value corresponding to the current preset audio type and the history audio value corresponding to each history preset audio type. The second computing unit 1633 calculates the comprehensive audio value of the setting period according to formula (2.0) above, where K is the comprehensive audio value of the setting period, L is the number of time points in the setting period, Sc is the current audio value, Shi is the i-th history audio value corresponding to the history voice information collected at the i-th time point, and b is the proportion of the current audio value among the two types of audio values (current and history). After the second computing unit 1633 calculates the comprehensive audio value, the tenth acquiring unit 1634 queries the predefined audio value table to obtain the preset audio type corresponding to the comprehensive audio value, and thereby obtains the psychological condition type of the communicatee. The output module 164 outputs the psychological condition type of the communicatee to the user.
In addition, this embodiment further includes a fourth acquisition module 165 for obtaining the audio value change curve of the communicatee within the setting period from the current audio value and the history audio values. In this case, besides outputting the psychological condition type of the communicatee, the output module 164 also outputs the audio value change curve of the communicatee to the user, so that the user can know the change in the communicatee's psychological condition more intuitively.
Referring to Fig. 17, in another embodiment of the data processing device of the present invention, the voice information collected by the voice acquisition unit 1711 of the acquisition module 171 includes the current voice information of the communicatee and at least one history voice information. The first acquisition module 172 includes an eleventh acquiring unit 1721 and a twelfth acquiring unit 1722, and the second acquisition module 173 includes a thirteenth acquiring unit 1731, a fourteenth acquiring unit 1732, a fifteenth acquiring unit 1733 and a sixteenth acquiring unit 1734.
In this embodiment, the psychological condition type of the communicatee is obtained by obtaining the key words in the communicatee's speech. Specifically, the eleventh acquiring unit 1721 uses speech recognition technology to extract from all the voice information the key words in the communicatee's speech, these key words being key words that indicate a specific emotion. The twelfth acquiring unit 1722 compares the key words in the communicatee's speech obtained by the eleventh acquiring unit 1721 with all the predetermined key words stored in the database, so as to obtain the predetermined key words whose similarity with the key words in the communicatee's speech exceeds the set threshold, and thereby obtains the predetermined key words that match the key words in the communicatee's speech. The thirteenth acquiring unit 1731 obtains from the database the word-meaning type defined for each predetermined key word that matches the key words in the communicatee's speech when it was stored, and counts the number of matched predetermined key words corresponding to each obtained word-meaning type. The fourteenth acquiring unit 1732 queries the predefined word score table pre-stored in the database to obtain the word score corresponding to each obtained word-meaning type. The fifteenth acquiring unit 1733 calculates the comprehensive word score from the word scores obtained by the query and the number of matched predetermined key words corresponding to each obtained word-meaning type. The sixteenth acquiring unit 1734 queries the predefined word score table to obtain the word-meaning type corresponding to the comprehensive word score, and thereby obtains the psychological condition type of the communicatee. The output module 174 outputs the psychological condition type of the communicatee to the user.
In addition, in other embodiments of the data processing device of the present invention, the acquisition module may include both an image acquisition unit and a voice acquisition unit, so that the acquisition module can obtain the image data and the voice information of the communicatee at the same time. The data processing device compares, queries and calculates on the collected image data and voice information to obtain the comprehensive expression value corresponding to the image data and the comprehensive audio value and comprehensive word score corresponding to the voice information, combines the comprehensive expression value, the comprehensive audio value and the comprehensive word score in a certain proportion to obtain the comprehensive mood value, matches the comprehensive mood value to the corresponding comprehensive mood type, and thereby obtains the psychological condition type of the communicatee. Of course, the data processing device can also obtain the comprehensive mood value from any two of the comprehensive expression value, the comprehensive audio value and the comprehensive word score, and thereby obtain the psychological condition type of the communicatee.
Referring to Fig. 18, in an embodiment of the data processing device of the present invention, the data processing device includes a memory 181, a processor 182 and an output device 183. The memory 181 and the output device 183 are each connected to the processor 182 through a bus 184.
The memory 181 stores the data of the data processing device. The processor 182 collects the performance information of the communicatee in the exchange scene, the performance information including the communicatee's emotional action information and/or voice information during the exchange, and matches the collected performance information against all the preset performance information pre-stored in the database, so as to obtain the preset performance information that matches the collected performance information. The database also stores, for each preset performance information stored in it, a predefined preset psychological condition type; the preset psychological condition type is a type of psychological condition. The processor 182 further obtains the preset psychological condition type corresponding to the matched preset performance information, so as to obtain the psychological condition type of the communicatee. The output device 183 outputs the psychological condition type of the communicatee obtained by the processor 182.
By the data processing equipment of present embodiment, it can obtain communicatee's according to the performance information of communicatee Psychological condition type, and the psychological condition type of communicatee is exported to user, allow the user to the heart according to communicatee Reason state adjusts exchange measure in time, to improve satisfaction of the communicatee in communication process as far as possible.
Mode the above is only the implementation of the present invention is not intended to limit the scope of the invention, every to utilize this Equivalent structure or equivalent flow shift made by description of the invention and accompanying drawing content, it is relevant to be applied directly or indirectly in other Technical field is included within the scope of the present invention.

Claims (7)

1. A data processing method, characterized by comprising:
acquiring, by a remote server, at least two items of image data of a communicatee in a communication scene at at least two time points separated by a set interval within a set period, the set interval being no longer than the duration of the set period, and the at least two time points including at least the end time of the set period, wherein the image data of the communicatee collected at the end time of the set period is current image data, and the image data of the communicatee collected at the remaining time points is history image data;
obtaining characteristic data of predetermined facial points of the communicatee from the image data of the communicatee;
obtaining a facial expression image of the communicatee according to the characteristic data of the predetermined facial points, and comparing the facial expression image of the communicatee with all of the predetermined facial expression images pre-stored in a database to obtain the predetermined facial expression images whose similarity to the facial expression image of the communicatee exceeds a set threshold, thereby obtaining the predetermined facial expression image that matches the facial expression image of the communicatee;
obtaining, from the database, the preset expression type that was defined for the matching predetermined facial expression image when it was stored, and looking up, in a predefined expression value table pre-stored in the database, the current expression value corresponding to the current preset expression type and the history expression values corresponding to the history preset expression types;
calculating a comprehensive expression value within the set period;
looking up, in the predefined expression value table, the preset expression type corresponding to the comprehensive expression value, thereby obtaining the psychological state type of the communicatee;
and/or
collecting performance information of the communicatee in the communication scene, namely:
collecting at least two items of voice information of the communicatee in the communication scene at at least two time points separated by the set interval within the set period, the set interval being no longer than the duration of the set period, and the at least two time points including at least the end time of the set period, wherein the voice information of the communicatee collected at the end time of the set period is current voice information, and the voice information of the communicatee collected at the remaining time points is history voice information;
comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds a set threshold, namely:
obtaining an audio waveform diagram of the communicatee according to the voice information of the communicatee;
comparing the audio waveform diagram of the communicatee with all of the predetermined audio waveform diagrams pre-stored in the database to obtain the predetermined audio waveform diagrams whose similarity to the audio waveform diagram of the communicatee exceeds a set threshold, thereby obtaining the predetermined audio waveform diagram that matches the audio waveform diagram of the communicatee;
obtaining the preset psychological state type corresponding to the matched preset performance information, namely:
obtaining, from the database, the preset audio type that was defined for the matching predetermined audio waveform diagram when it was stored, and looking up, in a predefined audio value table pre-stored in the database, the current audio value corresponding to the current preset audio type and the history audio values corresponding to the history preset audio types;
calculating a comprehensive audio value within the set period;
looking up, in the predefined audio value table, the preset audio type corresponding to the comprehensive audio value, thereby obtaining the psychological state type of the communicatee;
outputting the psychological state type of the communicatee.
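The compare-then-look-up pattern that claim 1 applies to both facial expression images and audio waveform diagrams can be sketched generically. In the sketch below the similarity measure, the template store and the value table are placeholders chosen for illustration, not the data structures of the claimed method.

```python
import math

SET_THRESHOLD = 0.9

# Placeholder template store: each predetermined waveform/expression sample
# is stored together with its preset type (e.g. a preset audio type).
TEMPLATES = {
    "calm_voice":     ([0.1, 0.2, 0.1, 0.0], "calm"),
    "agitated_voice": ([0.9, -0.8, 0.7, -0.9], "agitated"),
}

# Placeholder predefined value table keyed by preset type.
VALUE_TABLE = {"calm": 80, "agitated": 30}


def cosine_similarity(a, b):
    # One possible similarity measure between a collected sample and a template.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def match_and_look_up(sample):
    """Return (similarity, preset type, predefined value) of the best template
    whose similarity to the collected sample exceeds the set threshold."""
    best = None
    for template, preset_type in TEMPLATES.values():
        s = cosine_similarity(sample, template)
        if s > SET_THRESHOLD and (best is None or s > best[0]):
            best = (s, preset_type, VALUE_TABLE[preset_type])
    return best  # None if no template exceeds the threshold


print(match_and_look_up([0.8, -0.7, 0.6, -0.8]))  # matches the "agitated" template
```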
2. The method according to claim 1, characterized in that
the step of calculating the comprehensive expression value within the set period comprises:
calculating the comprehensive expression value M within the set period according to a formula in which N is the number of time points within the set period, Vc is the current expression value, Vhi is the i-th history expression value corresponding to the history image data collected at the i-th time point, and d is the proportion of the current expression value among the two types of expression value, namely the current expression value and the history expression values.
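The formula of claim 2 is embedded as an image in the source and does not survive the text extraction. Based only on the variable definitions above, one plausible reconstruction is a weighted average of the current expression value and the mean of the history expression values; this is a guess at the intended form, not the formula of record.

```latex
% Plausible reconstruction only; the actual formula is an image in the source.
M = d \cdot V_c + (1 - d) \cdot \frac{1}{N - 1} \sum_{i=1}^{N-1} V_{h_i}
```

In this reading the N−1 history time points are averaged and the current time point is weighted by d, which is consistent with d being described as the share of the current expression value relative to the two types of expression value.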
3. The method according to claim 1, characterized in that
after the step of looking up, in the predefined expression value table pre-stored in the database, the current expression value corresponding to the current preset expression type and the history expression values corresponding to the history preset expression types, the method comprises:
obtaining an expression value variation curve of the communicatee within the set period according to the current expression value and the history expression values;
and the step of outputting the psychological state type of the communicatee comprises:
outputting the expression value variation curve of the communicatee in addition to the psychological state type of the communicatee.
4. The method according to claim 1, characterized in that
the step of calculating the comprehensive audio value within the set period comprises:
calculating the comprehensive audio value K within the set period according to a formula in which L is the number of time points within the set period, Sc is the current audio value, Shi is the i-th history audio value corresponding to the history voice information collected at the i-th time point, and b is the proportion of the current audio value among the two types of audio value, namely the current audio value and the history audio values.
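Claim 4 mirrors claim 2, with K, L, Sc, Shi and the proportion b playing the roles of M, N, Vc, Vhi and d; its formula is likewise an image in the source. Under the same weighted-average assumption as above, both comprehensive values can be computed by one small helper. This is a sketch of the assumed form, not the claimed formula.

```python
def comprehensive_value(current: float, history: list[float], weight: float) -> float:
    """Weighted average of the current value and the mean of the history values.

    `weight` is the proportion given to the current value (d in claim 2,
    b in claim 4); the remaining proportion is shared by the history values.
    """
    if not history:
        return current
    return weight * current + (1 - weight) * (sum(history) / len(history))


# Comprehensive expression value M and comprehensive audio value K,
# with illustrative numbers only.
M = comprehensive_value(current=75, history=[60, 65, 70], weight=0.5)  # 70.0
K = comprehensive_value(current=40, history=[55, 50], weight=0.6)      # 45.0
print(M, K)
```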
5. The method according to claim 1, characterized in that
after the step of looking up, in the predefined audio value table pre-stored in the database, the current audio value corresponding to the current preset audio type and the history audio values corresponding to the history preset audio types, the method comprises:
obtaining an audio value variation curve of the communicatee within the set period according to the current audio value and the history audio values;
and the step of outputting the psychological state type of the communicatee comprises:
outputting the audio value variation curve of the communicatee in addition to the psychological state type of the communicatee.
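Claims 3 and 5 add, on top of the state type, a variation curve built from the per-time-point values. The sketch below assembles such a curve as (time point, value) pairs; the data layout is an assumption made for illustration and applies equally to expression values and audio values.

```python
def value_variation_curve(time_points: list[str], history: list[float], current: float):
    """Pair each time point in the set period with its expression or audio
    value; the last time point carries the current value."""
    values = history + [current]
    if len(time_points) != len(values):
        raise ValueError("one value is expected per time point")
    return list(zip(time_points, values))


curve = value_variation_curve(["t1", "t2", "t3"], history=[60, 65], current=75)
print(curve)  # [('t1', 60), ('t2', 65), ('t3', 75)]
```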
6. The method according to claim 1, characterized in that
the step of comparing the collected performance information with all of the preset performance information pre-stored in the database to obtain the preset performance information whose similarity to the collected performance information exceeds a set threshold comprises:
extracting, using speech recognition technology, the key words in the communicatee's speech from the voice information;
comparing the key words in the communicatee's speech with all of the predetermined key words pre-stored in the database to obtain the predetermined key words whose similarity to the key words in the communicatee's speech exceeds a set threshold, thereby obtaining the predetermined key words that match the key words in the communicatee's speech;
and the step of obtaining the preset psychological state type corresponding to the matched preset performance information comprises:
obtaining, from the database, the word-meaning types that were defined for the matching predetermined key words when they were stored, and counting the number of matched predetermined key words corresponding to each obtained word-meaning type;
looking up, in a predefined word-score table pre-stored in the database, the word score corresponding to each obtained word-meaning type;
calculating a comprehensive word score according to the word scores obtained by the lookup and the number of matched predetermined key words corresponding to each obtained word-meaning type;
looking up, in the predefined word-score table, the word-meaning type corresponding to the comprehensive word score, thereby obtaining the psychological state type of the communicatee.
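The aggregation in claim 6 counts matched key words per word-meaning type, weights each type by its word score, and maps the combined score back to a type. The sketch below uses an invented word-score table and a simple count-weighted mean; the actual combination rule and the reverse-lookup bands are not spelled out in the claim and are assumptions here.

```python
from collections import Counter

# Invented word-score table: word-meaning type -> word score, plus
# illustrative score bands used for the reverse lookup.
WORD_SCORES = {"positive": 90, "neutral": 50, "negative": 10}
SCORE_BANDS = [(70, "positive"), (30, "neutral"), (0, "negative")]


def comprehensive_word_score(matched_types: list[str]) -> float:
    """Count-weighted mean of the word scores of the matched key words."""
    counts = Counter(matched_types)
    total = sum(counts.values())
    return sum(WORD_SCORES[t] * n for t, n in counts.items()) / total


def word_meaning_type(score: float) -> str:
    # Reverse lookup: map the comprehensive word score back to a type.
    for lower_bound, meaning_type in SCORE_BANDS:
        if score >= lower_bound:
            return meaning_type
    return SCORE_BANDS[-1][1]


matched = ["positive", "positive", "negative"]   # from the keyword matching step
score = comprehensive_word_score(matched)        # (90*2 + 10*1) / 3 ≈ 63.3
print(round(score, 1), word_meaning_type(score)) # 63.3 neutral
```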
7. A data processing apparatus, characterized by comprising a memory, a processor and an output device, the memory and the output device each being connected to the processor via a bus;
the memory is configured to store data of the data processing apparatus;
the processor is configured to: acquire, by a remote server, at least two items of image data of a communicatee in a communication scene at at least two time points separated by a set interval within a set period, the set interval being no longer than the duration of the set period, and the at least two time points including at least the end time of the set period, wherein the image data of the communicatee collected at the end time of the set period is current image data, and the image data of the communicatee collected at the remaining time points is history image data;
obtain characteristic data of predetermined facial points of the communicatee from the image data of the communicatee;
obtain a facial expression image of the communicatee according to the characteristic data of the predetermined facial points, and compare the facial expression image of the communicatee with all of the predetermined facial expression images pre-stored in a database to obtain the predetermined facial expression images whose similarity to the facial expression image of the communicatee exceeds a set threshold, thereby obtaining the predetermined facial expression image that matches the facial expression image of the communicatee;
obtain, from the database, the preset expression type that was defined for the matching predetermined facial expression image when it was stored, and look up, in a predefined expression value table pre-stored in the database, the current expression value corresponding to the current preset expression type and the history expression values corresponding to the history preset expression types;
calculate a comprehensive expression value within the set period;
look up, in the predefined expression value table, the preset expression type corresponding to the comprehensive expression value, thereby obtaining the psychological state type of the communicatee;
and/or
collect at least two items of voice information of the communicatee in the communication scene at at least two time points separated by the set interval within the set period, the set interval being no longer than the duration of the set period, and the at least two time points including at least the end time of the set period, wherein the voice information of the communicatee collected at the end time of the set period is current voice information, and the voice information of the communicatee collected at the remaining time points is history voice information;
obtain an audio waveform diagram of the communicatee according to the voice information of the communicatee;
compare the audio waveform diagram of the communicatee with all of the predetermined audio waveform diagrams pre-stored in the database to obtain the predetermined audio waveform diagrams whose similarity to the audio waveform diagram of the communicatee exceeds a set threshold, thereby obtaining the predetermined audio waveform diagram that matches the audio waveform diagram of the communicatee;
obtain, from the database, the preset audio type that was defined for the matching predetermined audio waveform diagram when it was stored, and look up, in a predefined audio value table pre-stored in the database, the current audio value corresponding to the current preset audio type and the history audio values corresponding to the history preset audio types;
calculate a comprehensive audio value within the set period;
look up, in the predefined audio value table, the preset audio type corresponding to the comprehensive audio value, thereby obtaining the psychological state type of the communicatee;
the output device is configured to output the psychological state type of the communicatee.
CN201310226296.4A 2013-06-07 2013-06-07 A kind of method, apparatus and equipment of data processing Active CN104239304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310226296.4A CN104239304B (en) 2013-06-07 2013-06-07 A kind of method, apparatus and equipment of data processing

Publications (2)

Publication Number Publication Date
CN104239304A CN104239304A (en) 2014-12-24
CN104239304B true CN104239304B (en) 2018-08-21

Family

ID=52227397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310226296.4A Active CN104239304B (en) 2013-06-07 2013-06-07 A kind of method, apparatus and equipment of data processing

Country Status (1)

Country Link
CN (1) CN104239304B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850335B (en) 2015-05-28 2018-01-23 瞬联软件科技(北京)有限公司 Expression curve generation method based on phonetic entry
CN105205756A (en) * 2015-09-15 2015-12-30 广东小天才科技有限公司 Behavior monitoring method and system
CN106933863B (en) * 2015-12-30 2019-04-19 华为技术有限公司 Data clearing method and device
CN106926258B (en) * 2015-12-31 2022-06-03 深圳光启合众科技有限公司 Robot emotion control method and device
CN105979140A (en) * 2016-06-03 2016-09-28 北京奇虎科技有限公司 Image generation device and image generation method
CN107590147A (en) * 2016-07-07 2018-01-16 深圳市珍爱网信息技术有限公司 A kind of method and device according to exchange atmosphere matching background music
CN107609567A (en) * 2016-07-12 2018-01-19 李晨翱 Shopping Guide's behavioural information processing method, apparatus and system
CN108573697B (en) * 2017-03-10 2021-06-01 北京搜狗科技发展有限公司 Language model updating method, device and equipment
CN107092664B (en) * 2017-03-30 2020-04-28 华为技术有限公司 Content interpretation method and device
CN108595406B (en) * 2018-01-04 2022-05-17 广东小天才科技有限公司 User state reminding method and device, electronic equipment and storage medium
CN108491074B (en) * 2018-03-09 2021-07-09 Oppo广东移动通信有限公司 Electronic device, exercise assisting method and related product
CN109215762A (en) * 2018-08-09 2019-01-15 上海常仁信息科技有限公司 A kind of user psychology inference system and method
CN109346108B (en) * 2018-11-28 2022-07-12 广东小天才科技有限公司 Operation checking method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216876A (en) * 2008-11-19 2011-10-12 英默森公司 Method and apparatus for generating mood-based haptic feedback

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007207153A (en) * 2006-02-06 2007-08-16 Sony Corp Communication terminal, information providing system, server device, information providing method, and information providing program

Also Published As

Publication number Publication date
CN104239304A (en) 2014-12-24

Similar Documents

Publication Publication Date Title
CN104239304B (en) A kind of method, apparatus and equipment of data processing
CN103024521A (en) Program screening method, program screening system and television with program screening system
EP1895745B1 (en) Method and communication system for continuous recording of data from the environment
KR101528086B1 (en) System and method for providing conference information
CN110505491A (en) A kind of processing method of live streaming, device, electronic equipment and storage medium
CN109858702A (en) Client upgrades prediction technique, device, equipment and the readable storage medium storing program for executing complained
CN109243444A (en) Voice interactive method, equipment and computer readable storage medium
CN105244042B (en) A kind of speech emotional interactive device and method based on finite-state automata
CN105704425A (en) Conferencing system and method for controlling the conferencing system
CN110475155A (en) Live video temperature state identification method, device, equipment and readable medium
CN110377761A (en) A kind of method and device enhancing video tastes
CN106024015A (en) Call center agent monitoring method and system
CN110139062A (en) A kind of creation method, device and the terminal device of video conference record
CN104410973B (en) A kind of fraudulent call recognition methods of playback and system
CN109671438A (en) It is a kind of to provide the device and method of ancillary service using voice
CN107105322A (en) A kind of multimedia intelligent pushes robot and method for pushing
CN111970471B (en) Conference participant scoring method, device, equipment and medium based on video conference
CN110522462A (en) The multi-modal intelligent trial system of one kind and method
CN106919989A (en) Room online booking method and apparatus based on Internet of Things open platform
CN109697556A (en) Evaluate method, system and the intelligent terminal of effect of meeting
CN115460031A (en) Intelligent sound control supervision system and method based on Internet of things
CN105701686A (en) Voiceprint advertisement implementation method and device
CN107622300A (en) The cognitive Decision method and system of multi-modal virtual robot
CN109119077A (en) A kind of robot voice interactive system
CN113783709A (en) Conference system-based participant monitoring and processing method and device and intelligent terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant