CN113099043A - Customer service control method, apparatus and computer-readable storage medium - Google Patents


Info

Publication number
CN113099043A
CN113099043A
Authority
CN
China
Prior art keywords
emotional state
value
sub
voice information
customer service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911333027.1A
Other languages
Chinese (zh)
Inventor
朱明英
李舒婷
刘智琼
伍运珍
俞科峰
华竹轩
张金娟
陈娜
池炜成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN201911333027.1A priority Critical patent/CN113099043A/en
Publication of CN113099043A publication Critical patent/CN113099043A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/50Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
    • H04M3/51Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M3/5166Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing in combination with interactive voice response systems or voice portals, e.g. as front-ends
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present disclosure relates to a customer service control method and apparatus, and a computer-readable storage medium, in the field of computer technology. The method of the present disclosure comprises: acquiring a customer's real-time voice information during a customer service call; recognizing the customer's emotional state from the voice information; and selecting a corresponding customer service channel to continue the customer's call according to the emotional state; wherein different emotional states correspond to different customer service channels.

Description

Customer service control method, apparatus and computer-readable storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a customer service control method and apparatus, and a computer-readable storage medium.
Background
With the rapid development of mobile internet and artificial intelligence technology, intelligent (automatic) customer service systems become an important means for enterprises to solve customer service pressure.
At present, an automatic customer service system can recognize the semantics of a customer's speech and enter the corresponding service flow accordingly.
Disclosure of Invention
The inventors found that existing automatic customer service systems are overly scripted and cannot meet many customers' needs, so customers gradually become impatient or even agitated during service. The automatic customer service system nevertheless continues to handle the call according to the original flow, fails to provide high-quality service to the customer, and reduces customer service efficiency.
One technical problem to be solved by the present disclosure is: how to make the customer service system more intelligent and efficient, and better able to meet customer needs.
According to some embodiments of the present disclosure, there is provided a customer service control method, including: acquiring a customer's real-time voice information during a customer service call; recognizing the customer's emotional state from the voice information; and selecting a corresponding customer service channel to continue the customer's call according to the emotional state; wherein different emotional states correspond to different customer service channels.
In some embodiments, identifying the emotional state of the client based on the speech information comprises: identifying a first emotional state of the client according to the sound attribute of the voice information; converting the voice information into a text, and identifying a second emotional state of the client according to the text; and determining the emotional state of the client according to the first emotional state and the second emotional state.
In some embodiments, the sound attributes include: at least one item of speech rate value, tone value and volume value; identifying a first emotional state of the client based on the acoustic attributes of the speech information includes: determining at least one of a speech rate value, a tone value and a volume value of the voice information; comparing the speech rate value with a preset speech rate range to determine a first sub-emotion state; comparing the tone value with a preset tone range to determine a second sub-emotion state; comparing the volume value with a preset volume range, and determining a third sub-emotion state; and determining the first emotional state according to at least one of the first sub-emotional state, the second sub-emotional state and the third sub-emotional state.
In some embodiments, the sound attributes include: at least one item of speech rate value, tone value and volume value; identifying a first emotional state of the client based on the acoustic attributes of the speech information includes: determining a speech rate value of the voice information; determining at least one of a tone value and a volume value of the voice information under the condition that the change of the speech rate value of the voice information relative to the reference speech rate value exceeds a speech rate threshold value; comparing the speech rate value with a preset speech rate range to determine a first sub-emotion state; comparing the tone value with a preset tone range to determine a second sub-emotion state; comparing the volume value with a preset volume range, and determining a third sub-emotion state; and determining the first emotional state according to at least one of the first sub-emotional state, the second sub-emotional state and the third sub-emotional state.
In some embodiments, determining the mood value of the speech information comprises: determining the frequency fluctuation range of the voice information; and determining the tone value of the voice information according to the frequency fluctuation range.
In some embodiments, identifying the second emotional state of the customer from the text comprises: identifying whether the text contains preset key words or key phrases; and in the case that the text is recognized to contain preset key words or key phrases, recognizing the text with a semantic recognition model to determine the second emotional state of the customer.
In some embodiments, the emotional states include negative emotions of different levels, and positive emotions; and selecting the corresponding customer service channel to continue the customer's call according to the emotional state comprises: switching from the automatic customer service channel to the manual customer service channel to continue the customer's call in the case that the customer's emotional state reaches a preset level of negative emotion.
According to other embodiments of the present disclosure, there is provided a customer service control apparatus including: the voice information acquisition module is used for acquiring real-time voice information of a client in the conversation process of the client service; the emotion state identification module is used for identifying the emotion state of the client according to the voice information; the service channel selection module is used for selecting a corresponding client service channel to continue the conversation of the client according to the emotional state of the client; wherein different emotional states correspond to different customer service channels.
In some embodiments, the emotional state identification module is used for identifying a first emotional state of the client according to the sound attribute of the voice information; converting the voice information into a text, and identifying a second emotional state of the client according to the text; and determining the emotional state of the client according to the first emotional state and the second emotional state.
In some embodiments, the sound attributes include: at least one item of speech rate value, tone value and volume value; the emotion state identification module is used for determining at least one of a speech speed value, a tone value and a volume value of the voice information; comparing the speech rate value with a preset speech rate range to determine a first sub-emotion state; comparing the tone value with a preset tone range to determine a second sub-emotion state; comparing the volume value with a preset volume range, and determining a third sub-emotion state; and determining the first emotional state according to at least one of the first sub-emotional state, the second sub-emotional state and the third sub-emotional state.
In some embodiments, the sound attributes include: at least one item of speech rate value, tone value and volume value; the emotion state identification module is used for determining the speech speed value of the voice information; determining at least one of a tone value and a volume value of the voice information under the condition that the change of the speech rate value of the voice information relative to the reference speech rate value exceeds a speech rate threshold value; comparing the speech rate value with a preset speech rate range to determine a first sub-emotion state; comparing the tone value with a preset tone range to determine a second sub-emotion state; comparing the volume value with a preset volume range, and determining a third sub-emotion state; and determining the first emotional state according to at least one of the first sub-emotional state, the second sub-emotional state and the third sub-emotional state.
In some embodiments, the emotion state recognition module is used for determining a frequency fluctuation range of the voice information; and determining the tone value of the voice information according to the frequency fluctuation range.
In some embodiments, the emotional state identification module is used for identifying whether the text contains preset key words or key phrases, and in the case that the text is recognized to contain preset key words or key phrases, recognizing the text with a semantic recognition model to determine the second emotional state of the customer.
In some embodiments, the emotional states include: negative emotions of different levels, and positive emotions; and the service channel selection module is used for switching the automatic customer service channel to the manual customer service channel to continue the conversation of the customer under the condition that the emotional state of the customer reaches the negative emotion of the preset level.
According to still other embodiments of the present disclosure, there is provided a control apparatus for customer service, including: a processor; and a memory coupled to the processor for storing instructions that, when executed by the processor, cause the processor to perform a method of controlling customer service as in any of the preceding embodiments.
According to still further embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the steps of the control method of a customer service of any of the preceding embodiments.
According to the present disclosure, the customer's real-time voice information is acquired during the customer service call and the customer's emotional state is recognized, so that a corresponding customer service channel is selected to continue the customer's call. Because the customer service process senses the customer's emotion, when the customer is dissatisfied or even angry, a different customer service channel can be selected for the subsequent conversation instead of continuing the current service flow. This improves the efficiency of solving the customer's problem, making the customer service system more intelligent, more efficient, and better able to meet customer needs.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 illustrates a flow diagram of a method of controlling customer service of some embodiments of the present disclosure.
FIG. 2 shows a flow diagram of a method of controlling customer service of further embodiments of the present disclosure.
Fig. 3 shows a schematic structural diagram of a control device of a customer service of some embodiments of the present disclosure.
Fig. 4 shows a schematic structural diagram of a control device of a customer service of further embodiments of the present disclosure.
Fig. 5 shows a schematic structural diagram of a control device of a customer service according to further embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The present disclosure provides a method for controlling customer service, described below in conjunction with fig. 1.
FIG. 1 is a flow chart of some embodiments of a method of controlling customer service according to the present disclosure. As shown in fig. 1, the method of this embodiment includes: steps S102 to S106.
In step S102, real-time voice information of the client during the call of the client service is acquired.
The current customer service may be a manual customer service or an intelligent (automatic) customer service. The call can be recorded in real time during the customer service process; when the recording for a preset time period is obtained, it can be preprocessed to remove noise, yielding the customer's audio for that period as the voice information. Volume fluctuations unrelated to emotion also occur while the customer speaks; to keep them from affecting the accuracy of subsequent emotional state recognition, such non-emotional fluctuations can also be processed. For example, in human language acoustics, a person pronounces certain characters and words with a special frequency or a trailing sound; for instance, the Shanghainese pronunciation of "thanks" is about 20% higher than the normal audio frequency. To reduce non-emotional audio fluctuation, different speech processing models are used to filter different audios; this belongs to the prior art, the models can be selected according to the actual situation, and the details are not repeated here. Preprocessing the recording further comprises: removing the customer service agent's voice, i.e., distinguishing the case where the customer and the agent speak simultaneously and filtering out the agent's voice; and removing environmental noise through noise reduction and similar processing.
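As a minimal illustration of the noise-removal part of preprocessing (the amplitude representation and the noise-floor value are assumptions for this sketch, not part of the disclosure), a simple noise gate might look like:

```python
def preprocess_window(samples, noise_floor=0.02):
    """Zero out low-amplitude background noise in one recording window.

    `samples` holds normalized amplitudes in [-1, 1]; values whose
    magnitude is below the noise floor are treated as environmental
    noise and removed, keeping only the customer's voice.
    """
    return [s if abs(s) >= noise_floor else 0.0 for s in samples]
```

A production system would instead apply the speech processing and noise reduction models mentioned above; this gate only shows where such filtering sits in the flow.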
In step S104, the emotional state of the client is recognized based on the speech information.
In some embodiments, a first emotional state of the customer is identified according to the sound attributes of the voice information; the voice information is converted into text, and a second emotional state of the customer is identified according to the text; and the emotional state of the customer is determined according to the first emotional state and the second emotional state. Emotion state recognition is thus divided into two parts. One part recognizes the first emotional state from the sound attributes of the voice, where the sound attributes include at least one of a speech rate value, a tone value representing the heaviness of tone, and a volume value. The other part performs semantic recognition on the text converted from the voice to obtain the second emotional state. Considering both the sound attributes and the text determines the customer's emotional state more accurately.
In some embodiments, at least one of a pace value, a mood value, and a volume value of the speech information is determined; comparing the speech rate value with a preset speech rate range to determine a first sub-emotion state; comparing the tone value with a preset tone range to determine a second sub-emotion state; comparing the volume value with a preset volume range, and determining a third sub-emotion state; and determining the first emotional state according to at least one of the first sub-emotional state, the second sub-emotional state and the third sub-emotional state.
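The range-comparison step described above can be sketched as follows; the specific ranges and sub-emotion labels are illustrative assumptions, not values from the disclosure:

```python
def classify_by_ranges(value, ranges, default="neutral"):
    """Map a measured sound-attribute value (speech rate, tone or
    volume) to a sub-emotion state by comparing it with preset ranges.

    `ranges` is a list of (low, high, label) tuples; the first range
    containing the value wins.
    """
    for low, high, label in ranges:
        if low <= value < high:
            return label
    return default

# Hypothetical speech-rate ranges in words per second.
SPEECH_RATE_RANGES = [
    (0.0, 2.0, "calm"),
    (2.0, 4.0, "neutral"),
    (4.0, float("inf"), "agitated"),
]
```

The same helper serves all three comparisons (speech rate, tone, volume), each with its own preset range table.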
In other embodiments, a speech rate value of the voice information is determined; in the case that the change of the speech rate value relative to a reference speech rate value exceeds a speech rate threshold, at least one of a tone value and a volume value of the voice information is determined; the speech rate value is compared with a preset speech rate range to determine a first sub-emotion state; the tone value is compared with a preset tone range to determine a second sub-emotion state; the volume value is compared with a preset volume range to determine a third sub-emotion state; and the first emotional state is determined according to at least one of the first, second, and third sub-emotion states. That is, it is first determined whether the speech rate changes suddenly and excessively; if so, the subsequent process is executed, and otherwise the speech rate value is directly compared with the preset speech rate range to determine the first sub-emotion state, from which the first emotional state is determined.
In still other embodiments, a volume value of the voice information is determined; in the case that the change of the volume value relative to a reference volume value exceeds a volume threshold, at least one of a tone value and a speech rate value of the voice information is determined; the speech rate value is compared with a preset speech rate range to determine a first sub-emotion state; the tone value is compared with a preset tone range to determine a second sub-emotion state; the volume value is compared with a preset volume range to determine a third sub-emotion state; and the first emotional state is determined according to at least one of the first, second, and third sub-emotion states. That is, it is first determined whether the volume changes suddenly and excessively; if so, the subsequent process is executed, and otherwise the volume value is directly compared with the preset volume range to determine the third sub-emotion state, from which the first emotional state is determined.
For example, the speech rate value may be determined as the ratio of the number of spoken words to the speech duration (i.e., the length of the preset time period). Different speech rate ranges can be set to correspond to different first sub-emotion states, and the first sub-emotion state is determined by which preset speech rate range the speech rate value falls within.
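The ratio above and the speech-rate gate from the earlier embodiment can be sketched as follows (the threshold value is an assumption for illustration):

```python
def speech_rate(word_count, duration_seconds):
    """Speech-rate value: recognized words divided by window duration."""
    return word_count / duration_seconds

def rate_change_exceeds_threshold(rate, reference_rate, threshold=1.5):
    """Gate from the embodiment above: tone and volume are analysed
    further only when the speech rate deviates from the reference
    speech rate by more than the speech-rate threshold."""
    return abs(rate - reference_rate) > threshold
```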
For example, for the tone value, the frequency fluctuation range of the voice information is determined, and the tone value is determined according to the frequency fluctuation range. A customer's audio can be taken as a reference audio; through filtering and superposition of single-point wave frequencies, a preset high peak value and a preset low peak value are reached. By calculating the superimposed decibel rise and the superimposed decibel drop, it is judged whether the audio fluctuates by a large amplitude relative to the reference, and hence whether the tone has changed. For example, with a superimposed-rise threshold of +10 decibels and a superimposed-drop threshold of -10 decibels, the tone level is determined to be severe when the superimposed rise is greater than +10 decibels or the superimposed drop is less than -10 decibels. Different decibel ranges can be set to determine the tone value. The tone value can also be determined using existing models, e.g., PSOLA (Pitch Synchronous Overlap-Add).
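Using the +10 dB / -10 dB example thresholds from the description, the tone-level judgment might be sketched as (the "severe"/"normal" labels are assumptions):

```python
def tone_level(db_rise, db_drop, rise_threshold=10.0, drop_threshold=-10.0):
    """Judge the tone level from the superimposed decibel rise and
    drop relative to the reference audio: a large fluctuation in
    either direction indicates a serious tone change."""
    if db_rise > rise_threshold or db_drop < drop_threshold:
        return "severe"
    return "normal"
```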
For example, the volume value may be determined as the average volume within the preset time period. Different volume ranges can be set to correspond to different third sub-emotion states, and the third sub-emotion state is determined by which preset volume range the volume value falls within.
The first, second, and third sub-emotion states can correspond respectively to negative or positive emotions of different levels, and those emotions can correspond to different numerical values. The three sub-emotion states can then be weighted and compared with different threshold ranges to finally determine the first emotional state. Alternatively, the maximum, minimum, or mean of the values corresponding to the three sub-emotion states can be taken as the value of the first emotional state, from which the first emotional state is determined.
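The weighted-fusion variant described above might look like the following sketch; the score scale, weights, and thresholds are all illustrative assumptions:

```python
# Hypothetical numeric scale: more negative score = more negative emotion.
SUB_STATE_SCORES = {"angry": -2.0, "impatient": -1.0, "neutral": 0.0, "happy": 1.0}

def first_emotional_state(rate_state, tone_state, volume_state,
                          weights=(0.4, 0.3, 0.3)):
    """Weighted fusion of the speech-rate, tone and volume sub-emotion
    states into the first emotional state, compared against
    illustrative threshold ranges."""
    states = (rate_state, tone_state, volume_state)
    score = sum(w * SUB_STATE_SCORES[s] for w, s in zip(weights, states))
    if score <= -1.5:
        return "angry"
    if score <= -0.5:
        return "impatient"
    return "calm"
```

Taking the max, min, or mean of the three scores instead of a weighted sum gives the alternative fusion mentioned in the text.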
In some embodiments, it is identified whether the text contains preset key words or key phrases; in the case that the text is recognized to contain them, the text is recognized with a semantic recognition model to determine the second emotional state of the customer. Key words or key phrases can be preset to correspond to different emotional states. Keyword recognition is performed first, and an existing model can be adopted, which is not described here again. If the text is recognized to contain a preset key word or key phrase, semantic recognition is further performed. Because some words may express different emotions in different scenarios, combining semantic recognition determines the customer's emotional state more accurately. The semantic recognition model may adopt the prior art and is not described in detail here.
After an abnormal keyword is identified, semantic recognition is started to analyze whether the utterance carries angry emotion. For example, suppose the preset keyword is "complain" and the text converted from the customer's voice is "If you keep overcharging me, I will complain about you." When "complain" is recognized, semantic recognition is initiated and judges that the customer merely intends a warning and is not yet angry. If instead the text converted from the customer's voice is "I must complain about you," the semantics are identified as anger.
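The two-stage keyword-then-semantics flow above can be sketched as follows; the keyword list and the toy model are stand-ins for the real preset keywords and semantic recognition model:

```python
# Hypothetical preset keywords that trigger semantic recognition.
ANGER_KEYWORDS = ("complain", "report", "refund")

def second_emotional_state(text, semantic_model):
    """Two-stage check: a fast keyword scan first, then the pluggable
    semantic recognition model only when a preset keyword is present."""
    if not any(kw in text for kw in ANGER_KEYWORDS):
        return "neutral"
    return semantic_model(text)

def toy_semantic_model(text):
    """Stand-in for a real semantic recognition model, mirroring the
    'warning intention' vs. 'anger' example from the description."""
    return "angry" if "must complain" in text else "warning"
```

The gate keeps the (comparatively expensive) semantic model off the hot path for utterances with no emotion-related keywords.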
The first and second emotional states may correspond to different numerical values; they can be weighted and compared with different threshold ranges to determine the emotional state. Alternatively, the maximum, minimum, or mean of the values corresponding to the first and second emotional states can be taken as the value of the emotional state, from which the emotional state is determined.
In step S106, according to the emotional state of the client, the corresponding client service channel is selected to continue the call of the client.
Different emotional states may correspond to different customer service channels. In some embodiments, the emotional states include negative emotions of different levels, and positive emotions. In the case that the customer's emotional state reaches a preset level of negative emotion, the automatic customer service channel is switched to the manual customer service channel to continue the customer's call. A manual calming channel or the like may be provided; if the customer's emotional state is found to be, for example, "angry," the call is switched to the manual calming channel. If the customer's emotional state is "happy," the call can remain on the automatic customer service channel. Different customer service channels can be set according to actual requirements and are not limited to the illustrated examples.
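The routing rule above might be sketched as follows; the level mapping and the switch threshold are illustrative assumptions:

```python
# Hypothetical mapping from emotional states to negative-emotion levels.
NEGATIVE_LEVELS = {"impatient": 1, "angry": 2}

def select_channel(emotional_state, switch_level=2):
    """Switch from the automatic channel to the manual (calming)
    channel once the negative emotion reaches the preset level;
    otherwise the call stays on the automatic channel."""
    if NEGATIVE_LEVELS.get(emotional_state, 0) >= switch_level:
        return "manual"
    return "automatic"
```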
In this embodiment, the customer's real-time voice information is acquired during the customer service call and the customer's emotional state is recognized, so that a corresponding customer service channel is selected to continue the call. Because the customer service process senses the customer's emotion, when the customer is dissatisfied or even angry, a different customer service channel can be selected for the subsequent conversation instead of continuing the current service flow, improving the efficiency of solving the customer's problem and making the customer service system more intelligent, more efficient, and better able to meet customer needs.
Further embodiments of the present disclosure are described below in conjunction with fig. 2.
FIG. 2 is a flow chart of further embodiments of a method for controlling customer service according to the present disclosure. As shown in fig. 2, the method of this embodiment includes: steps S202 to S210.
In step S202, real-time voice information of the client during the call of the client service is acquired.
In step S204, a first emotional state of the client is identified according to the sound attribute of the voice information.
In step S206, the speech information is converted into a text, and the second emotional state of the client is identified according to the text.
In step S208, the emotional state of the client is determined according to the first emotional state and the second emotional state.
In step S210, according to the emotional state of the client, the corresponding client service channel is selected to continue the call of the client.
The method of this embodiment recognizes emotion from both the sound attributes of the voice and its semantics, thereby recognizing the customer's emotional state more accurately, improving the efficiency of solving the customer's problem, and making the customer service system more intelligent, more efficient, and better able to meet customer needs.
The present disclosure also provides a control device for customer service, which is described below with reference to fig. 3.
FIG. 3 is a block diagram of some embodiments of a control device for customer service according to the present disclosure. As shown in fig. 3, the apparatus 30 of this embodiment includes: the system comprises a voice information acquisition module 310, an emotional state recognition module 320 and a service channel selection module 330.
The voice information obtaining module 310 is used for obtaining the real-time voice information of the customer during the conversation process of the customer service.
The emotional state recognition module 320 is used for recognizing the emotional state of the client according to the voice information.
In some embodiments, emotion state identification module 320 is configured to identify a first emotion state of the client based on the voice attributes of the speech information; converting the voice information into a text, and identifying a second emotional state of the client according to the text; and determining the emotional state of the client according to the first emotional state and the second emotional state.
In some embodiments, the sound attributes include at least one of a speech rate value, a tone value, and a volume value. The emotional state recognition module 320 is configured to determine at least one of the speech rate value, the tone value, and the volume value of the voice information; compare the speech rate value with a preset speech rate range to determine a first sub-emotion state; compare the tone value with a preset tone range to determine a second sub-emotion state; compare the volume value with a preset volume range to determine a third sub-emotion state; and determine the first emotional state according to at least one of the first, second, and third sub-emotion states.
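The range comparisons above can be sketched as follows. The concrete thresholds, units, and the majority-vote aggregation are assumptions for this sketch; the patent only states that each value is compared with a preset range.

```python
# Hypothetical preset ranges; the patent does not specify the bounds.
PRESET_RANGES = {
    "speech_rate": (2.0, 5.0),   # assumed: syllables per second
    "tone": (80.0, 250.0),       # assumed: Hz, treated as the tone value
    "volume": (40.0, 70.0),      # assumed: dB
}

def sub_emotion_state(attribute, value):
    # Values inside the preset range are taken as calm; outside as agitated.
    low, high = PRESET_RANGES[attribute]
    return "calm" if low <= value <= high else "agitated"

def first_emotional_state(speech_rate=None, tone=None, volume=None):
    # Compute a sub-emotion state for each attribute that was measured.
    states = [sub_emotion_state(name, v)
              for name, v in (("speech_rate", speech_rate),
                              ("tone", tone),
                              ("volume", volume))
              if v is not None]
    if not states:
        return "unknown"
    # Assumed aggregation: majority vote over the sub-emotion states.
    return max(set(states), key=states.count)
```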
In some embodiments, the sound attributes include at least one of a speech rate value, a tone value, and a volume value. The emotional state recognition module 320 is configured to determine the speech rate value of the voice information; determine at least one of the tone value and the volume value of the voice information when the change of the speech rate value relative to a reference speech rate value exceeds a speech rate threshold; compare the speech rate value with a preset speech rate range to determine a first sub-emotion state; compare the tone value with a preset tone range to determine a second sub-emotion state; compare the volume value with a preset volume range to determine a third sub-emotion state; and determine the first emotional state according to at least one of the first, second, and third sub-emotion states.
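The gating described above, where tone and volume are only analysed once the speech rate has drifted past a threshold, might be sketched as below. The reference value and threshold are assumed; passing the tone and volume computations as callables is one design choice that keeps the potentially costly signal processing from running when the gate stays closed.

```python
# Hypothetical gate: tone/volume analysis runs only when the speech rate
# deviates from a reference value by more than a threshold.
REFERENCE_RATE = 3.5   # assumed baseline, syllables per second
RATE_THRESHOLD = 1.0   # assumed allowed deviation

def needs_full_analysis(speech_rate):
    return abs(speech_rate - REFERENCE_RATE) > RATE_THRESHOLD

def analyse(speech_rate, compute_tone, compute_volume):
    # compute_tone / compute_volume are callables so the (possibly costly)
    # signal processing only executes when the gate opens.
    if not needs_full_analysis(speech_rate):
        return {"speech_rate": speech_rate}
    return {"speech_rate": speech_rate,
            "tone": compute_tone(),
            "volume": compute_volume()}
```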
In some embodiments, the emotional state recognition module 320 is configured to determine a frequency fluctuation range of the voice information, and determine the tone value of the voice information according to the frequency fluctuation range.
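A minimal sketch of deriving a tone value from the frequency fluctuation range follows, assuming the tone value is simply the spread between the highest and lowest fundamental frequency in the segment; the patent does not define the exact mapping, so this is illustrative only.

```python
# Assumed mapping: tone value = spread of the fundamental-frequency track.
def tone_value(f0_track):
    # Ignore unvoiced frames, conventionally reported as f0 == 0.
    voiced = [f for f in f0_track if f > 0]
    if not voiced:
        return 0.0
    return max(voiced) - min(voiced)
```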
In some embodiments, the emotional state recognition module 320 is configured to identify whether the text contains preset keywords, and, when the text is identified to contain preset keywords, recognize the text by using a semantic recognition model to determine the second emotional state of the customer.
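This two-stage text analysis, a cheap keyword prefilter followed by a semantic model, could be sketched as below. The keyword list and the stand-in "model" are assumptions; a real deployment would substitute a trained semantic recognition model.

```python
# Hypothetical keyword list; in practice this would be configured.
PRESET_KEYWORDS = {"complaint", "refund", "terrible", "cancel"}

def contains_keyword(text):
    words = set(text.lower().split())
    return bool(words & PRESET_KEYWORDS)

def semantic_model(text):
    # Stand-in for a real semantic recognition model.
    return "negative" if "terrible" in text.lower() else "neutral"

def second_emotional_state(text):
    # Only invoke the (expensive) semantic model when a keyword is present.
    if contains_keyword(text):
        return semantic_model(text)
    return None  # no keyword: no second emotional state determined
```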
The service channel selection module 330 is used for selecting a corresponding customer service channel, according to the customer's emotional state, to continue the customer's call, wherein different emotional states correspond to different customer service channels.
In some embodiments, the emotional states include negative emotions of different levels and positive emotions. The service channel selection module 330 is configured to switch from the automatic customer service channel to the manual customer service channel to continue the customer's call when the customer's emotional state reaches a negative emotion of a preset level.
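The channel-selection rule, switching from the automatic to the manual channel once a preset negative level is reached, might look like the following. The numeric levels and the switching threshold are assumptions for illustration; the patent only requires that a preset negative level triggers the switch.

```python
# Assumed emotion levels: 0 = positive, 1..3 = increasingly negative.
SWITCH_LEVEL = 2  # hypothetical preset level that triggers a human agent

def select_channel(negative_level):
    # At or above the preset level, escalate to the manual channel.
    return "manual" if negative_level >= SWITCH_LEVEL else "automatic"
```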
The customer service control apparatuses in the embodiments of the present disclosure may each be implemented by various computing devices or computer systems, which are described below in conjunction with fig. 4 and 5.
FIG. 4 is a block diagram of some embodiments of a control device for customer service according to the present disclosure. As shown in fig. 4, the apparatus 40 of this embodiment includes: a memory 410 and a processor 420 coupled to the memory 410, the processor 420 configured to execute a method of controlling customer service in any of the embodiments of the present disclosure based on instructions stored in the memory 410.
Memory 410 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), a database, and other programs.
FIG. 5 is a block diagram of other embodiments of a control device for customer service according to the present disclosure. As shown in fig. 5, the apparatus 50 of this embodiment includes a memory 510 and a processor 520, which are similar to the memory 410 and the processor 420, respectively. It may also include an input/output interface 530, a network interface 540, a storage interface 550, and the like. These interfaces 530, 540, 550, the memory 510, and the processor 520 may be connected, for example, via a bus 560. The input/output interface 530 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 540 provides a connection interface for various networking devices, such as a database server or a cloud storage server. The storage interface 550 provides a connection interface for external storage devices such as an SD card and a USB flash drive.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description covers only exemplary embodiments of the present disclosure and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (10)

1. A method of controlling customer service, comprising:
acquiring real-time voice information of a customer during a customer service call;
recognizing an emotional state of the customer according to the voice information; and
selecting a corresponding customer service channel to continue the customer's call according to the emotional state of the customer, wherein different emotional states correspond to different customer service channels.
2. The control method according to claim 1, wherein,
the recognizing of the emotional state of the customer according to the voice information comprises:
identifying a first emotional state of the customer according to sound attributes of the voice information;
converting the voice information into text, and identifying a second emotional state of the customer according to the text; and
determining the emotional state of the customer according to the first emotional state and the second emotional state.
3. The control method according to claim 2, wherein,
the sound attributes include at least one of a speech rate value, a tone value, and a volume value;
the identifying of the first emotional state of the customer according to the sound attributes of the voice information comprises:
determining at least one of the speech rate value, the tone value, and the volume value of the voice information;
comparing the speech rate value with a preset speech rate range to determine a first sub-emotion state;
comparing the tone value with a preset tone range to determine a second sub-emotion state;
comparing the volume value with a preset volume range to determine a third sub-emotion state; and
determining the first emotional state according to at least one of the first sub-emotion state, the second sub-emotion state, and the third sub-emotion state.
4. The control method according to claim 2, wherein,
the sound attributes include at least one of a speech rate value, a tone value, and a volume value;
the identifying of the first emotional state of the customer according to the sound attributes of the voice information comprises:
determining the speech rate value of the voice information;
determining at least one of the tone value and the volume value of the voice information when the change of the speech rate value of the voice information relative to a reference speech rate value exceeds a speech rate threshold;
comparing the speech rate value with a preset speech rate range to determine a first sub-emotion state;
comparing the tone value with a preset tone range to determine a second sub-emotion state;
comparing the volume value with a preset volume range to determine a third sub-emotion state; and
determining the first emotional state according to at least one of the first sub-emotion state, the second sub-emotion state, and the third sub-emotion state.
5. The control method according to claim 3 or 4, wherein,
the determining of the tone value of the voice information comprises:
determining a frequency fluctuation range of the voice information; and
determining the tone value of the voice information according to the frequency fluctuation range.
6. The control method according to claim 2, wherein,
the identifying of the second emotional state of the customer according to the text comprises:
identifying whether the text contains preset keywords; and
when the text is identified to contain preset keywords, recognizing the text by using a semantic recognition model to determine the second emotional state of the customer.
7. The control method according to claim 1, wherein,
the emotional states include negative emotions of different levels and positive emotions; and
the selecting of the corresponding customer service channel to continue the customer's call according to the emotional state of the customer comprises:
switching from an automatic customer service channel to a manual customer service channel to continue the customer's call when the emotional state of the customer reaches a negative emotion of a preset level.
8. A control apparatus for customer service, comprising:
a voice information acquisition module, configured to acquire real-time voice information of a customer during a customer service call;
an emotional state recognition module, configured to recognize an emotional state of the customer according to the voice information; and
a service channel selection module, configured to select a corresponding customer service channel to continue the customer's call according to the emotional state of the customer, wherein different emotional states correspond to different customer service channels.
9. A control apparatus for customer service, comprising:
a processor; and
a memory coupled to the processor and storing instructions that, when executed by the processor, cause the processor to perform the method of controlling customer service according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the steps of the method of controlling customer service according to any one of claims 1 to 7.
CN201911333027.1A 2019-12-23 2019-12-23 Customer service control method, apparatus and computer-readable storage medium Pending CN113099043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911333027.1A CN113099043A (en) 2019-12-23 2019-12-23 Customer service control method, apparatus and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN113099043A true CN113099043A (en) 2021-07-09

Family

ID=76662819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911333027.1A Pending CN113099043A (en) 2019-12-23 2019-12-23 Customer service control method, apparatus and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113099043A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107293309A (en) * 2017-05-19 2017-10-24 四川新网银行股份有限公司 A kind of method that lifting public sentiment monitoring efficiency is analyzed based on customer anger
US10009465B1 (en) * 2016-12-22 2018-06-26 Capital One Services, Llc Systems and methods for customer sentiment prediction and depiction
CN108900726A (en) * 2018-06-28 2018-11-27 北京首汽智行科技有限公司 Artificial customer service forwarding method based on speech robot people
CN109784414A (en) * 2019-01-24 2019-05-21 出门问问信息科技有限公司 Customer anger detection method, device and electronic equipment in a kind of phone customer service
CN109887525A (en) * 2019-01-04 2019-06-14 平安科技(深圳)有限公司 Intelligent customer service method, apparatus and computer readable storage medium
CN110149450A (en) * 2019-05-22 2019-08-20 欧冶云商股份有限公司 Intelligent customer service answer method and system
CN110472023A (en) * 2019-07-10 2019-11-19 深圳追一科技有限公司 Customer service switching method, device, computer equipment and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113676602A (en) * 2021-07-23 2021-11-19 上海原圈网络科技有限公司 Method and device for processing manual transfer in automatic response
CN116580721A (en) * 2023-07-13 2023-08-11 中国电信股份有限公司 Expression animation generation method and device and digital human platform
CN116580721B (en) * 2023-07-13 2023-09-22 中国电信股份有限公司 Expression animation generation method and device and digital human platform


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210709