CN111563663B - Robot, service quality evaluation method and system - Google Patents


Info

Publication number
CN111563663B
Authority
CN
China
Prior art keywords
service
data
information
robot
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010301208.2A
Other languages
Chinese (zh)
Other versions
CN111563663A (en)
Inventor
翟懿奎
陈家聪
梁艳阳
柯琪锐
陈丽燕
余翠琳
王天雷
徐颖
欧晓莹
Current Assignee
Wuyi University
Original Assignee
Wuyi University
Priority date
Filing date
Publication date
Application filed by Wuyi University
Priority to CN202010301208.2A
Publication of CN111563663A
Application granted
Publication of CN111563663B
Legal status: Active
Anticipated expiration

Classifications

    • G06Q10/06395 — Quality analysis or management (G06Q10/00 Administration; Management)
    • G01S19/39 — Determining a navigation solution using time-stamped satellite radio beacon signals, e.g. GPS, GLONASS or GALILEO
    • G06N3/045 — Combinations of networks (neural network architectures)
    • G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
    • G06Q50/10 — Services (ICT specially adapted for specific business sectors)

Abstract

The application discloses a robot, a service quality evaluation method, and a service quality evaluation system. The robot provided by the embodiments of the application is equipped with a binocular camera, an action analysis module, a voice emotion analysis module, and an image emotion analysis module. Using a trained deep convolutional neural network, it obtains the customer's satisfaction and the service worker's service level as first scoring data; the degree of environmental tidiness, from environment information, as second scoring data; third scoring data from the response speed of service personnel in the non-service area; and fourth scoring data from the customer's satisfaction with the work efficiency of service personnel, derived from the customer's action data in the non-service area. The service quality evaluation is thus obtained comprehensively from multiple types of data. A GPS module supplies an obstacle-avoidance route, so the robot can collect the evaluation without disturbing users, which effectively improves the reference value of the service quality evaluation.

Description

Robot, service quality evaluation method and system
Technical Field
The application relates to the technical field of data processing, in particular to a robot and a service quality evaluation method and system.
Background
For a service organization, the quality of its service staff and its service environment are important guarantees of service quality. Traditional service quality evaluation mainly relies on customer scoring or questionnaires; however, customer scores are often arbitrary, and questionnaires consume considerable manpower and material resources while being inefficient. To automate service quality evaluation, some service quality evaluation robots have appeared on the market that recognize a customer's emotion from facial expressions and score automatically. However, service quality is influenced by more than the service process alone, whereas the prior art can only evaluate a service worker's service process automatically; it therefore cannot represent the service quality of the whole service organization and has low reference value.
Disclosure of Invention
In order to overcome the defects of the prior art, the application aims to provide a robot, a service quality evaluation method and a service quality evaluation system, which can automatically acquire various types of service quality evaluation data and improve the comprehensiveness of service quality evaluation.
The technical scheme adopted by the application for solving the problems is as follows: in a first aspect, the present application provides a robot comprising:
the binocular camera is used for acquiring facial image information, action information, environment information, client position information and distance information;
the voice receiving module is used for receiving voice information;
the action analysis module is used for acquiring action data according to the action information and the acquired deep convolutional neural network;
the voice emotion analysis module is used for acquiring voice emotion data according to the voice information and the acquired deep convolutional neural network;
the image emotion analysis module is used for acquiring image emotion data according to the facial image information and the acquired deep convolutional neural network;
and the GPS module is used for carrying out three-dimensional reconstruction according to the client position information, the environment information and the distance information and acquiring an obstacle avoidance route.
One or more technical schemes provided in the embodiment of the application have at least the following beneficial effects: the robot comprises an action analysis module, a voice emotion analysis module and an image emotion analysis module, wherein the action analysis module is provided with a trained deep convolutional neural network, and the robot further comprises a binocular camera used for acquiring facial image information, action information, environment information, client position information and distance information, so that a hardware basis is provided for acquiring the satisfaction degree of a client, the service level of a service staff and the cleanness and tidiness degree of the environment and comprehensively obtaining service quality evaluation through various types of data; meanwhile, the GPS module is also arranged, so that an obstacle avoidance route can be obtained, and the robot can obtain service quality evaluation on the premise of not influencing customers.
In a second aspect, the present application further provides a service quality evaluation method applied to the robot, including at least one of the following steps:
the robot reads a pre-trained deep convolutional neural network from a server;
the robot acquires voice emotion data and image emotion data of a client and a service worker in a service area, inputs the voice emotion data and the image emotion data into the deep convolutional neural network, generates first scoring data and sends the first scoring data to the server;
the robot acquires environmental information, inputs the environmental information into the deep convolutional neural network, generates second scoring data and sends the second scoring data to the server;
after detecting that voice information of a client in a non-service area contains preset keyword information, the robot sends prompt information to a service person in an idle state through the server, obtains the response time, generates third scoring data according to the response time, and sends the third scoring data to the server;
the robot acquires action data of a client in a non-service area, inputs the action data into the deep convolutional neural network, generates fourth scoring data and sends the fourth scoring data to the server;
and the server generates service quality evaluation data according to the first scoring data, the second scoring data, the third scoring data and the fourth scoring data.
One or more technical schemes provided in the embodiments of the application have at least the following beneficial effects: the service quality evaluation method is applied to the robot described above. Through the robot, voice emotion data and image emotion data of a client and a service worker in the service area are obtained, from which the client's satisfaction and the service worker's service level are derived as first scoring data; the degree of environmental tidiness is obtained from the environment information as second scoring data; third scoring data is generated from the response speed of service personnel in the non-service area; and the client's satisfaction with the work efficiency of service personnel is obtained from the client's action data in the non-service area, generating fourth scoring data. The service quality evaluation is obtained by integrating these multiple types of data according to the first, second, third and fourth scoring data, so that the evaluation has reference value.
Further, if the robot detects that the customer moves from a service area to a non-service area, the method further comprises: and acquiring the service evaluation score of the client and sending the service evaluation score to a server.
Further, the method also comprises the following steps: and obtaining the service times of the service personnel according to the facial image information and the voice information in the non-service area, and sending the service times to the server.
Further, after sending the prompt information to the service staff in the idle state and obtaining the response time length through the server, the method further includes:
the robot acquires client position information, environment information and distance information and carries out three-dimensional reconstruction to obtain an obstacle avoidance route;
and the robot moves according to the obstacle avoidance route, reads corresponding operation information from the server according to the keyword information and executes the operation information.
Further, still include:
the robot acquires the position information of service personnel in an idle state in a non-service area;
the robot carries out three-dimensional reconstruction according to the position information of the service personnel in the idle state, the environment information and the distance information to obtain an obstacle avoidance route;
and the robot moves according to the obstacle avoidance route, reads the test interaction information, acquires the test result of the service personnel in the idle state, and sends the test result to the server.
Further, still include:
the robot acquires incremental learning data of the deep convolutional neural network according to the voice emotion data and the action data and sends the incremental learning data to a server;
the server trains a new category in the deep convolutional neural network according to the incremental learning data, based on attention attractor network nodes and recurrent back-propagation with meta-learning;
the server synchronizes the updated deep convolutional neural network into the robot.
In a third aspect, the present application further provides a service quality evaluation system, including a robot group and a server, where the robot group is composed of a plurality of robots as described above, and the robot group and the server cooperate to perform the service quality evaluation method as described above.
In a fourth aspect, the present application provides a quality of service evaluation apparatus, comprising at least one control processor and a memory for communicative connection with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform a quality of service assessment method as described above.
In a fifth aspect, the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the quality of service evaluation method as described above.
In a sixth aspect, the present application also provides a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the quality of service assessment method as described above.
Drawings
The present application is further described below with reference to the following figures and examples.
FIG. 1 is a block diagram of a robot provided in one embodiment of the present application;
fig. 2 is a flowchart of a method for evaluating service quality according to another embodiment of the present application;
fig. 3 is a schematic structural diagram of a deep convolutional neural network of a service quality evaluation method according to another embodiment of the present application;
fig. 4 is a flowchart of a method for evaluating service quality according to another embodiment of the present application;
fig. 5 is a flowchart of a method for evaluating service quality according to another embodiment of the present application;
fig. 6 is a flowchart of a method for evaluating service quality according to another embodiment of the present application;
fig. 7 is a flowchart of a method for evaluating service quality according to another embodiment of the present application;
fig. 8 is a flowchart of a method for evaluating service quality according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of a service quality evaluation system according to another embodiment of the present application;
fig. 10 is a schematic diagram of an apparatus for performing a method for evaluating quality of service according to a second embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that, if not conflicted, the various features of the embodiments of the present application may be combined with each other and are within the scope of protection of the present application. Additionally, while functional block divisions are performed in apparatus schematics, with logical sequences shown in flowcharts, in some cases, steps shown or described may be performed in different orders than block divisions in apparatus, or in flowcharts.
Referring to fig. 1, an embodiment of the present application provides a robot 100, the robot 100 including:
a binocular camera 110 for acquiring facial image information, motion information, environment information, client position information, and distance information;
a voice receiving module 120, configured to receive voice information;
the action analysis module 130 is used for acquiring action data according to the action information and the acquired deep convolutional neural network;
the voice emotion analysis module 140 is used for acquiring voice emotion data according to the voice information and the acquired deep convolutional neural network;
the image emotion analysis module 150 is used for acquiring image emotion data according to the facial image information and the acquired deep convolutional neural network;
and the GPS module 160 is used for performing three-dimensional reconstruction according to the client position information, the environment information and the distance information and acquiring an obstacle avoidance route.
In an embodiment, the binocular camera 110 may be of any model commonly known in the art, which is not described again here. The voice receiving module 120 may be any common sound pickup device in the prior art, such as a microphone; this application does not involve a hardware improvement of the voice receiving module 120. The action analysis module 130, the voice emotion analysis module 140 and the image emotion analysis module 150 in this embodiment are independent functional modules; for example, each is configured with a separate processing chip that can receive input data, run the deep convolutional neural network, and obtain a calculation result. The GPS module 160 may further include a common GPS positioning device that acquires the current position of the robot to facilitate path planning. Those skilled in the art will appreciate that a processing chip may be configured in the GPS module 160 to receive data transmitted by the binocular camera 110 and execute any existing three-dimensional reconstruction algorithm, so as to obtain an obstacle avoidance route.
In one embodiment, the robot 100 may further include a short-range wireless communication module, a display screen, control keys, and the like. The short-distance wireless communication module can be a WIFI module or a Bluetooth module; in addition, when the display screen is a touch display screen, the control key may be a key function of the touch display screen.
In an embodiment, in order to realize the movement of the robot 100, a driving device, such as a common driving wheel and a steering wheel, may be further provided, and the driving device can be electrically connected to the GPS module 160 and can move under the driving action of the GPS module, which is not described herein again.
Referring to fig. 2, another embodiment of the present application further provides a method for evaluating service quality, applied to a robot as described above, including but not limited to at least one of the following steps:
step S210, the robot reads a pre-trained deep convolution neural network from a server;
step S220, the robot acquires voice emotion data and image emotion data of a client and a service worker in a service area, inputs the voice emotion data and the image emotion data into the deep convolutional neural network, generates first scoring data and sends the first scoring data to the server;
step S230, the robot acquires environmental information, inputs the environmental information into the deep convolutional neural network, generates second scoring data and sends the second scoring data to the server;
step S240, after detecting that voice information of a client in a non-service area contains preset keyword information, the robot sends prompt information to a service worker in an idle state through a server and obtains response time length, and generates third grading data according to the response time length and sends the third grading data to the server;
step S250, the robot acquires action data of a client in a non-service area, inputs the action data into a deep convolutional neural network, generates fourth scoring data and sends the fourth scoring data to a server;
and step S260, the server generates service quality evaluation data according to the first scoring data, the second scoring data, the third scoring data and the fourth scoring data.
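As an illustration of step S260, the server-side aggregation might be sketched as a weighted combination of the four score streams. The application does not specify how the scores are combined, so the weights, scale, and function name below are assumptions:

```python
# Illustrative sketch only: a weighted average of the four scoring
# channels on a 0-100 scale is assumed, not specified by the application.
def aggregate_quality(first, second, third, fourth,
                      weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine the four scoring channels into one service quality score."""
    return sum(w * s for w, s in zip(weights, (first, second, third, fourth)))

print(aggregate_quality(90, 80, 70, 85))  # weighted mean of the four scores
```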
In an embodiment, since the training of the deep convolutional neural network requires a relatively large amount of computation, in the embodiment of the present application, the training of the deep convolutional neural network is preferably completed by the server, and the robot reads and downloads the trained deep convolutional neural network from the server when starting.
In an embodiment, in order to facilitate scoring, a plurality of scoring accounts may be generated in the server, for example, a scoring account is established for each service person, and the scoring data includes a corresponding scoring account, which can be found out according to a face recognition result of the service person, which is not described herein again.
In an embodiment, the first scoring data in step S220 may reflect the attitudes of the service staff and the client. For example, the voice information of the service staff and the client is classified by the deep convolutional neural network as being in a positive or negative state, and a corresponding score is set according to that state; for instance, one point is added when the service person is in the positive state. The specific scoring standard is adjusted according to actual requirements, which is not within the scope of improvement of this embodiment and is not described again here.
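The state-to-score rule described above can be sketched as a simple mapping (the state names and score deltas below are illustrative assumptions, not specified by the application):

```python
# Hypothetical mapping from a recognized attitude state to a
# first-scoring-data adjustment; the real scoring standard is
# adjusted to actual requirements, as the text notes.
def first_score_delta(state):
    deltas = {"positive": +1, "neutral": 0, "negative": -1}
    return deltas.get(state, 0)  # unknown states leave the score unchanged

score = 80 + first_score_delta("positive")  # service person in a positive state
```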
In an embodiment, the second scoring data in step S230 may be an environmental sanitation score. For example, the environmental information is acquired by the binocular camera and input to the deep convolutional neural network, and obstacles in it are identified by comparison with a default environment image stored in the server: if the ground in the default image is a flat plane and several small objects are identified on that plane in the current environmental information, it may be considered that there is garbage on the ground, and the second scoring data is scored accordingly. The positions of certain fixed objects may also be identified to determine whether they have been displaced.
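As a crude stand-in for the network-based comparison against the default environment image, a pixel-difference check illustrates the idea (function name, thresholds, and scoring scale are assumptions for illustration only):

```python
import numpy as np

def tidiness_score(current, default, diff_thresh=30):
    """0-100 tidiness score from two aligned grayscale uint8 frames:
    the larger the area that differs from the default image, the lower
    the score. A real system would use the trained network instead."""
    diff = np.abs(current.astype(int) - default.astype(int))
    changed_ratio = float((diff > diff_thresh).mean())  # fraction of changed pixels
    return 100.0 * (1.0 - changed_ratio)
```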
In an embodiment, in step S240 a threshold for the response time may be preset for the determination. For example, with the threshold set to 1 minute, if no service person is detected attending to the customer after 1 minute has elapsed, the response is judged to be overdue and a corresponding score adjustment is made in the third scoring data. It should be noted that, after detecting the keyword information, the robot may also detect whether there is a service person near the client who issued it; if so, the robot may further identify the specific service person through the facial image information and add a score for that person in the third scoring data. The specific scoring standard is not within the improvement of this embodiment and is not described again here.
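The timeout rule can be sketched as follows (the 1-minute threshold comes from the example above; the score deltas are assumptions):

```python
RESPONSE_THRESHOLD_SECONDS = 60  # the 1-minute example threshold from the text

def third_score_delta(response_seconds):
    """+1 for an in-time response, -1 when the response is overdue."""
    return 1 if response_seconds <= RESPONSE_THRESHOLD_SECONDS else -1
```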
In one embodiment, step S250 can determine the client's emotion from the recognized action data. For example, if the deep convolutional neural network identifies that the client is stamping a foot, the client can be judged to be impatient, which is probably caused by low service efficiency, and a certain score adjustment is made for the corresponding service person in the fourth scoring data.
In an embodiment, before determining the client's emotion, it may also be determined whether the client is on a call with another person, for example by recognizing that the client is holding a mobile phone; this judgment can be made from the facial image recognition and action recognition data, which is not described again here.
Based on the above embodiment, the first score data, the second score data, the third score data, and the fourth score data preferably include a score account corresponding to a service person or a service department, and the server may complete adjustment of the score according to the corresponding score account after receiving the data, and may adopt any data statistics method in the prior art, which is not described herein again.
Referring to fig. 3, the following explains the deep convolutional neural network involved in the embodiment of the present application in a specific embodiment:
in one embodiment, the deep convolutional neural network is a capsule network, and comprises an encoder structure formed by the convolutional neural network, a convolutional layer for extracting preliminary features, a main capsule layer for receiving basic feature generation feature combinations detected by the convolutional layer, a digital capsule layer for receiving high-level features, and three decoding structures formed by fully-connected layers. The data to be trained by the coding layer comprises: the voice information, face image information, etc. are converted into feature matrices of low dimension and the matrices are passed to the main capsule layer for feature combination, high-level features are stored by the digital capsule layer and feature extraction is optimized by means of dynamic routing during this period.
Based on the above embodiment, the capsule network preferably updates its weights by dynamic routing: the output vector of a higher-layer capsule is multiplied (dot product) with the output vector of a lower-layer capsule, and the result is added to the temporary weight parameter to obtain a new weight parameter. The dot product captures the directional difference between the higher-layer and lower-layer capsules, so the updated values differ: if the higher-layer output vector and the lower-layer prediction vector are similar, i.e. the angle between them is less than ninety degrees, the weight is scaled up; likewise, if they are dissimilar, the weight is scaled down. After iteration, a set of routing weight coefficients is obtained, which optimizes the feature extraction process.
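The update rule described above matches routing-by-agreement as commonly formulated for capsule networks; a minimal NumPy sketch under that assumption (not necessarily the application's exact formulation) is:

```python
import numpy as np

def squash(s, axis=-1):
    # Capsule non-linearity: preserves direction, maps length into [0, 1).
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + 1e-9)

def dynamic_routing(u_hat, iterations=3):
    """u_hat: (lower_caps, upper_caps, dim) prediction vectors.
    Returns the (upper_caps, dim) output vectors after routing."""
    b = np.zeros(u_hat.shape[:2])  # temporary routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        v = squash((c[..., None] * u_hat).sum(axis=0))        # upper-capsule outputs
        b = b + np.einsum('lud,ud->lu', u_hat, v)             # dot-product agreement
    return v
```

Note how the dot product in the last line scales a routing weight up when a lower-layer prediction agrees in direction with the higher-layer output, and down otherwise, exactly as the text describes.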
The decoder then accepts the output vector of the correct digit capsule and learns to decode it into an image of the target instance. The decoder acts as a regularizer: it takes the correct digit capsule's output as input and reconstructs an image of standard pixel size, with the Euclidean distance between the reconstructed image and the input image as the loss function. This forces the capsules to learn features useful for reconstructing the original image; the closer the reconstruction is to the input image, the better the effect. All data transmitted between capsules are vectors, which retain the direction and probability information of the original features — something ordinary convolutional layers cannot process.
Another embodiment of the present application further provides a method for evaluating service quality, as shown in fig. 4, fig. 4 is a schematic diagram of another embodiment of a refinement procedure of step S220 in fig. 2, where the step S220 includes, but is not limited to:
and step S410, the robot acquires the service evaluation score of the client and sends the service evaluation score to the server.
In an embodiment, since the robot may also be provided with a display screen, the robot may interact with the customer through the display screen to obtain the service evaluation of the customer, and a customer evaluation method commonly used in the prior art may be adopted, which is not described herein again.
Referring to fig. 5, another embodiment of the present application further provides a method for evaluating quality of service, including but not limited to the following steps:
in step S510, the robot obtains the service times of the service staff according to the facial image information and the voice information in the non-service area, and sends the service times to the server.
In an embodiment, step S510 may determine whether the service person and the client are communicating according to the distance between them, the voice information, and the facial expression information. If so, it is determined that the service person is serving the client, the service count is incremented by one, and a certain score adjustment is made for the service person at the server, which is not described again here.
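A hedged sketch of the counting rule in step S510 (the proximity threshold and signal names are illustrative assumptions, not taken from the application):

```python
def update_service_count(counts, person_id, distance_m, speech_detected,
                         max_distance_m=1.5):
    """Increment a service person's count when they are judged to be
    communicating with a client (close together and speech detected)."""
    if speech_detected and distance_m <= max_distance_m:
        counts[person_id] = counts.get(person_id, 0) + 1
    return counts

counts = update_service_count({}, "staff_01", 1.0, True)  # one service counted
```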
Another embodiment of the present application further provides a method for evaluating the service quality, as shown in fig. 6, fig. 6 is a schematic diagram of another embodiment of a refinement process of step S240 in fig. 2, where step S240 includes, but is not limited to:
step S610, the robot acquires the position information, the environment information and the distance information of the client and carries out three-dimensional reconstruction to obtain an obstacle avoidance route;
and S620, the robot moves according to the obstacle avoidance route, reads corresponding operation information from the server according to the keyword information and executes the operation information.
In an embodiment, after the server sends the prompt message to the service staff, this embodiment preferably controls the robot to move to the area where the client is located, so that service is provided to the client before the service staff arrives, improving the client experience. The specific movement control method may be any control method in the prior art and is not described herein again.
In an embodiment, the keyword information may be preset in the server, for example, a service item name that can be provided, and a corresponding customer service operation may be read according to the service item name, for example, a corresponding form is provided to the customer through a display screen for filling, and the like.
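The keyword-to-operation lookup can be illustrated with a small sketch; the service item names and operation identifiers below are hypothetical examples, not values defined by the patent:

```python
# Hypothetical mapping from preset service item names (keywords)
# to customer-service operations stored on the server.
OPERATIONS = {
    "account opening": "show_account_form",
    "loss report": "show_loss_report_form",
}

def lookup_operation(utterance, operations=OPERATIONS):
    """Return the first operation whose keyword occurs in the
    customer's utterance, or None when nothing matches."""
    for keyword, op in operations.items():
        if keyword in utterance:
            return op
    return None
```

On a match, the robot would read the corresponding operation from the server and, for example, present the relevant form on its display screen.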
In an embodiment, the three-dimensional reconstruction may adopt any three-dimensional reconstruction method in the prior art, which is not described herein again.
Referring to fig. 7, in another embodiment of the present application, there is further provided a method for evaluating quality of service, including but not limited to the following steps:
step S710, the robot acquires the position information of the service personnel in an idle state in the non-service area;
s720, the robot carries out three-dimensional reconstruction according to the position information, the environment information and the distance information of the service personnel in the idle state to obtain an obstacle avoidance route;
and step S730, the robot moves according to the obstacle avoidance route, reads the test interaction information, acquires the test result of the service personnel in the idle state, and sends the test result to the server.
In one embodiment, the idle state may be a state in which the service person has remained in one position for a period of time without any interaction with a customer. In this case, a test interaction is performed on the service person, which helps assess the professional level of the service person as a reference score for the service quality evaluation. It can be understood that, after moving to the area where the service person is located, the robot can display preset test interaction information, such as a question-and-answer sheet or a customer-service scenario simulation, on its display screen.
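The idle-state criterion (staying in one position for a period with no customer interaction) can be sketched as follows; the window length and drift threshold are illustrative assumptions:

```python
def is_idle(positions, interactions, window_s=300.0, max_drift_m=1.0):
    """positions: list of (timestamp_s, x, y) samples for one staff
    member; interactions: timestamps of detected customer contacts.
    Idle = stayed within max_drift_m of the window's first sample
    for the whole window AND had no interaction inside the window."""
    if not positions:
        return False
    t_end = positions[-1][0]
    t_start = t_end - window_s
    recent = [(t, x, y) for t, x, y in positions if t >= t_start]
    _, x0, y0 = recent[0]
    settled = all(((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 <= max_drift_m
                  for _, x, y in recent)
    quiet = all(t < t_start for t in interactions)
    return settled and quiet
```

When this returns True, the robot would navigate to the staff member and start the test interaction.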
Referring to fig. 8, another embodiment of the present application further provides a method for evaluating service quality, further including, but not limited to, the following steps:
step S810, the robot acquires incremental learning data of the deep convolutional neural network according to the voice emotion data and the action data and sends the incremental learning data to a server;
step S820, the server trains a new category in the deep convolutional neural network based on the attention attracting network node and the cyclic back propagation of the meta learning according to the incremental learning data;
in step S830, the server synchronizes the updated deep convolutional neural network to the robot.
In an embodiment, because the actions and language habits of different clients differ, the deep convolutional neural network is updated with the speech emotion data and the action data, so that it recognizes more accurately and achieves a better recognition effect. It should be noted that incremental learning can quickly train a new category; other training methods may also be used, which are not described herein again.
The following illustrates steps S810 to S830 with a specific example:
First, a classifier is trained on the inherent classes with conventional supervised learning until a fixed feature expression is learned; this is the pre-training stage. At each training and testing node, a classifier for the new classes is then trained in combination with a meta-learned regularization matrix, and the regularization module of the previous stage is optimized jointly on the newly added classes and the inherent classes, so that the new classifier works alongside the inherent classifier. It should be noted that the pre-training stage requires no extra special operation: given all data and the corresponding class labels of the inherent classes, an inherent-class classifier and its corresponding feature expression are trained to obtain a basic classification model.
The incremental data set D is used for learning the few-shot nodes at this stage. For each learned N-shot, K-way node, K new classes different from the inherent classes are selected each time; each new class contributes N pictures to the support set S and M pictures to the query set Q, so S and Q serve as the training set and verification set used in learning each node. Each node learns a new classifier from the training set S, and the learned parameter W corresponding to that classifier acts only on that node and is called a fast weight. To measure the overall classification effect, the training algorithm is allowed to touch only the support set S of the newly added classes, while the model is verified on a verification set Q formed by combining the newly added classes and the inherent classes.
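The node (episode) construction described above can be sketched as follows, assuming the incremental data set is a mapping from class label to image list; K, N and M correspond to the K new classes, the N support pictures and the M query pictures per class:

```python
import random

def sample_episode(new_class_data, k_way, n_shot, m_query, rng=random):
    """Build one few-shot node: choose K novel classes, put N images
    per class in the support set S and M per class in the query set Q.
    new_class_data maps class label -> list of images."""
    classes = rng.sample(sorted(new_class_data), k_way)
    support, query = [], []
    for label in classes:
        images = rng.sample(new_class_data[label], n_shot + m_query)
        support += [(img, label) for img in images[:n_shot]]
        query += [(img, label) for img in images[n_shot:]]
    return support, query
```

In the full method the verification query set would additionally be mixed with inherent-class examples, which is omitted here for brevity.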
In an embodiment, the regularization constraint of meta-learning may be used during training. Since the overall meta-learning process iteratively repeats the previous stage, a new classifier is trained on each new training set and its performance is verified on the verification set Q. The cross-entropy loss function and an additionally introduced regular term R(θ) are combined as the optimization objective for learning the fast weights W, where θ is the meta-parameter; embedded in the attention attracting network, the objective is as follows:
W^{*}(\theta) = \arg\min_{W}\; \ell_{CE}(W; S) + R(W, \theta)
obtaining an overall optimization objective function. Since the essence of the model parameter W is to optimize prediction on the newly added classes, training and verifying each local node in this way may leave the performance on the inherent classes unguaranteed. To solve this problem of catastrophic forgetting of the inherent classes, this embodiment uses an attention attracting network as the optimized regular term R: the information features of the inherent classes are encoded and then stored as constant parameters for the subsequent parameterization, and the learning parameter θ is minimized through the entire attention attracting network as follows:
\theta^{*} = \arg\min_{\theta}\; \mathbb{E}\big[\, \ell_{CE}\big(W^{*}(\theta); Q\big) \,\big]
when the predicted class is
\hat{y} = \arg\max_{k}\, p\big(y = k \mid x;\; W^{*}(\theta)\big)
obtaining the minimized parameter θ. The regular term R(W, θ) is the core of the attention attracting network, and its formula is as follows:
R(W, \theta) = \sum_{k} \big(W_{k} - u_{k}\big)^{\top} \, \mathrm{diag}\big(e^{\gamma}\big) \, \big(W_{k} - u_{k}\big)
wherein
u_{k} = \sum_{k'} h_{k,k'} \, U_{k'} + U_{0}
that is, the attraction part of the attention attracting network, where W is the weight parameter described above. By adding a bias term to the sum of squared Mahalanobis distances, the regular part acquires learning information from the new categories while avoiding the problem of catastrophic forgetting.
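The regular term R(W, θ) described above can be sketched numerically. This assumes the standard diagonal-metric form of the squared Mahalanobis distance, with diag(exp(γ)) as the learned metric; the attractors u_k are taken here as given inputs rather than computed from the attention mechanism:

```python
import math

def attractor_regularizer(fast_weights, attractors, log_scale):
    """R(W, theta): sum over novel classes of the squared Mahalanobis
    distance between each fast-weight vector W_k and its attractor u_k,
    under a learned diagonal metric diag(exp(gamma)).  All values are
    illustrative; in the full method the attractors come from the
    attention network over the inherent classes."""
    total = 0.0
    for w, u in zip(fast_weights, attractors):
        total += sum(math.exp(g) * (wi - ui) ** 2
                     for wi, ui, g in zip(w, u, log_scale))
    return total
```

During node training, this value would be added to the cross-entropy loss on the support set, pulling each new classifier toward its attractor and thereby protecting the inherent classes.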
Referring to fig. 9, another embodiment of the present application further provides a service quality evaluation system 900, which includes a robot group 910 and a server 920, where the robot group is composed of a plurality of robots as described above, and the robot group and the server cooperate to perform the service quality evaluation method as described above.
The following illustrates a service quality evaluation system according to this embodiment by using a specific example:
In an embodiment, the robot group includes four robots. After the four robots are started in a working area, the server 920 divides the working area into four working sub-areas and allocates them, according to the proximity of each robot to each sub-area, to a first robot 911, a second robot 912, a third robot 913 and a fourth robot 914. The first robot 911 monitors window service personnel and customers at random among the office windows to obtain first scoring data. The second robot 912 moves through the working area along an obstacle avoidance route and detects garbage on the ground and stains on tables and chairs to obtain second scoring data; meanwhile, when the second robot 912 detects that a customer moves from a service area to a non-service area, it can also move to the customer and interact through its display screen to obtain the customer's rating of the service process. The third robot 913 waits in a non-service area; after detecting keyword information, for example common question words from a customer, it acquires the customer's location, sends a prompt message through the server 920 to a service person in an idle state, and records the response time of the service person to generate third scoring data. To improve the customer experience, after sending the prompt message, the third robot 913 moves to the customer's location and interacts through its display screen; the third robot 913 may also count the number of times a service person receives or guides customers in the non-service area, for example through voice information recognition, and send the count to the server 920 for score adjustment. The fourth robot 914 monitors queued customers in the service area and, upon detecting abnormal action data from a customer, such as walking back and forth or pacing, generates fourth scoring data and sends it to the server 920 so as to check the working efficiency of the service personnel. In addition, the fourth robot 914 can count the number of times a service person receives or guides customers in the service area in the same way as the third robot 913, which is not described herein again. Moreover, after detecting an idle service person, any robot in the group may move to that service person, conduct a service-knowledge question-and-answer interaction, and send the question-and-answer score to the server 920 for score adjustment.
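The proximity-based allocation of robots to working sub-areas can be sketched as a greedy nearest-robot assignment; a production system might instead solve a proper assignment problem, and all identifiers and coordinates here are illustrative:

```python
def assign_subareas(robot_positions, subarea_centers):
    """Greedy proximity assignment: each sub-area is given the
    nearest still-unassigned robot.  robot_positions maps robot id
    to (x, y); subarea_centers maps area id to its (x, y) center."""
    remaining = dict(robot_positions)
    assignment = {}
    for area_id, (cx, cy) in subarea_centers.items():
        nearest = min(remaining,
                      key=lambda r: (remaining[r][0] - cx) ** 2
                                    + (remaining[r][1] - cy) ** 2)
        assignment[area_id] = nearest
        del remaining[nearest]
    return assignment
```

With four robots and four sub-areas this reproduces the allocation described above; an optimal variant could use the Hungarian algorithm.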
Referring to fig. 10, another embodiment of the present application further provides a quality of service evaluation apparatus 1000, including: a memory 1010, a control processor 1020, and a computer program stored on the memory 1010 and executable on the control processor 1020. When the control processor executes the computer program, the method for evaluating the quality of service in any of the above embodiments is implemented, for example, the above-described method steps S210 to S260 in fig. 2, method step S410 in fig. 4, method step S510 in fig. 5, method steps S610 to S620 in fig. 6, method steps S710 to S730 in fig. 7, and method steps S810 to S830 in fig. 8.
The control processor 1020 and the memory 1010 may be connected by a bus or other means, such as by a bus in fig. 10.
The memory 1010, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. Further, the memory 1010 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1010 may optionally include memory located remotely from the control processor 1020, which may be connected to the quality of service assessment apparatus 1000 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The above-described embodiments of the apparatus are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, another embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, which are executed by one or more control processors, for example, by one of the control processors 1020 in fig. 10, and may cause the one or more control processors 1020 to execute the method for evaluating the quality of service in the method embodiment, for example, to execute the above-described method steps S210 to S260 in fig. 2, the method step S410 in fig. 4, the method step S510 in fig. 5, the method steps S610 to S620 in fig. 6, the method steps S710 to S730 in fig. 7, and the method steps S810 to S830 in fig. 8.
It should be noted that, since the apparatus for executing the service quality evaluation method in the present embodiment is based on the same inventive concept as the service quality evaluation method described above, the corresponding contents in the method embodiment are also applicable to the present apparatus embodiment, and are not described in detail herein.
Through the above description of the embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a general hardware platform. Those skilled in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
While the preferred embodiments of the present invention have been described, the present invention is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and such equivalent modifications or substitutions are included in the scope of the present invention defined by the claims.

Claims (9)

1. A service quality evaluation method is applied to a robot, and the robot comprises the following steps: the binocular camera is used for acquiring facial image information, action information, environment information, client position information and distance information; the voice receiving module is used for receiving voice information; the action analysis module is used for acquiring action data according to the action information and the acquired deep convolutional neural network; the voice emotion analysis module is used for acquiring voice emotion data according to the voice information and the acquired deep convolutional neural network; the image emotion analysis module is used for acquiring image emotion data according to the facial image information and the acquired deep convolutional neural network; the GPS module is used for carrying out three-dimensional reconstruction according to the client position information, the environment information and the distance information and acquiring an obstacle avoidance route, and is characterized in that the service quality evaluation method comprises at least one of the following steps:
the robot reads a pre-trained deep convolutional neural network from a server;
the robot acquires voice emotion data and image emotion data of a client and a service person in a service area, inputs the voice emotion data and the image emotion data into the deep convolutional neural network, identifies an active state or a passive state through the deep convolutional neural network, generates first scoring data according to the identified active state or the passive state, and sends the first scoring data to the server;
the robot acquires an environment image through the binocular camera, inputs the environment image into the deep convolutional neural network, compares the environment image with a preset default environment image through the deep convolutional neural network to obtain an obstacle identification result, generates second grading data according to the obstacle identification result and sends the second grading data to the server;
after detecting that voice information of a client in a non-service area comprises preset keyword information, the robot sends prompt information to a service worker in an idle state through the server and obtains response time, and generates third grading data according to the response time and a preset time threshold and sends the third grading data to the server;
the robot acquires action data of a client in a non-service area, inputs the action data into the deep convolutional neural network, identifies client emotion information, generates fourth scoring data according to the client emotion information and sends the fourth scoring data to the server;
and the server generates service quality evaluation data according to the first grading data, the second grading data, the third grading data and the fourth grading data.
2. The method of claim 1, wherein if the robot detects that the customer moves from a service area to a non-service area, the method further comprises: and acquiring the service evaluation score of the client and sending the service evaluation score to a server.
3. The method of claim 1, further comprising: and obtaining the service times of service personnel according to the facial image information and the voice information in the non-service area, and sending the service times to the server.
4. The method for evaluating the service quality according to claim 1, wherein after the server sends the prompt message to the service personnel in the idle state and obtains the response time, the method further comprises the following steps:
the robot acquires client position information, environment information and distance information and carries out three-dimensional reconstruction to obtain an obstacle avoidance route;
and the robot moves according to the obstacle avoidance route, reads corresponding operation information from a server according to the keyword information and executes the operation information.
5. The method of claim 1, further comprising:
the robot acquires the position information of service personnel in an idle state in a non-service area;
the robot carries out three-dimensional reconstruction according to the position information of the service personnel in the idle state, the environment information and the distance information to obtain an obstacle avoidance route;
and the robot moves according to the obstacle avoidance route, reads the test interaction information, acquires the test result of the service personnel in the idle state, and sends the test result to the server.
6. The method of claim 1, further comprising:
the robot acquires incremental learning data of the deep convolutional neural network according to the voice emotion data and the action data and sends the incremental learning data to a server;
the server trains a new category in the deep convolutional neural network according to the incremental learning data based on the attention attracting network node and the cyclic back propagation of the meta learning;
the server synchronizes the updated deep convolutional neural network into the robot.
7. A service quality evaluation system characterized by: the method comprises a robot group and a server, wherein the robot group consists of a plurality of robots as claimed in claim 1, and the robot group and the server cooperate to execute the service quality evaluation method as claimed in any one of claims 1 to 6.
8. A quality of service evaluation apparatus comprising at least one control processor and a memory for communicative connection with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform a quality of service assessment method as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that: the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the method of quality of service evaluation of any one of claims 1 to 6.
CN202010301208.2A 2020-04-16 2020-04-16 Robot, service quality evaluation method and system Active CN111563663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010301208.2A CN111563663B (en) 2020-04-16 2020-04-16 Robot, service quality evaluation method and system

Publications (2)

Publication Number Publication Date
CN111563663A CN111563663A (en) 2020-08-21
CN111563663B true CN111563663B (en) 2023-03-21

Family

ID=72073125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010301208.2A Active CN111563663B (en) 2020-04-16 2020-04-16 Robot, service quality evaluation method and system

Country Status (1)

Country Link
CN (1) CN111563663B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112057089A (en) * 2020-08-31 2020-12-11 五邑大学 Emotion recognition method, emotion recognition device and storage medium
CN112016938A (en) * 2020-09-01 2020-12-01 中国银行股份有限公司 Interaction method and device of robot, electronic equipment and computer storage medium
CN114371893B (en) * 2020-10-14 2024-03-01 腾讯科技(深圳)有限公司 Information reminding method and related equipment
CN112308211B (en) * 2020-10-29 2024-03-08 中科(厦门)数据智能研究院 Domain increment method based on meta learning
CN113256154A (en) * 2021-06-16 2021-08-13 中国银行股份有限公司 Customer service satisfaction evaluation method and device, storage medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105049249A (en) * 2015-07-09 2015-11-11 中山大学 Scoring method and system of remote visual conversation services
WO2018036276A1 (en) * 2016-08-22 2018-03-01 平安科技(深圳)有限公司 Image quality detection method, device, server and storage medium
CN110363154A (en) * 2019-07-17 2019-10-22 安徽航天信息有限公司 A kind of service quality examining method and system based on Emotion identification



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant