CN113067952A - Man-machine cooperation non-inductive control method and device for multiple robots - Google Patents

Man-machine cooperation non-inductive control method and device for multiple robots

Info

Publication number
CN113067952A
Authority
CN
China
Prior art keywords
response
customer service
robots
robot
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110345606.9A
Other languages
Chinese (zh)
Other versions
CN113067952B (en)
Inventor
綦欣
张文禹
张家恒
张玮莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110345606.9A priority Critical patent/CN113067952B/en
Publication of CN113067952A publication Critical patent/CN113067952A/en
Application granted granted Critical
Publication of CN113067952B publication Critical patent/CN113067952B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/42: Systems providing special services or facilities to subscribers
    • H04M 3/42136: Administration or customisation of services
    • H04M 3/42187: Lines and connections with preferential service
    • H04M 3/50: Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M 3/51: Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 3/5141: Details of processing calls and other types of contacts in an unified manner
    • H04M 3/5166: Centralised call answering arrangements requiring operator intervention, in combination with interactive voice response systems or voice portals, e.g. as front-ends
    • H04M 3/527: Centralised call answering arrangements not requiring operator intervention
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00: Speech synthesis; Text to speech systems
    • G10L 13/02: Methods for producing synthetic speech; Speech synthesisers
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a man-machine cooperation non-inductive (i.e., imperceptible to the customer) control method and device for multiple robots. It relates to the technical field of artificial intelligence and can be applied to the field of financial services. The method comprises: acquiring response data sent by a customer service staff terminal; performing voice conversion or speech synthesis on the response data to obtain a response voice; and playing the response voice to the current customer. When manual assistance is needed, the response is voice-converted if the customer service agent answers by speaking, and synthesized into speech if the agent answers by text, so that the robot always answers the customer in the same voice and the human-machine cooperation remains imperceptible. When a customer's question exceeds a certain complexity, the system actively prompts the customer service agent to intervene manually according to an intervention coefficient.

Description

Man-machine cooperation non-inductive control method and device for multiple robots
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a man-machine cooperation non-inductive control method and device for multiple robots.
Background
Existing remote manual control systems for robots on the market allow customer service staff to control a robot's voice and action commands, but most of them depend too heavily on the operator's own ability: the robot is driven to answer questions and move mainly by the agent's subjective thinking, which demands high operating proficiency, incurs high manual-operation cost, and limits the system's reaction speed to that of a human operator. In addition, when a customer service agent talks to a customer directly through the robot, the cut-in is obvious: the customer has to jump abruptly from the robot's dialogue mode to a voice conversation with the agent, the transition feels jarring, and the user experience is poor.
Disclosure of Invention
The present invention provides a method and an apparatus for human-machine cooperative non-inductive control of multiple robots, an electronic device, and a computer-readable storage medium, which can solve at least some of the problems in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, a human-machine cooperative non-inductive control method for a plurality of robots is provided, which includes:
acquiring response data sent by a customer service staff terminal;
carrying out voice sound change or voice synthesis on the response data to obtain response voice;
and playing the response voice to the current client.
Further, the man-machine cooperation non-inductive control method for the plurality of robots further comprises the following steps:
and according to the question-answer data of the current client, performing AI calculation through an AI auxiliary question-answer system to obtain a reference response result and feeding the reference response result back to the customer service personnel terminal.
Further, the man-machine cooperation non-inductive control method for the plurality of robots further comprises the following steps:
acquiring real-time interactive data uploaded by a plurality of robots;
analyzing the number of robots currently in a service state and the service characteristic parameters of each robot according to the real-time interactive data and the historical interactive data;
judging the corresponding response level according to the service characteristic parameters of each robot;
and sending a man-machine cooperative control request to the customer service personnel terminal according to the number of the robots currently in the service state and the response level of each robot.
Further, the man-machine cooperation non-inductive control method for the plurality of robots further comprises the following steps:
acquiring an artificial cooperative control response instruction fed back by the customer service personnel terminal;
and if the customer service personnel participate in the manual cooperative control, sending the interactive data with the current customer to the customer service personnel terminal.
Further, the service characteristic parameters include: business urgency, business complexity, business priority, question-answer confidence, question-answer interaction frequency, and customer traffic.
Further, the determining the corresponding response level according to the service characteristic parameters of each robot includes:
and carrying out weighted calculation on the service emergency degree, the service complexity, the service priority, the question-answer confidence coefficient, the question-answer interaction frequency and the customer service volume according to preset coefficients to obtain corresponding response levels.
In a second aspect, there is provided a human-machine cooperative non-sensory control device for a plurality of robots, comprising:
the response data acquisition module is used for acquiring response data sent by the customer service personnel terminal;
the response voice integration module is used for carrying out voice sound change or voice synthesis on the response data to obtain response voice;
and the response voice playing module plays the response voice to the current client.
Further, the man-machine cooperation non-inductive control device for a plurality of robots further comprises:
and the AI auxiliary question-answering module is used for carrying out AI calculation through the AI auxiliary question-answering system according to the question-answering data of the current client to obtain a reference response result and feeding the reference response result back to the customer service staff terminal.
In a third aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above-mentioned human-machine cooperative non-sensory control method for multiple robots when executing the program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the steps of the above-described human-machine cooperative non-sensory control method for a plurality of robots.
The invention provides a man-machine cooperation non-inductive control method and device for multiple robots, which can be applied to the field of financial services and comprise: acquiring response data sent by a customer service staff terminal; performing voice conversion or speech synthesis on the response data to obtain a response voice; and playing the response voice to the current customer. When manual assistance is needed, the response is voice-converted if the customer service agent answers by speaking, and synthesized into speech if the agent answers by text, so that the robot always answers the customer in the same voice and the human-machine cooperation remains imperceptible. When a customer's question exceeds a certain complexity, the system actively prompts the customer service agent to intervene manually according to an intervention coefficient.
In order to make the aforementioned and other objects, features and advantages of the invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. In the drawings:
FIG. 1 is a schematic diagram of an application scenario architecture in an embodiment of the present invention;
FIG. 2 is a first flowchart illustrating a cooperative human-computer sensorless control method for multiple robots according to an embodiment of the present invention;
FIG. 3 is a second flowchart illustrating a cooperative non-inductive human-computer control method for multiple robots according to an embodiment of the present invention;
FIG. 4 is a third schematic flow chart of a cooperative human-computer sensorless control method for multiple robots according to an embodiment of the present invention;
FIG. 5 is a fourth flowchart illustrating a cooperative human-computer sensorless control method for multiple robots according to an embodiment of the present invention;
FIG. 6 illustrates a principle of human-machine cooperative control in an embodiment of the present invention;
FIG. 7 is a demo diagram of the mobile-terminal version of the robot monitoring platform system according to an embodiment of the present invention;
FIG. 8 is a block diagram showing the configuration of a human-machine cooperative sensorless control apparatus for a plurality of robots according to an embodiment of the present invention;
fig. 9 is a block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of this application and the above-described drawings, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides a man-machine cooperative non-inductive control method for multiple robots, characterized by an efficient, low-latency remote cooperation and human-machine hybrid intelligence design. Technically it involves human-computer interaction engineering, AI (artificial intelligence), 5G, real-time audio and video transmission, low-latency messaging, terminal security control, and the like. In terms of design, it is the first to propose empowering remote operators with AI capability and, combined with a touch-oriented UI for human-computer interaction, building a hybrid-intelligence remote human-machine control platform. It allows an operator to control robots to carry out voice, business, and motion interactions with customers at a low manual-operation cost.
FIG. 1 is a schematic diagram of an application scenario architecture in an embodiment of the present invention. As shown in FIG. 1, a plurality of robots B1-B4 (or even more) provide services to customers. Customer 1 interacts with robot B1 to complete business processing such as consultation and transaction handling, customer 2 interacts with robot B2, customer 3 interacts with robot B3, and robot B4 is currently idle. The server S1 communicates with each robot wirelessly, for example over Bluetooth, WIFI, or mobile wireless communication. After the server receives data from each robot, it carries out the dynamic adjustment method for the control mode for cooperatively controlling the plurality of robots provided by the embodiment of the invention: based on the number of robots currently in the service state and the response level derived from each robot's service characteristic parameters, it automatically adjusts the control picture corresponding to each robot on the customer service operation interface of the control center. The control interface is presented on a control-center display M1, which may be a touch screen, and the customer service staff monitor and control each robot through this display. For example, when the business is simple and the robot can match an answer to the customer's input with high accuracy, no manual participation is needed and the robot's control picture need not be shown on the control-center interface; when the business a customer is currently handling through a robot is highly complex and requires the agent's participation, that robot's control picture can be enlarged; and when the business has a higher priority and merely needs the agent's attention, the control picture is shown at a moderate size. This provides an intelligent auxiliary means for customer service staff and helps one agent control multiple devices in parallel, as sketched below.
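For illustration only, the following minimal sketch shows how such a layout rule might look. The level bands, the display sizes, and the function name layout_for are assumptions introduced for this example and are not part of the claimed method.

```python
# Illustrative only: map a robot's response level to how its control picture is
# shown on the control-center display M1 (assumed bands and labels).

def layout_for(level: float) -> str:
    """Return how this robot's control picture should be presented."""
    if level < 0.3:
        return "hidden"    # simple business handled well by the AI: no picture needed
    if level < 0.7:
        return "normal"    # higher-priority business: the agent should keep an eye on it
    return "enlarged"      # complex business: agent participation is required
```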
FIG. 2 is a first flowchart illustrating a cooperative human-computer sensorless control method for multiple robots according to an embodiment of the present invention; as shown in fig. 2, the man-machine cooperative non-sensory control method for multiple robots may be executed by the server S1, or may be executed by a robot side when the hardware configuration of the robot is strong, and specifically may include the following:
step S100: acquiring response data sent by a customer service staff terminal;
specifically, when entering the human-computer cooperative control, response data for a current problem of a client sent by a customer service staff terminal is acquired, wherein the response data can be in a voice form or a text form, and of course, the response data can also be in a text icon form.
Step S200: carrying out voice sound change or voice synthesis on the response data to obtain response voice;
specifically, when the questioning robot of the client can accurately respond, the robot performs response processing, but when the questioning of the client exceeds a certain complexity, the customer service needs to be actively prompted to perform manual intervention according to an intervention coefficient, and after manual intervention is performed on customer service personnel.
Step S300: and playing the response voice to the current client.
Specifically, when the executing entity is the server S1, the response voice is transmitted to the robot and played to the current customer; if the executing entity is the robot, the response voice is played to the current customer directly through the robot's speaker.
In a preferred embodiment, the method for human-machine cooperative non-sensory control of multiple robots may further include:
step I: acquiring the acquisition data of a current client;
specifically, collecting data includes: face images, sound information, the position of the current customer relative to the virtual digital person, and the like;
step II: analyzing the characteristics of the client according to the collected data;
specifically, model data of the current client is obtained based on a big data analysis technology according to the face image, the height information, the sound information and the position of the current client relative to the virtual digital person; and then, analyzing the potential business requirements and the language types of the current customers based on the big data analysis technology of the marketing system according to the model data.
The model data includes: the current customer's gender, age, emotion, identity, height, body type, language type, speaking volume, and position relative to the virtual digital person, and the like.
Step III: and generating personalized sound effect data and a service model according to the client characteristics.
Specifically, the service model is generated from the potential business needs, and the sound effect data is generated from the language type and the inferred preferences. The sound effect may cover the dialect or language type and the voice timbre; for example, a gentle female voice may be used for male customers.
Specifically, product recommendations may be made based on the potential business needs of the customer.
In this way different customers are analyzed to generate personalized language and service models, so that each customer receives an individually tailored service.
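For illustration only, the following sketch of steps I-III assumes the model data is reduced to the fields below and that simple rules map it to sound-effect data and a service model. All field names, rules, and values are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CustomerProfile:            # the "model data" of the current customer
    gender: str
    age: int
    emotion: str
    language_type: str            # e.g. "Mandarin", "Cantonese", "Shaanxi dialect"
    distance_m: float             # position relative to the virtual digital person

def choose_sound_effect(profile: CustomerProfile) -> Dict[str, object]:
    """Pick the language pack, timbre, and loudness used to answer this customer."""
    timbre = "gentle_female" if profile.gender == "male" else "neutral"
    return {
        "language_pack": profile.language_type,   # answer in the customer's own dialect
        "timbre": timbre,
        "loudness": min(1.0, 0.5 + 0.1 * profile.distance_m),
    }

def choose_service_model(profile: CustomerProfile, potential_needs: List[str]) -> Dict[str, object]:
    """Bundle the inferred potential business needs into a per-customer service model."""
    return {"recommended_products": potential_needs, "tone": profile.emotion}
```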
It should be noted that when a customer transacts business in a dialect, such as Cantonese or the Shaanxi dialect, the service side can improve the customer's sense of familiarity and overall experience by using the same dialect; the robot then answers by voice using the corresponding dialect language pack. After a customer service agent takes over, the agent may not speak that dialect, or the agent and the robot may answer in different tones of voice, which would feel very abrupt to the customer. Therefore, when the agent answers autonomously, the agent's response is voice-processed into the same voice and tone as the robot's, giving the customer a smooth, uninterrupted experience.
In an alternative embodiment, referring to fig. 3, the method for controlling multiple robots cooperatively and noninductively may further include:
step S400: and according to the question-answer data of the current client, performing AI calculation through an AI auxiliary question-answer system to obtain a reference response result and feeding the reference response result back to the customer service personnel terminal.
Specifically, the answer computed by the AI can be sent quickly through the AI-assisted question-answer system, realizing hybrid intelligent output: AI-assisted suggestions are provided to the agent in parallel, reducing the agent's workload and difficulty.
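A sketch of step S400 under assumed interfaces follows: ai_qa.answer() stands in for the AI-assisted question-answer system and agent_terminal.push() for delivery to the customer service staff terminal. Neither name comes from the patent, and the 0.8 highlight threshold is also an assumption.

```python
def assist_agent(question: str, ai_qa, agent_terminal) -> None:
    """Compute a reference answer for the customer's question and feed it back to the agent."""
    reference = ai_qa.answer(question)   # assumed to return {"text": ..., "confidence": ...}
    agent_terminal.push({
        "question": question,
        "suggested_answer": reference["text"],
        "confidence": reference["confidence"],
        # High-confidence suggestions can be highlighted so the agent can send
        # them with one tap instead of typing a reply.
        "highlight": reference["confidence"] >= 0.8,
    })
```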
In an alternative embodiment, referring to fig. 4, the method for controlling multiple robots cooperatively and noninductively may further include:
step S500: acquiring real-time interactive data uploaded by a plurality of robots;
specifically, the robot in the service state uploads the business interaction data with the client, while the robot in the idle state does not upload the data or uploads the response data representing that the robot is in the online state.
Step S600: analyzing the number of robots currently in a service state and the service characteristic parameters of each robot according to the real-time interactive data and the historical interactive data;
specifically, the number of robots currently in a service state can be judged according to the real-time interactive data; and the characteristic parameters of the current business handling of the client can be analyzed according to the real-time interaction data and the historical interaction data, such as: the service emergency degree, the service complexity, the service priority, the question-answer confidence degree, the question-answer interaction frequency, the customer service volume and the like.
Step S700: judging the corresponding response level according to the service characteristic parameters of each robot;
specifically, by integrating a plurality of parameters and judging the response level of each robot, various factors can be considered, and overall planning can be realized. The response level is used for representing the degree of manual participation of customer service personnel required by the current business processed by each robot, such as: and when the service priority is high, the service is complex, the question and answer confidence is low, and customer service personnel are urgently needed to participate, the response level is high, and the corresponding response level can be obtained by performing weighted calculation on the service emergency degree, the service complexity, the service priority, the question and answer confidence, the question and answer interaction frequency and the customer service volume according to preset coefficients.
Step S800: and sending a man-machine cooperative control request to the customer service personnel terminal according to the number of the robots currently in the service state and the response level of each robot.
Specifically, when human-machine cooperative control is requested, not only the response level but also the number of robots in the service state is considered. For example, when only one robot is in the service state, human-machine cooperative control may be requested even though its response level is low; when more robots are in the service state, their response levels are considered together to decide for which robots human-machine cooperation is requested.
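A sketch of the request decision in step S800 follows, assuming the response levels computed above. The single-robot rule and the 0.6 threshold are illustrative assumptions, not values from the patent.

```python
from typing import Dict, List

def robots_needing_takeover(levels: Dict[str, float]) -> List[str]:
    """levels maps robot id -> response level, for robots currently in the service state."""
    if not levels:
        return []
    if len(levels) == 1:
        # With only one robot in service the agent can watch it even at a low
        # response level, so cooperation may be requested regardless.
        return list(levels)
    # With several robots in service, request cooperation only for those whose
    # response level is high enough, the highest level first.
    return sorted((r for r, lv in levels.items() if lv >= 0.6),
                  key=lambda r: levels[r], reverse=True)
```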
It should be noted that the steps S500 to S800 are mainly executed at the server side.
By adopting this technical scheme, multiple remote robots (or intelligent terminal devices) of different models can be controlled simultaneously. It reduces the labor cost of remote cooperation, simplifies the operation of the remote control service, gives a single remote operator a brand-new way to control several devices alone, realizes intelligent dynamic adjustment of the one-to-many remote control mode, and greatly improves the customer's operating experience.
In an alternative embodiment, referring to fig. 5, the method for controlling multiple robots cooperatively and noninductively may further include:
step S900: acquiring an artificial cooperative control response instruction fed back by the customer service personnel terminal;
when the customer service staff terminal receives the human-computer cooperative control response instruction, the customer service staff selects whether to carry out a human-computer cooperative control mode according to the self condition.
When the agent is busy and cannot take over the robot corresponding to the current request, the agent refuses to enter the human-machine cooperative control mode.
Step S1000: and if the customer service personnel participate in the manual cooperative control, sending the interactive data with the current customer to the customer service personnel terminal.
The robot is also connected to a background human-machine cooperative service, which transmits the audio/video stream and the customer's AI question-answer process to the background customer service end in real time.
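For illustration only, the handshake of steps S900-S1000 might look like the sketch below. ask_takeover() and stream_interaction() are placeholder calls used to show the flow, not APIs defined by the patent.

```python
def request_takeover(server, agent_terminal, robot_id: str) -> bool:
    """Ask the agent to take over; on acceptance, stream the live interaction data."""
    reply = agent_terminal.ask_takeover(robot_id)   # the agent accepts or refuses
    if not reply.accepted:                          # e.g. the agent is currently busy
        return False
    # Audio/video and the AI question-answer transcript go to the terminal in real time.
    server.stream_interaction(robot_id, agent_terminal)
    return True
```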
A specific embodiment of the present invention is further described below with reference to FIG. 6 and FIG. 7.
the invention overcomes the defects that the existing manual control system is too dependent on the subjective thinking ability of the human being and has high requirement on the operation proficiency, provides an innovative thought for enabling Artificial Intelligence (AI) to customer service personnel, and creates a system and a device for unified man-machine cooperation (or intelligent terminal equipment) of the remote robot based on hybrid intelligence. The design adopts a pure software scheme, has strong system compatibility and expandability, can be used for robot equipment of different hardware manufacturers, and provides a uniform software interface standard for various different hardware terminals. The design of the hybrid intelligence reduces the remote cooperative manual operation cost, simplifies the operation complexity of remote control service, provides intelligent AI auxiliary service for remote operators, and greatly improves the user operation experience.
The human-machine cooperative control of remote robots becomes a unified operation and management platform for the new "intelligent robot + remote human customer service" cooperative service mode in a smart branch, providing remote customer service staff with a complete set of robot control functions. In existing functional scenarios such as question-answer dialogue between the intelligent robot and the customer, multi-robot intelligent service, light business scheduling, and intelligent machine scheduling, an automatic AI assistance function supports the imperceptible participation of remote agents: the customer's questions and the AI answers are automatically listed in the AI assistance area of the control-center display M1, and the agent can use the AI answers directly. This improves reaction efficiency, reduces operating difficulty, improves the consistency of answers to professional questions, and effectively gives artificial-intelligence capability to the agent. The agent can drive the robot's dialogue with the customer in two ways, voice transmission after voice conversion or text transmission followed by speech synthesis, and the answer computed by the AI can be sent quickly through the AI-assisted question-answer system to realize hybrid intelligent output. The system also allows the agent to safely operate the robot's motion and walking functions and to schedule various device functions.
It is worth explaining that when a customer approaches the robot, the robot can actively greet the customer through face recognition and at the same time start the human-machine cooperation mode, so that a background agent can monitor the interaction between customer and robot in real time without interfering. The robot end and the human-machine takeover customer service end stay in real-time communication; the system can prompt the agent to intervene manually as needed and synchronously provide AI assistance suggestions, and the agent either intervenes actively or semi-automatically as required or keeps the non-interference monitoring strategy.
Specifically, take a service scenario in a bank branch as an example. A customer enters the branch wanting to take a queue number and handle a formal card-loss report. The robot actively greets customers in the branch hall. At this moment the robot is already under the hybrid-intelligence mode: it is connected in real time to the bank's AI-brain real-time semantic analysis system so it can intelligently answer the customer at any time, and it is also connected to the background human-machine cooperative service, which transmits the audio/video and the AI question-answer process with the customer to the background customer service end in real time. While using the customer service end, the agent is in a passive monitoring state: the agent only needs to pay slight attention to the system's real-time information and need not operate actively; only when the customer's questions exceed a certain complexity does the system actively prompt the agent to intervene manually according to the intervention coefficient. After the customer enters the branch, the robot actively asks what business the customer wants to handle. The customer answers that it is a formal loss report. The robot automatically judges the business intent from the AI brain's analysis, tells the customer that a formal loss report requires scanning a code to take a queue number right away, and displays the number-taking QR code on the screen on its head. Meanwhile, the robot sends the customer's questions and its own answers to the customer service end for monitoring. After the customer scans the code and takes a number, the customer asks, because of the waiting time, to handle a balance inquiry as well. The robot judges through AI that it should take the customer to a self-service machine to handle the business, but at that moment the machine happens to be in use by another customer, so the robot tells the customer to wait; it also intelligently judges this situation as unexpected, proposes manual intervention to soothe the customer, and automatically forwards it to the customer service end through the human-machine cooperative service. The customer service end actively reminds the agent to intervene through sound and vibration, and displays the customer's previous dialogue context to the agent. At the same time, the customer service end system automatically recommends entries from the AI auxiliary knowledge base, i.e., it recommends the most likely answers to the agent and highlights the recommended answer with a flashing prompt. The agent can directly select the AI-recommended script to answer without manually typing text. In this example scenario, the AI assistant would recommend an answer such as: "Please wait a moment, the self-service machine is being used by another customer. Would you like a cup of water while you wait?"
If the customer goes on to ask a deeper question, for example whether the robot can analyze recent financial investment trends for him, the agent can actively intervene and speak directly through the human-machine cooperative service with voice conversion applied, manually resolving this kind of complex problem. For example, the agent may outline the rough financial trend and say "I will also ask a professional financial manager to call you and explain in detail", and then send a notification to the financial manager through the system. For the customer, it makes no perceptible difference whether the content was answered manually or by the AI.
Regarding the significance of the hybrid-intelligence design: AI technology has advanced significantly, but its ability to handle Chinese dialogue logic has not yet reached the level of completely free conversation. AI dialogue is mainly used in specific business scenarios, such as deposit and withdrawal transactions or credit card applications. In actual use in a bank hall, the content of the customer's questions to the robot cannot be restricted; it is open, full-scope, full-scenario question and answer, covering current news, small talk, everyday chit-chat, and so on. When the AI handles such free, full-scenario question and answer, the proportion of off-target answers rises. Therefore, manual cooperative control is introduced so that highly complex full-scenario questions are handled with human capability, while the agent is in turn empowered by the AI system. When a customer's question leans toward a specific business scenario or deep domain knowledge, the AI's answer may be better than a human's. Combining human work with AI into hybrid intelligence therefore achieves the greatest intelligent-service efficiency on the basis of current technology.
In human-machine collaboration, the manual and intelligent modes are allocated as follows. During normal operation the robot runs in a purely intelligent service mode; its automatic operation can be monitored manually in the background while the robot controls itself intelligently. When the robot produces inaccurate or wrong answers (for example, when the confidence of two consecutive answers is below 30%), the agent can intervene immediately, correct the error, and answer the question.
In addition, manual hybrid control is necessary in the following cases: 1. the environment is noisy and the microphone cannot pick up sound clearly, so speech recognition cannot convert the audio into text normally, for example at exhibitions, during group visits, or in crowded outdoor areas around branches; 2. the network signal is unstable and the AI latency is too high.
The cooperation logic between intelligent control and manual control is shown in FIG. 6.
a) In the daily operation mode, the robot runs in a purely intelligent mode. The background can monitor the current operation in real time, seeing the customer's interaction pictures and audio, while the robot automatically answers the customer's questions and interacts on its own. When the answer confidence for the customer's questions is below 30% twice in a row, or the same customer asks chit-chat questions three times in a row, the system prompts the agent to intervene manually; the agent can then quickly control the robot through the human-machine cooperation system to speak with the customer and guide the customer toward handling a specific business. For each piece of dialogue, the server judges the intent, the category, and the confidence. The intent is what the user wants to do; the category is business, chit-chat, encyclopedic knowledge, and so on; the confidence is the index of how certain the background brain system is that its answer is correct. Following this logic, the server can introduce manual cooperation at the best moment, to correct wrong answers, actively market business products, or guide complex business handling. A minimal sketch of this trigger logic follows.
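The sketch below assumes one dialogue turn is summarized by its category and its answer confidence. The thresholds follow the text (confidence below 30% twice in a row, or chit-chat three times in a row); the class and method names are introduced only for this example.

```python
class InterventionMonitor:
    LOW_CONFIDENCE = 0.30
    LOW_CONFIDENCE_STREAK = 2
    CHITCHAT_STREAK = 3

    def __init__(self) -> None:
        self.low_conf_run = 0
        self.chitchat_run = 0

    def observe(self, category: str, confidence: float) -> bool:
        """Feed one turn; return True when the agent should be prompted to intervene."""
        self.low_conf_run = self.low_conf_run + 1 if confidence < self.LOW_CONFIDENCE else 0
        self.chitchat_run = self.chitchat_run + 1 if category == "chitchat" else 0
        return (self.low_conf_run >= self.LOW_CONFIDENCE_STREAK
                or self.chitchat_run >= self.CHITCHAT_STREAK)
```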
In some special environments, the robot may stay in the manual takeover mode for a long time, for example at exhibition sites, noisy outdoor locations, or crowded places. In such noisy environments the accuracy of ASR speech recognition drops, so the robot cannot hear the customer clearly and errors in sentence and word-segment recognition occur; the manual mode is then used directly for cooperative operation. Even so, the agent can still work with the robot semi-intelligently by using the AI assistance function.
b) During human-machine cooperation, the customer service staff decide on the cooperation: the robot enters the intelligent mode after it is started, while background staff are allowed to monitor cooperatively at any time. After background staff actively join the human-machine cooperation, they may perform no operation at all and simply watch the remote robot's working state. The server analyzes the user's questions in real time and prompts the agent to intervene based on intent, category, and confidence; the agent can intervene by voice at any time according to the actual situation or after seeing the intervention prompt, while the whole process remains imperceptible to the customer at the front end. The AI assistance function exists throughout the life cycle of the human-machine system: as soon as the human-machine cooperative system is started and connected to a robot, AI assistance starts automatically, analyzes the customer's questions, produces an AI answer text, and marks high-confidence answers in dark red. The agent can use the AI answers to reply quickly at any time. AI assistance also automatically extracts keywords from the customer's questions, actively searches the agent's corpus, matches it automatically, and prompts the corpus content likely to be usable with a flashing blue highlight. In other words, the agent sees both the AI answer and whichever of his prepared corpus entries is most likely to apply; all prompts appear automatically and are highlighted, so the agent can follow the system's recommendation and act quickly, reducing human reaction delay and thinking time.
Practical use case: after the robot is started at a branch, an agent operates the background human-machine takeover system to connect to the robot and stays in monitoring mode; the agent can observe the robot's front-facing picture and hear the ambient sound, and the AI assistant remains on the whole time. If a customer then talks with the robot, the robot conducts AI question-answer interaction in intelligent mode. When the customer's questions are of the chit-chat type three times in a row, or the confidence of the intelligent answers is low twice in a row, the customer service end system prompts the agent to intervene manually. The agent can immediately take over, with voice conversion applied, and talk to the customer through the robot. Meanwhile the customer service end AI assistant keeps displaying recommended answers and prompts the agent, with a flashing style, about corpus entries related to the customer's question. The agent can directly adopt the AI assistant's suggestions as needed, or adjust them slightly, and reply by voice.
Whatever mode the human-machine cooperation is in, the intelligent capability of the device is always meaningful. Manual supervision and control is used mainly during peak customer-flow hours, while in periods of low customer flow the branch turns on the intelligent AI mode to provide service; the AI capability of the smart device empowers not only the robot but also the human customer service in the background. Manual control is not a single person thinking and talking alone: the agent delivers the service with the help of AI prompts, so customers are served by hybrid intelligence.
The remote human-machine cooperative control system helps remote customer service staff operate intelligent devices (such as robots) more simply; the human agent makes up for the robot's weaknesses while, from the customer's point of view, only the intelligent device in the branch is serving them. The result is a customer service environment that is transparent to the customer, more complete in function, and equipped with hybrid intelligent service capability.
FIG. 7 shows the interface of the mobile-terminal version of the robot terminal monitoring platform, which lets a manager check the state of terminal devices in real time. It can also analyze operating data in real time, for example a trend graph of robot service usage over the last six months, a breakdown of question-answer content by category, and the like.
The corresponding desktop monitoring system provides further services, such as letting a manager control device access and manage the registration, review, and state adjustment of robot devices; robot states can be queried quickly, the data is presented in flexible report forms, and an extended query interface is provided for third parties.
based on the same inventive concept, the embodiment of the present application further provides a human-machine cooperative non-inductive control device for multiple robots, which can be used to implement the methods described in the above embodiments, as described in the following embodiments. Because the principle of solving the problems of the man-machine cooperative non-inductive control device for the plurality of robots is similar to that of the method, the implementation of the man-machine cooperative non-inductive control device for the plurality of robots can be referred to the implementation of the method, and repeated details are omitted. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 8 is a block diagram showing a configuration of a cooperative sensorless human-machine controller for a plurality of robots according to an embodiment of the present invention, and as shown in fig. 8, the cooperative sensorless human-machine controller for a plurality of robots may include: a response data acquisition module 10, a response voice integration module 20 and a response voice playing module 30,
the response data acquisition module 10 acquires response data sent by the customer service personnel terminal;
the response voice integration module 20 performs voice change or voice synthesis on the response data to obtain response voice;
the response voice playing module 30 plays the response voice to the current client.
Specifically, when the robot can answer the customer's question accurately, the robot handles the response itself; when the customer's question exceeds a certain complexity, the customer service agent is actively prompted to intervene manually according to an intervention coefficient, and after the agent intervenes the response is voice-converted or synthesized as described above.
The apparatuses, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is an electronic device, which may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
In a typical example, the electronic device specifically includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor executes the program to implement the steps of the above-mentioned man-machine cooperative non-sensory control method for multiple robots.
Referring now to FIG. 9, shown is a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present application.
As shown in fig. 9, the electronic apparatus 600 includes a central processing unit (CPU) 601 that can perform various appropriate operations and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores the various programs and data needed for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 610 as necessary, so that a computer program read from it can be installed into the storage portion 608 as needed.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present invention includes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described human-machine cooperative non-sensory control method for a plurality of robots.
In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment, and for relevant details reference may be made to the corresponding description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A human-computer cooperation non-inductive control method for a plurality of robots, characterized by comprising the following steps:
acquiring response data sent by a customer service personnel terminal;
performing voice changing or voice synthesis on the response data to obtain a response voice;
and playing the response voice to the current client.
2. The human-computer cooperation non-inductive control method for a plurality of robots according to claim 1, characterized by further comprising:
performing, according to question-answer data of the current client, AI calculation through an AI-assisted question-answering system to obtain a reference response result, and feeding the reference response result back to the customer service personnel terminal.
3. The human-computer cooperation non-inductive control method for a plurality of robots according to claim 1, characterized by further comprising:
acquiring real-time interaction data uploaded by the plurality of robots;
analyzing, according to the real-time interaction data and historical interaction data, the number of robots currently in a service state and the service characteristic parameters of each robot;
determining a corresponding response level according to the service characteristic parameters of each robot;
and sending a man-machine cooperative control request to the customer service personnel terminal according to the number of the robots currently in the service state and the response level of each robot.
4. The human-computer cooperation non-inductive control method for a plurality of robots according to claim 3, characterized by further comprising:
acquiring a manual cooperative control response instruction fed back by the customer service personnel terminal;
and if the customer service personnel participate in the manual cooperative control, sending interaction data with the current client to the customer service personnel terminal.
5. The human-computer cooperation non-inductive control method for a plurality of robots according to claim 3, wherein the service characteristic parameters include: service urgency, service complexity, service priority, question-answer confidence, question-answer interaction frequency, and customer service volume.
6. The human-computer cooperation non-inductive control method for a plurality of robots according to claim 5, wherein the determining a corresponding response level according to the service characteristic parameters of each robot comprises:
performing a weighted calculation on the service urgency, the service complexity, the service priority, the question-answer confidence, the question-answer interaction frequency, and the customer service volume according to preset coefficients to obtain the corresponding response level.
7. A human-machine cooperative non-inductive control device for a plurality of robots, comprising:
a response data acquisition module, used for acquiring response data sent by a customer service personnel terminal;
a response voice integration module, used for performing voice changing or voice synthesis on the response data to obtain a response voice;
and a response voice playing module, used for playing the response voice to a current client.
8. The human-machine cooperative non-inductive control device for a plurality of robots according to claim 7, characterized by further comprising:
an AI-assisted question-answering module, used for performing AI calculation through an AI-assisted question-answering system according to question-answer data of the current client to obtain a reference response result and feeding the reference response result back to the customer service personnel terminal.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the human-computer cooperation non-inductive control method for a plurality of robots of any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the human-computer cooperation non-inductive control method for a plurality of robots of any one of claims 1 to 6.
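
The response-level logic recited in claims 3, 5, and 6 can be pictured with a short sketch. The code below is purely illustrative: the weight values, the 0-to-1 normalisation, the escalation threshold, and every function and field name are assumptions made for demonstration and are not specified by the patent; the claims only require a weighted calculation over the six service characteristic parameters using preset coefficients.

    from dataclasses import dataclass
    from typing import Dict, List

    # Hypothetical preset coefficients for the six service characteristic
    # parameters of claims 5-6; the actual values are not given in the patent.
    WEIGHTS = {
        "service_urgency": 0.30,
        "service_complexity": 0.15,
        "service_priority": 0.20,
        "qa_confidence": 0.15,            # low AI confidence should raise the level
        "qa_interaction_frequency": 0.10,
        "customer_service_volume": 0.10,
    }

    @dataclass
    class RobotStatus:
        robot_id: str
        in_service: bool
        # All parameters are assumed to be normalised to [0, 1].
        service_urgency: float
        service_complexity: float
        service_priority: float
        qa_confidence: float              # 1.0 = the robot's AI answers are fully confident
        qa_interaction_frequency: float
        customer_service_volume: float

    def response_level(s: RobotStatus) -> float:
        """Weighted calculation over the six parameters (claim 6).
        Confidence is inverted so that a struggling robot scores higher."""
        return (
            WEIGHTS["service_urgency"] * s.service_urgency
            + WEIGHTS["service_complexity"] * s.service_complexity
            + WEIGHTS["service_priority"] * s.service_priority
            + WEIGHTS["qa_confidence"] * (1.0 - s.qa_confidence)
            + WEIGHTS["qa_interaction_frequency"] * s.qa_interaction_frequency
            + WEIGHTS["customer_service_volume"] * s.customer_service_volume
        )

    def cooperation_requests(statuses: List[RobotStatus],
                             threshold: float = 0.6) -> List[Dict]:
        """Claim 3 sketch: count the robots currently in a service state, grade
        each of them, and emit a cooperative-control request to the customer
        service personnel terminal for every robot above the threshold."""
        in_service = [s for s in statuses if s.in_service]
        return [
            {
                "robot_id": s.robot_id,
                "response_level": round(response_level(s), 3),
                "robots_in_service": len(in_service),
                "action": "request_human_machine_cooperative_control",
            }
            for s in in_service
            if response_level(s) >= threshold
        ]

    if __name__ == "__main__":
        busy_robot = RobotStatus("robot-07", True, 0.9, 0.7, 0.8, 0.3, 0.6, 0.5)
        idle_robot = RobotStatus("robot-02", True, 0.2, 0.1, 0.3, 0.9, 0.2, 0.1)
        print(cooperation_requests([busy_robot, idle_robot]))

Inverting the question-answer confidence is one reasonable reading of claim 6, since a robot whose AI answers are uncertain is exactly the one that should be handed to a human agent; the patent itself states only that the six parameters are weighted with preset coefficients.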
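
Claims 1, 2, 7, and 8 describe the imperceptible hand-over itself: the human agent's reply is converted into the robot's own voice, while an AI-assisted question-answering system supplies the agent with a reference answer. The fragment below is a minimal sketch of that flow under stated assumptions; synthesize_speech and ai_reference_answer are placeholders standing in for whatever voice-changing / speech-synthesis engine and question-answering model a real deployment would use, and none of these names come from the patent.

    from dataclasses import dataclass

    @dataclass
    class AgentReply:
        robot_id: str
        text: str                 # response data typed by the customer service agent

    def synthesize_speech(text: str, voice_profile: str) -> bytes:
        # Placeholder for a real voice-changing / speech-synthesis engine
        # (claim 1's "voice changing or voice synthesis"); returns dummy audio bytes.
        return f"[{voice_profile}] {text}".encode("utf-8")

    def ai_reference_answer(question: str) -> str:
        # Placeholder for the AI-assisted question-answering system of claim 2;
        # a real system would run retrieval or a dialogue model here.
        return f"(suggested answer for: {question})"

    def handle_customer_question(question: str) -> str:
        """Claim 2 sketch: compute a reference response and feed it back to the
        customer service personnel terminal so the agent can edit or confirm it."""
        return ai_reference_answer(question)

    def play_agent_reply(reply: AgentReply) -> bytes:
        """Claim 1 sketch: the agent's text is rendered in the robot's own voice
        profile, so the switch from robot to human stays imperceptible."""
        return synthesize_speech(reply.text, voice_profile=reply.robot_id)

    if __name__ == "__main__":
        suggestion = handle_customer_question("When will my new card arrive?")
        # The agent edits or accepts the suggestion, then it is played to the client.
        print(play_agent_reply(AgentReply("robot-07", suggestion)))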
CN202110345606.9A 2021-03-31 2021-03-31 Man-machine cooperation non-inductive control method and device for multiple robots Active CN113067952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345606.9A CN113067952B (en) 2021-03-31 2021-03-31 Man-machine cooperation non-inductive control method and device for multiple robots

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110345606.9A CN113067952B (en) 2021-03-31 2021-03-31 Man-machine cooperation non-inductive control method and device for multiple robots

Publications (2)

Publication Number Publication Date
CN113067952A true CN113067952A (en) 2021-07-02
CN113067952B CN113067952B (en) 2023-04-14

Family

ID=76564746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345606.9A Active CN113067952B (en) 2021-03-31 2021-03-31 Man-machine cooperation non-inductive control method and device for multiple robots

Country Status (1)

Country Link
CN (1) CN113067952B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113878595A (en) * 2021-10-27 2022-01-04 上海清芸机器人有限公司 Humanoid entity robot system based on Raspberry Pi

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135247A (en) * 2017-02-16 2017-09-05 江苏南大电子信息技术股份有限公司 A service system and method for intelligent collaborative work between humans and artificial intelligence
CN108073976A (en) * 2016-11-18 2018-05-25 科沃斯商用机器人有限公司 Man-machine interactive system and its man-machine interaction method
CN111246027A (en) * 2020-04-28 2020-06-05 南京硅基智能科技有限公司 Voice communication system and method for realizing man-machine cooperation
CN111409081A (en) * 2019-08-16 2020-07-14 江苏遨信科技有限公司 Method and system for simulating and learning speech by robot
CN112365894A (en) * 2020-11-09 2021-02-12 平安普惠企业管理有限公司 AI-based composite voice interaction method and device and computer equipment
WO2021027198A1 (en) * 2019-08-15 2021-02-18 苏州思必驰信息科技有限公司 Speech dialog processing method and apparatus
CN112543185A (en) * 2020-11-23 2021-03-23 建信金融科技有限责任公司 Customer service method, device and system

Also Published As

Publication number Publication date
CN113067952B (en) 2023-04-14

Similar Documents

Publication Publication Date Title
US10708423B2 (en) Method and apparatus for processing voice information to determine emotion based on volume and pacing of the voice
KR102348904B1 (en) Method for providing chatting service with chatbot assisted by human counselor
CN111095892B (en) Electronic device and control method thereof
CN107704169B (en) Virtual human state management method and system
KR102136706B1 (en) Information processing system, reception server, information processing method and program
JP7297797B2 (en) Method and apparatus for managing holds
CN111131005A (en) Dialogue method, device, equipment and storage medium of customer service system
CN111405129A (en) Intelligent outbound risk monitoring method and device
CN113067952B (en) Man-machine cooperation non-inductive control method and device for multiple robots
CN112383667A (en) Call data processing method, device, equipment and storage medium
CN117151662A (en) Position information processing method, device, equipment and storage medium based on AI
CN113724036A (en) Method and electronic equipment for providing question consultation service
US10972606B1 (en) Testing configuration for assessing user-agent communication
CN110196900A (en) Exchange method and device for terminal
WO2021109741A1 (en) Serving method, apparatus, system and device, and storage medium
CN115118820A (en) Call processing method and device, computer equipment and storage medium
CN114179083B (en) Leading robot voice information generation method and device and leading robot
JP2024092451A (en) Dialogue support system, dialogue support method, and computer program
JP7205962B1 (en) System for automatic dialogue
US11082560B2 (en) Configuration for transitioning a communication from an automated system to a simulated live customer agent
JP2018190070A (en) Interaction support method, device and program
CN117808443A (en) Recruitment service system, method, equipment and storage medium based on AI
CN114283853A (en) Method and device for determining voice robot broadcasting strategy
CN115841404A (en) Online job processing method and online job processing, answering and correcting device
JP2020115244A (en) Operator response support system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant