CN108698231A - Posture control device, robot and posture control method - Google Patents

Posture control device, robot and posture control method

Info

Publication number
CN108698231A
Authority
CN
China
Prior art keywords
robot
posture
user
speak
utterance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780007508.6A
Other languages
Chinese (zh)
Inventor
伊藤诚悟
筱原秀俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Publication of CN108698231A


Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 — Controls for manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

At the start of a conversation with a user, a robot can indicate to the user whether the robot itself intends to speak. A posture control device (3) controls the posture of a robot (101) capable of conversing with the user. When the posture of the robot (101) determined at the start of the conversation with the user is not a speaking-intention posture, the posture control device (3) drives a driving unit (1) so that the robot (101) assumes the speaking-intention posture.

Description

Posture control device, robot and posture control method
Technical field
The present invention relates to a posture control device that controls the posture of a robot capable of conversing with a user, and to a robot including the posture control device and a posture control method.
Background Art
In recent years, robots that move in accordance with their own speech have been developed, and these robots are expected to move more naturally in response to their own speech. For example, Patent Document 1 discloses a robot device that speaks naturally by synthesizing voice synchronized with its actual movement. Patent Document 2 discloses a humanoid robot that generates robot movement while outputting voice, and thereby moves naturally in response to its speech.
Existing technical literature
Patent document
Patent Document 1: Japanese Patent No. 5402648 (registered November 8, 2013)
Patent Document 2: Japanese Translation of PCT Application No. 2014-504959 (published February 27, 2014)
Summary of the Invention
Technical Problem to Be Solved by the Invention
However, for a conversation between a user and a robot to proceed smoothly, the robot needs to clearly convey to the user, when the conversation starts, whether the robot itself intends to speak. Although the robots disclosed in the above patent documents are designed to move naturally when they speak, no particular consideration is given to clearly indicating to the user, at the start of a conversation, whether the robot itself intends to speak.
The present invention has been made in view of this problem, and an object thereof is to realize a posture control device and a posture control method capable of clearly indicating to the user, at the start of a conversation with the user, whether the robot itself intends to speak.
Solution to the problem
To solve the above problem, a posture control device according to one aspect of the present invention is installed in a robot and controls the posture of the robot, the robot being capable of conversing with a user and of driving multiple driving portions to assume various postures. The posture control device includes: a posture determining section that determines the posture of the robot from the driving state of each driving portion; and a drive control section that controls the driving of each driving portion. When the posture of the robot determined by the posture determining section at the start of a conversation with the user is not a speaking-intention posture indicating that the robot intends to speak, the drive control section drives each driving portion so that the robot assumes the speaking-intention posture.
A posture control method according to one aspect of the present invention is a method for controlling the posture of a robot capable of conversing with a user and of driving multiple driving portions to assume various postures. The posture control method includes: a posture determination step of determining the posture of the robot at the start of a conversation with the user; and a drive control step of driving each driving portion so that the robot assumes a speaking-intention posture when the posture of the robot determined in the posture determination step is not the speaking-intention posture indicating that the robot intends to speak.
Effects of the Invention
According to one aspect of the present invention, the following effect is obtained: at the start of a conversation with the user, the robot can clearly indicate to the user whether the robot itself intends to speak.
Brief Description of the Drawings
Fig. 1 is a block diagram showing a schematic configuration of a robot according to a first embodiment of the present invention.
Fig. 2 is a sequence chart showing the posture control process flow of the posture control device included in the robot shown in Fig. 1.
Fig. 3 is a block diagram showing a schematic configuration of a robot according to a second embodiment of the present invention.
Fig. 4 is a sequence chart showing the posture control process flow of the posture control device included in the robot shown in Fig. 3.
Fig. 5 is a block diagram showing a schematic configuration of a robot according to a third embodiment of the present invention.
Fig. 6 is a block diagram showing a schematic configuration of a robot according to a modification of the third embodiment of the present invention.
Description of Embodiments
<First Embodiment>
Embodiments of the present invention are described in detail below. This embodiment describes a robot that has a shell resembling at least a human or an animal, that has a driving unit composed of multiple driving portions for making the shell move, and that can converse with a user.
(Overview of the Robot)
Fig. 1 is a schematic configuration diagram of the robot 101 of this embodiment. The robot 101 has a shell (not shown) resembling at least a human or an animal. The robot 101 further includes: a driving unit 1 composed of multiple driving portions (actuators) that make the shell move; a voice unit 2 for realizing conversation with the user; and a posture control device 3 that drives the driving unit 1 to make the robot assume various postures.
The voice unit 2 includes: a microphone 21, an input device 22, a speech recognition device 23, a dialogue device 24, a speech synthesis device 25, a reproduction device 26, a speaker 27, and a reproduction status acquisition device 28. The microphone 21 collects the voice uttered by the user and converts the collected voice into electronic waveform data (wave data). The microphone 21 sends the converted wave data to the input device 22 at the next stage.
The input device 22 stores the wave data. While storing the wave data, when wave data representing silence has continued for a prescribed time or longer, the input device 22 ends the storage and sends a signal indicating that the input has ended to the posture control device 3. At the time it sends the end-of-input signal to the posture control device 3, the input device 22 sends the stored wave data to the speech recognition device 23 at the next stage. The speech recognition device 23 converts the wave data sent from the input device 22 into text data (ASR: Automatic Speech Recognition) and sends the converted text data to the dialogue device 24 at the next stage.
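The end-of-input detection described above — storage ends once silence persists for a prescribed time — can be sketched as follows. The patent does not give concrete values, so the frame length, silence threshold, and prescribed time here are all assumptions for illustration:

```python
# Sketch of the input device's end-of-input detection (hypothetical frame-based
# design; the text only states that storage ends once silence has continued
# for a prescribed time or longer). All constants are assumed values.

SILENCE_THRESHOLD = 500     # max abs amplitude still treated as silence (assumed)
FRAME_MS = 20               # duration represented by one frame (assumed)
END_SILENCE_MS = 600        # prescribed silent period that ends input (assumed)

def detect_end_of_input(frames):
    """Return the index of the frame at which input is considered finished,
    or None if the silence criterion is never met.

    `frames` is a list of frames, each a list of signed amplitude samples."""
    silent_ms = 0
    for i, frame in enumerate(frames):
        if max((abs(s) for s in frame), default=0) <= SILENCE_THRESHOLD:
            silent_ms += FRAME_MS
            if silent_ms >= END_SILENCE_MS:
                return i  # the input device would stop storing here and notify
        else:
            silent_ms = 0  # speech resumed; reset the silence timer
    return None
```

At the returned frame, the real device would send the end-of-input signal to the posture control device 3 and forward the stored wave data to the speech recognition device 23.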
The dialogue device 24 analyzes the text data sent from the speech recognition device 23 to determine the content of the user's utterance (analysis result), and thereby obtains dialogue data representing the identified utterance content and the response content for the established conversation. The dialogue device 24 also extracts, from the obtained dialogue data, the text data corresponding to the response content, and sends the extracted text data to the speech synthesis device 25 at the next stage.
The speech synthesis device 25 is a TTS (Text to Speech) device that converts the text data sent from the dialogue device 24 into PCM data, and sends the converted PCM data to the reproduction device 26 at the next stage. The reproduction device 26 outputs the PCM data sent from the speech synthesis device 25 to the speaker 27 as sound waves. The sound waves output here are sounds that a person can recognize; the sound waves output from the reproduction device 26 form the response to the content of the user's utterance. A conversation is thereby established between the user and the robot 101. The reproduction device 26 outputs the PCM data to the reproduction status acquisition device 28 at the same time as it outputs the PCM data to the speaker 27.
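The voice unit's stages described above form one pipeline: microphone input, recognition, dialogue response, synthesis, and reproduction (with a start notification to the posture controller). A minimal sketch, with every stage as an injected stand-in function rather than the real ASR/TTS hardware:

```python
# Minimal sketch of the voice unit pipeline. The stage functions here are
# placeholders; the real devices (21-28 in the text) are hardware and
# external ASR/TTS services.

def run_voice_unit(wave_data, recognize, get_response, synthesize, on_playback_start):
    text = recognize(wave_data)       # speech recognition device 23 (ASR)
    reply = get_response(text)        # dialogue device 24 picks the response text
    pcm = synthesize(reply)           # speech synthesis device 25 (TTS)
    on_playback_start()               # reproduction status device 28 notifies
                                      # the posture control device 3
    return pcm                        # reproduction device 26 would output this
                                      # to the speaker 27
```

The `on_playback_start` callback models the signal that marks the conversation start time in this embodiment.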
When the reproduction device 26 has sent PCM data, the reproduction status acquisition device 28 sends to the posture control device 3 a signal indicating that voice output from the speaker 27 has started, i.e., a signal indicating the reproduction start time at which the robot 101 starts reproducing voice for the user (the speaking start time).
The posture control device 3 controls the posture of the robot 101 and includes a drive control device 31, a shell state acquisition device 32, a posture storage device 33, and a behavior pattern storage device 34. The drive control device 31 includes a posture determining section 31a that determines the posture of the robot 101 from the driving state of the driving unit (driving portions) 1, and a drive control section 31b that controls the driving of the driving unit 1.
The shell state acquisition device 32 obtains information indicating the driving state of the driving unit 1. Here, the information indicating the driving state of the driving unit 1 is information indicating what driving state the driving unit 1 is in, used to determine the posture of the robot 101. For example, it corresponds to joint angle information obtained from rotary encoders mounted on the robot's joints, torque on/off states, and the like. This information is sent from the shell state acquisition device 32 to the posture determining section 31a of the drive control device 31.
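Determining the current posture from joint angles can be sketched as a lookup against stored postures. The text says only that encoder angles and torque states are supplied; matching them within a tolerance, and the posture names and joint names below, are our assumptions:

```python
# Sketch of posture determination from driving-unit state (posture determining
# section 31a). Matching measured joint angles against stored postures within
# a tolerance is an assumed design; the patent does not specify the matching rule.

def determine_posture(joint_angles, stored_postures, tol_deg=5.0):
    """Return the name of the first stored posture whose target joint angles
    all lie within `tol_deg` degrees of the measured ones, or None if no
    stored posture matches.

    `joint_angles`   : dict of joint name -> measured angle in degrees
    `stored_postures`: dict of posture name -> dict of joint name -> target angle
    """
    for name, target in stored_postures.items():
        if all(abs(joint_angles[j] - a) <= tol_deg for j, a in target.items()):
            return name
    return None
```

A `None` result corresponds to the robot being in some posture other than any stored one, e.g. not in the speaking-intention posture.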
The posture storage device 33 stores the speaking-intention posture to be assumed by the robot 101. Specifically, information indicating the driving state of the driving unit 1 is stored in the posture storage device 33 so that the robot 101 can assume the speaking-intention posture. The speaking-intention posture is, for example, a posture of bringing a finger to the mouth, an attentive posture, a posture of facing the user, or the like — a posture by which the robot indicates to the user its intention to speak.
The behavior pattern storage device 34 stores behavior patterns associated with the utterance content of the robot 101. Specifically, information indicating the driving state of the driving unit 1 associated with each utterance content is stored in the behavior pattern storage device 34 as a behavior pattern. A behavior pattern may draw not only on the information in the posture storage device 33 but also on the shell state acquisition device 32, on various sensors such as fall detection and gravitational-acceleration sensors, or on the internal state of the robot 101, for example past behavior patterns based on speech recognition results. Behavior patterns may also be classified by the content of the user's utterance, or selected according to the pitch of the speech. Furthermore, the speaking-intention posture need not be of a single type; multiple types may be provided according to the situation of the robot 101, for example when the robot 101 is holding an object.
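A behavior-pattern store keyed by utterance content can be sketched as below. The keying scheme and the fallback to a default pattern are assumptions; the patent only states that driving-state information is stored per utterance content:

```python
# Sketch of the behavior pattern storage device 34. Keying patterns by an
# utterance-content string and falling back to a default are assumed details.

class BehaviorPatternStore:
    def __init__(self):
        self._patterns = {}

    def register(self, content_key, pattern):
        # Several patterns may be associated with the same utterance content.
        self._patterns.setdefault(content_key, []).append(pattern)

    def select(self, content_key, default=None):
        # Return the first registered pattern for this utterance content,
        # falling back to a default (e.g. a neutral gesture) if none exists.
        patterns = self._patterns.get(content_key)
        return patterns[0] if patterns else default
```

Selection could equally take sensor values or past recognition results into account, as the text allows; this sketch shows only the content-keyed lookup.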
The posture determining section 31a determines what posture the robot 101 is currently assuming by obtaining the information indicating the driving state of the driving unit 1 of the robot 101. The information indicating the determined posture is sent from the posture determining section 31a to the drive control section 31b.
The drive control section 31b judges whether the posture of the robot 101 determined by the posture determining section 31a at the start of a conversation with the user is the speaking-intention posture. Here, the start of the conversation with the user is the reproduction start time at which the robot 101 starts reproducing voice for the user. That is, on receiving the signal from the reproduction status acquisition device 28 indicating that the robot 101 has started reproducing voice for the user, the drive control section 31b judges the posture of the robot 101 determined by the posture determining section 31a.
When the judgment result is that the posture of the robot 101 is not the speaking-intention posture, the drive control section 31b drives the driving unit 1 so that the robot 101 assumes the speaking-intention posture. That is, the posture of the robot 101 is determined at the start of the conversation with the user (posture determination step), and whether the determined posture is the speaking-intention posture indicating that the robot 101 intends to speak is judged. When it is not the speaking-intention posture, the driving unit 1 is driven so that the robot 101 assumes the speaking-intention posture (drive control step). In this way, when the robot 101 is not in the speaking-intention posture at the start of a conversation with the user, it immediately restores the speaking-intention posture, so the user can easily understand that the robot 101 intends to speak.
On the other hand, when the judgment result is that the posture of the robot 101 is already the speaking-intention posture, the drive control section 31b performs a motion before the robot 101 starts speaking, to inform the user that the robot is about to speak. For example, if the robot 101 is in a speaking-intention posture with its head facing forward, it performs the following motion before starting to speak: lowering its head once and then returning it to the front. After this motion, the robot 101 speaks. The user can thereby easily understand that the robot 101 intends to speak.
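The two branches above can be summarized in a short sketch: restore the speaking-intention posture if absent, otherwise nod before speaking. The returned action names are placeholders for real driving-unit commands, not anything specified in the text:

```python
# Sketch of the drive control section 31b's branch at conversation start.
# The action strings are hypothetical stand-ins for driving-unit commands.

def on_conversation_start(current_posture, intention_posture="speaking_intention"):
    if current_posture != intention_posture:
        # Not in the speaking-intention posture: drive back to it first.
        return ["drive_to:" + intention_posture]
    # Already in it: nod once (head down, then back to front) so the user
    # still notices the intention to speak before the voice starts.
    return ["head_down", "head_front"]
```

Either branch ends with the robot visibly signalling its intention to speak before reproduction begins.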
(Posture Control Processing)
Fig. 2 is a sequence chart showing the posture control process flow of the robot 101 shown in Fig. 1. The sequence chart below includes: the processing up to the point where the robot 101 reproduces voice (1); the processing when a behavior of the robot 101 ends during voice reproduction (2); and the processing when voice reproduction ends during a behavior of the robot 101 (3).
Outline of processing (1): in the voice unit 2 of the robot 101, the user's utterance is captured by the microphone 21 and recorded by the input device 22. The recorded utterance undergoes speech recognition by the speech recognition device 23; the dialogue device 24 obtains a dialogue character string from the recognition result; the dialogue character string is synthesized into speech by the speech synthesis device 25; and the synthesized speech is emitted from the speaker 27 by the reproduction device 26. The sequence from capturing the user's utterance to emitting the synthesized speech forms one series of actions.
In this embodiment, in the voice unit 2, the start of a conversation with the user is the moment at which the user's utterance has been captured by the microphone 21 and the reproduction device 26 starts reproducing the response corresponding to the captured utterance.
In the posture control device 3, at the conversation start time, information on the shell (the driving unit 1 of the robot 101) is obtained by the shell state acquisition device 32 (activation information). After activating the driving unit 1 as needed, the drive control device 31 changes the posture to the speaking-intention posture according to the information in the posture storage device 33, and selects one behavior pattern from the behavior pattern storage device 34 according to the utterance content.
When the driving unit 1 starts driving according to the behavior pattern, the robot 101 starts speaking. Specifically, processing (1) corresponds to the steps from (1. voice data input) to (13. voice data emission) in the sequence shown in Fig. 2. That is, the microphone 21 converts the voice uttered and input by the user into wave data and outputs it to the input device 22 as voice data (1. voice data input). The input device 22 takes in the input voice data and outputs it to the speech recognition device 23 (2. speech recognition start command).
On receiving the speech recognition start command from a control unit (not shown), the speech recognition device 23 converts the input voice data into text data and outputs it to the dialogue device 24 (3. dialogue start command). On receiving the dialogue start command from the control unit (not shown), the dialogue device 24 analyzes the user's utterance content from the input text data and obtains, from a database (not shown), the text data of the dialogue sentence corresponding to the utterance content. The obtained text data is then input to the speech synthesis device 25 (4. dialogue sentence synthesis command).
On receiving the dialogue sentence synthesis command from the control unit (not shown), the speech synthesis device 25 converts the input text data into output sound data (PCM data) and outputs it to the reproduction device 26 (5. voice data reproduction command). On receiving the voice data reproduction command from the control unit (not shown), the reproduction device 26, when reproducing the output sound data, outputs speaking-start status change information to the reproduction status acquisition device 28 (6. speaking-start status change). The speaking-start status change information indicates whether the robot 101 has started speaking; in this case it indicates that the robot 101 has started speaking.
Based on the input speaking-start status change information, the reproduction status acquisition device 28 notifies the drive control device 31 that the robot 101 has started speaking (7. speaking-start status notification). What is notified here is a signal indicating that the robot 101 has started reproducing voice for the user.
On receiving the signal from the reproduction status acquisition device 28 indicating that the robot 101 has started reproducing voice for the user, the drive control device 31 obtains the state of the robot 101 (shell state) from the shell state acquisition device 32 (8. shell information acquisition). The drive control device 31 further obtains the speaking-intention posture stored in the posture storage device 33 (9. speaking-intention posture acquisition). The drive control device 31 can thereby determine the posture of the robot 101 from the obtained shell state via the posture determining section 31a, and judge whether the determined posture of the robot 101 is the obtained speaking-intention posture. The drive control device 31 then drives the driving unit 1 according to the judgment result (10. speaking-intention posture conversion).
Here, when the determined posture of the robot 101 is not the speaking-intention posture, the drive control device 31 drives the driving unit 1 so as to assume the speaking-intention posture. On the other hand, when the determined posture of the robot 101 is the speaking-intention posture, the drive control device 31 operates as follows: if the speaking-intention posture has the head facing forward, the head is first lowered and then returned to the front.
When the robot 101 starts speaking, the drive control device 31 obtains the behavior pattern corresponding to the utterance content from the behavior pattern storage device 34 (11. behavior pattern acquisition) and starts driving the driving unit 1 to realize the obtained behavior pattern (12. behavior start command). When the driving of the driving unit 1 starts, the reproduction device 26, having received the voice data reproduction command from the control unit (not shown), outputs the input output sound data as sound waves and emits them through the speaker 27 (13. voice data emission).
Outline of processing (2): when the information obtained from the reproduction status acquisition device 28 indicates that the robot is speaking, i.e., the speaking (reproduction) has not finished, the drive control device 31 executes another behavior pattern from the behavior pattern storage device 34. The behavior pattern may be selected at the moment the previous behavior ends, or may be selected in advance.
Specifically, processing (2) corresponds to the steps from (14. behavior end) to (18. behavior start command) in the sequence shown in Fig. 2. That is, the end of the behavior pattern of the driving unit 1 during the robot 101's speech (during voice reproduction) is judged from the driving state (shell state) of the driving unit 1 obtained by the shell state acquisition device 32 (14. behavior end). The shell state acquisition device 32 outputs information indicating that the behavior has ended to the drive control device 31 as a behavior notification (15. behavior notification command).
When the drive control device 31 is notified, based on the shell state obtained by the shell state acquisition device 32, that the behavior has ended, it obtains the reproduction status from the reproduction status acquisition device 28 (16. reproduction status acquisition). When it judges that the obtained reproduction status indicates that reproduction is in progress, the drive control device 31 again obtains a behavior pattern corresponding to the utterance content from the behavior pattern storage device 34 (17. behavior pattern acquisition) and starts driving the driving unit 1 to realize the obtained behavior pattern (18. behavior start command).
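The loop in processing (2) — fetch a new behavior pattern each time the previous one ends, as long as reproduction continues — can be sketched abstractly. Durations here are arbitrary "ticks", an assumption for illustration:

```python
# Sketch of processing (2): gestures continue for the whole utterance by
# chaining behavior patterns. Time is modeled as abstract ticks (assumed).

def run_gestures(utterance_ticks, next_pattern):
    """Keep consuming behavior patterns until the utterance has finished.
    `next_pattern()` returns the duration (in ticks) of the next pattern.
    Returns the list of pattern durations that were started."""
    elapsed, performed = 0, []
    while elapsed < utterance_ticks:      # reproduction status: in progress
        duration = next_pattern()         # 17. behavior pattern acquisition
        performed.append(duration)        # 18. behavior start command
        elapsed += duration               # 14. behavior ends; check status again
    return performed
```

The final pattern may outlast the utterance; that overrun is exactly what processing (3) handles with its end command and allowance.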
Outline of processing (3): when the information obtained from the reproduction status acquisition device 28 indicates that the speaking has ended, the drive control device 31 judges that the speaking has ended and sets the driving unit 1 to an idle state or deactivates it. On the other hand, at the moment the reproduction status acquisition device 28 detects the end of speaking, the drive control device 31 checks the shell state acquisition device 32, and if a motion is in progress, issues an end command to the driving unit 1 and drives the driving unit 1 to restore the initial speaking-intention posture. However, if the motion will finish within a prescribed time (e.g., 400 ms), it is treated as within the allowed range and no end command is issued.
Specifically, processing (3) corresponds to the steps from (19. reproduction end) to (22. reproduction end command) in the sequence shown in Fig. 2. That is, after reproduction ends (19. reproduction end), the reproduction device 26 outputs reproduction-end status change information to the reproduction status acquisition device 28 (20. reproduction-end status change).
Based on the input reproduction-end status change information, the reproduction status acquisition device 28 notifies the drive control device 31 that the robot 101 has finished speaking (21. reproduction end notification). What is notified here is a signal indicating that the robot 101 has finished reproducing voice for the user.
Based on the reproduction end notification obtained from the reproduction status acquisition device 28, the drive control device 31 issues a reproduction end command (end command) to the driving unit 1 (22. reproduction end command). The motion of the driving unit 1 is thereby stopped.
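The end-of-speaking decision in processing (3), including the 400 ms allowance mentioned in the text, can be sketched as a small decision function. The returned state names are placeholders of our own:

```python
# Sketch of processing (3): when reproduction ends, a still-running behavior is
# ordered to stop and the posture restored, except that a motion expected to
# finish within the prescribed allowance (400 ms in the text) is left to finish.
# The return values are hypothetical state labels, not names from the patent.

ALLOWANCE_MS = 400  # prescribed time given in the description

def on_reproduction_end(moving, remaining_ms):
    if not moving:
        return "idle"                  # nothing running: go idle / deactivate
    if remaining_ms <= ALLOWANCE_MS:
        return "let_finish"            # within the allowed range: no end command
    return "end_order_and_restore"     # issue the end command and restore the
                                       # speaking-intention posture
```

The `remaining_ms` estimate would in practice come from the shell state acquisition device's knowledge of the running behavior pattern.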
(Effects)
As described above, at the start of a conversation with the user, the posture of the robot 101 becomes the speaking-intention posture that informs the user of the intention to speak. That is, at the start of a conversation with the user, the robot 101 can indicate to the user whether the robot itself intends to speak. The conversation between the user and the robot 101 can thereby proceed smoothly, so natural nonverbal communication can be realized between the user and the robot.
In this embodiment, the start of a conversation with the user is set to the time at which voice reproduction from the robot to the user starts, but this is not limiting; it may also be the time at which the robot finishes receiving the user's voice input. In that case, in the sequence shown in Fig. 2, the input device 22 sends a conversation start status notification to the drive control device 31 at the end-of-input time, and at that moment the drive control device 31 obtains the shell state from the shell state acquisition device 32. The subsequent processing is the same as described above.
Furthermore, since the end-of-input time of the input device 22 is also the speech recognition start time, the start of a conversation with the user may instead be the robot's speech recognition start time. In addition, if a switch is provided on the shell of the robot 101 such that the microphone 21 is muted while the switch is pressed and unmuted when it is released, the conversation start time may be set to the moment the switch is released. The following moment may also be used as the conversation start time: using a camera in the shell of the robot 101, a person is detected by the camera, and the moment the person's lip movement is detected to have finished is taken as the start of the conversation.
The robot 101 starts speaking when a response to the utterance has been formed in the dialogue device 24. When the user's utterance content is insufficient to determine an intention, or when the user's utterance is meaningless, such as a sneeze, no utterance content can be formed. In such a case, instead of speaking, the robot 101 may perform, from the speaking-intention posture, a speaking-intention cancellation behavior expressing that it has no intention to speak. The speaking-intention cancellation behavior may be any behavior as long as it differs from the speaking-intention posture; it is preferably a behavior from which the user can easily recognize that the robot 101 has no intention to speak.
In the posture control device 3 configured as described above, the posture storage device 33 and the behavior pattern storage device 34 are shown as separate devices, but these two devices may be a single storage device.
<Second Embodiment>
Another embodiment of the present invention is described below. For convenience of description, members having the same functions as those described in the first embodiment are given the same reference numerals, and their description is omitted.
(Overview of the Robot)
Fig. 3 is a schematic configuration diagram of the robot 201 of this embodiment. The robot 201 differs from the robot 101 of the first embodiment in that the speech recognition device 23 is placed on a server (not shown) on a network, and a communication device 29 for communicating with the speech recognition device 23 is added. That is, in the robot 201, after the voice input through the microphone 21 is taken in by the input device 22, the input voice data is sent by the communication device 29 to the server on the network, where speech recognition can be performed by the speech recognition device 23 in the server. The recognition result of the speech recognition device 23 in the server is sent to the dialogue device 24 via the communication device 29. In these respects the robot 201 differs from the robot 101 of the first embodiment. The communication device 29 may be a communication device of any type as long as it can communicate with the speech recognition device 23 provided on an external network such as the Internet.
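The split in this embodiment — recognition on a server, reached through a communication device of any type — can be sketched with an injected transport, so the network protocol stays abstract. The callable signature is our assumption; the patent specifies no protocol:

```python
# Sketch of the second embodiment's remote recognition path. The communication
# device 29 is modeled as an injected `transport(wave_data) -> text` callable,
# hiding whatever medium (HTTP, sockets, ...) carries the data; this interface
# is an assumption, not something specified in the text.

class RemoteSpeechRecognizer:
    def __init__(self, transport):
        self._transport = transport

    def recognize(self, wave_data):
        # The communication device sends the recorded wave data to the
        # server-side speech recognition device and returns the recognized
        # text, which would then be passed to the dialogue device 24.
        return self._transport(wave_data)
```

Because the transport is injected, the same robot-side code works with any communication device, matching the text's statement that the device may be of any type.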
(Posture control processing)
Fig. 4 is a sequence chart showing the flow of the posture control processing of the robot 201 shown in Fig. 3. The sequence chart below includes: processing (11), up to the point where robot 201 reproduces voice; processing (12), for the case where the behavior of robot 201 ends during voice reproduction; and processing (13), for the case where voice reproduction ends during the behavior of robot 201.
Overview of processing (11): Processing (11) is substantially the same as processing (1) described in the first embodiment; the difference lies in the dialogue start time with the user. That is, the difference from processing (1) of the first embodiment is that the user's utterance is acquired by the microphone 21 as voice data, the acquired voice data is input through the input device 22, and the point at which this input ends is taken as the dialogue start time with the user.
Specifically, processing (11) corresponds to the steps from (1. voice data input) to (15. sound data emission) in the sequence shown in Fig. 4. That is, the voice uttered by the user into the microphone 21 is converted into waveform data and output to the input device 22 as voice data (1. voice data input). The input device 22 receives the voice data and, when the input has ended, notifies the drive control device 31 of the dialogue start status (2. dialogue start status notification). By this dialogue start status notification, the drive control device 31 is informed that the input of the user's voice has ended.
Upon receiving the signal from the reproduction status acquisition device 28 indicating that the robot has started reproducing voice to the user, the drive control device 31 obtains the state (housing state) of robot 201 from the housing state acquisition device 32 (3. housing information acquisition). The drive control device 31 further obtains the utterance-intention presentation posture stored in the posture storage device 33 (4. utterance-intention presentation posture acquisition). The drive control device 31 can thereby determine the posture of robot 201 from the acquired housing state by means of the posture determination unit 31a, and judge whether the determined posture of robot 201 is the acquired utterance-intention presentation posture. The drive control device 31 then drives the driving unit 1 according to the judgment result (5. conversion to utterance-intention presentation posture).
When the determined posture of robot 201 is not the utterance-intention presentation posture, the drive control device 31 drives the driving unit 1 so that the posture becomes the utterance-intention presentation posture. On the other hand, when the determined posture of robot 201 is already the utterance-intention presentation posture, the drive control device 31 causes it to act as follows: if the utterance-intention presentation posture is with the head facing forward, the head is first lowered and then returned to the front.
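The decision the drive control device makes in step 5 can be sketched as follows. This is a minimal sketch, not the patented implementation: the joint name, the angle values, and the nod amplitude are illustrative assumptions.

```python
# Target posture assumed for illustration: head facing forward.
INTENT_POSTURE = {"head_pitch": 0.0}

def posture_from_state(housing_state: dict) -> dict:
    """Role of posture determination unit 31a: derive posture from drive states."""
    return {"head_pitch": housing_state.get("head_pitch", 0.0)}

def plan_intent_presentation(housing_state: dict) -> list:
    """Return the motion commands the drive control device would issue."""
    current = posture_from_state(housing_state)
    if current != INTENT_POSTURE:
        # Not yet in the presentation posture: move into it.
        return [("move", INTENT_POSTURE)]
    # Already presenting: lower the head and return it to the front (a nod),
    # so the user still notices the utterance intention.
    return [("move", {"head_pitch": -0.3}), ("move", INTENT_POSTURE)]
```

The nod branch matters because a robot that is already in the presentation posture would otherwise give the user no visible cue at all.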
Next, the input device 22 receives a speech recognition start command (1) from a control unit (not shown), and transmits the input voice data via the communication device 29 to the speech recognition device 23 of the server provided on the network (6. speech recognition start command (1)). After receiving the speech recognition start command (2) from the control unit, the speech recognition device 23 in the server converts the input voice data into text data (7. speech recognition start command (2)) and outputs it to the dialogue device 24 (8. dialogue start command).
After the dialogue device 24 receives the dialogue start command from the control unit (not shown), it analyzes the user's utterance content from the input text data, and obtains from a database (not shown) the text data of a dialogue sentence corresponding to the utterance content. The obtained text data is then input to the speech synthesis device 25 (9. dialogue sentence synthesis command).
After the speech synthesis device 25 receives the dialogue sentence synthesis command from the control unit (not shown), it converts the input text data into output sound data (PCM data) and outputs it to the reproduction device 26 (10. voice data reproduction command). After the reproduction device 26 receives the voice data reproduction command from the control unit (not shown), upon starting to reproduce the output sound data, it outputs utterance start status change information to the reproduction status acquisition device 28 (11. utterance start status change). This utterance start status change information indicates whether robot 201 has started speaking; in this case, it indicates that robot 201 has started speaking.
According to the input utterance start status change information, the reproduction status acquisition device 28 notifies the drive control device 31 that robot 201 has started speaking (12. utterance start status notification). What is notified here is a signal indicating that robot 201 has started reproducing voice to the user. When robot 201 starts speaking, the drive control device 31 obtains a behavior pattern corresponding to the utterance content from the behavior pattern storage device 34 (13. behavior pattern acquisition), and starts driving the driving unit so as to assume the acquired behavior pattern (14. behavior start command). When the driving of the driving unit 1 starts, the reproduction device 26, having received the voice data reproduction command from the control unit (not shown), outputs the input output sound data through the loudspeaker 27 as sound waves (15. sound data emission).
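The ordering of steps 6 to 15 can be summarized in a minimal sketch, with each device reduced to a placeholder function. Only the sequencing follows the chart; the stage outputs and event names are illustrative assumptions.

```python
def recognize(voice_data: str) -> str:          # speech recognition device 23
    return f"text({voice_data})"

def compose_reply(user_text: str) -> str:       # dialogue device 24
    return f"reply({user_text})"

def synthesize(reply_text: str) -> str:         # speech synthesis device 25
    return f"pcm({reply_text})"

def run_pipeline(voice_data: str, log: list) -> str:
    """Run steps 6-15 in order, recording side events in `log`."""
    text = recognize(voice_data)                # 6-7. recognition on the server
    reply = compose_reply(text)                 # 8-9. dialogue sentence
    pcm = synthesize(reply)                     # 10. PCM for reproduction
    log.append("utterance_started")             # 11-12. status change/notification
    log.append("behavior_started")              # 13-14. behavior pattern + start
    log.append("sound_emitted")                 # 15. loudspeaker output
    return pcm
```

The point of the ordering is that the behavior start command (14) is issued only after the utterance start notification (12), so gestures accompany the voice rather than preceding it.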
Processing (11) is as described above; processing (12) is the same as processing (2) of the first embodiment, and processing (13) is the same as processing (3) of the first embodiment, so the description of these processes is omitted.
(Effect)
As described above, at the dialogue start time with the user, the posture of robot 201 becomes the utterance-intention presentation posture for informing the user that the robot has an intention to speak. That is, at the dialogue start time with the user, robot 201 can indicate to the user whether the robot itself has an intention to speak. As a result, the dialogue between the user and robot 201 can be carried out smoothly. Moreover, in this case, since the speech recognition device 23 is provided in the server on the network, speech recognition processing is not performed inside robot 201, so the processing load on robot 201 can be reduced.
In the present embodiment, the dialogue start time with the user is set to the point at which the input of the user's voice ends, but the dialogue start time is not limited to this; it may also be set to the point at which the robot starts reproducing voice to the user.
In addition, as in the first embodiment, since the input end time of the input device 22 is also the speech recognition start time, the dialogue start time with the user may also be the speech recognition start time of the robot. Further, if a switch is provided on the housing of robot 201 such that the microphone 21 is turned on while the switch is pressed and muted when the switch is released, the time at which the switch is released may be taken as the dialogue start time with the user. Alternatively, the following may be taken as the dialogue start time with the user: using a camera provided in the housing of robot 201, a person is detected in the camera image, and the point at which the person's lip movement is detected to have ended is regarded as the dialogue start time.
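The alternative triggers listed above can all be treated as interchangeable events that mark the dialogue start time. The sketch below gathers them into one dispatch table; the event names are illustrative assumptions.

```python
# Any one of these events may be adopted as the "dialogue start time with
# the user"; the names are hypothetical labels for the triggers in the text.
DIALOGUE_START_EVENTS = {
    "voice_input_ended",          # end of voice input at input device 22
    "reproduction_started",       # robot starts reproducing voice to the user
    "speech_recognition_started", # coincides with input end in this embodiment
    "mic_switch_released",        # push-to-talk switch on the housing released
    "lip_motion_ended",           # camera detects the user's lips stop moving
}

def is_dialogue_start(event: str) -> bool:
    """Decide whether an incoming event should trigger the posture check."""
    return event in DIALOGUE_START_EVENTS
```

Structuring the triggers this way makes it easy to swap one trigger for another without touching the posture control logic itself.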
In addition, when robot 201 is about to start speaking, there are cases where the dialogue device 24 has formed a spoken response, and cases where no utterance content can be formed because the user's utterance was insufficient or meaningless (for example, a sneeze). In the latter case, instead of speaking, robot 201 may perform an utterance-intention cancellation behavior that indicates, from the utterance-intention presentation posture, that it has no intention to speak.
Also, in the posture control device 3 configured as described above, the posture storage device 33 and the behavior pattern storage device 34 are shown as separate devices, but the two may also be implemented as a single storage device.
In the first and second embodiments described above, the posture control of robots 101 and 201 is performed based on the output signals of the voice unit 2 (the signal from the input device 22 indicating the end of voice input, and the signal from the reproduction status acquisition device 28 indicating the start of voice reproduction). In contrast, the third embodiment below describes an example based on a face image of the user captured by a camera.
<Third Embodiment>
Still another embodiment of the present invention is described below. For convenience of description, components having the same functions as those described in the first embodiment are given the same reference numerals, and their description is omitted.
(Overview of the robot)
Fig. 5 is a schematic configuration diagram of the robot 301 of the present embodiment. The configuration of robot 301 is substantially the same as that of robot 101 of the first embodiment, the difference being that an image unit 4 is newly provided. The image unit 4 includes a camera 41 for capturing the user's face and an image acquisition device (image acquisition unit) 42 for obtaining the face image captured by the camera 41. The image unit 4 further includes an image judgment device (image judgment unit) 43, which judges whether the face image acquired by the image acquisition device 42 is an image indicating that the user's utterance has ended.
The camera 41 is a digital camera that photographs the user, i.e., the dialogue partner of robot 301; it may be a camera of any type and form, as long as it can be mounted inside robot 301. The image acquisition device 42 is a device that obtains the user's face image from the user image captured by the camera 41. The image acquisition device 42 sends the acquired user face image to the image judgment device 43.
The image judgment device 43 performs face recognition on the user's face image sent from the image acquisition device 42, and judges from the recognition result whether the image indicates that the user's utterance has ended. Here, it is judged whether the face image shows the user's mouth closed. The judgment result indicating that the face image shows the user's mouth closed is then sent to the posture control device 3. That is, the posture control device 3 performs the posture control of robot 301 at the point when the user's mouth is closed. In other words, in the present embodiment, the point at which the posture control device 3 judges the posture of robot 301, i.e., the dialogue start time with the user, is set to the point at which the image judgment device 43 judges the image to indicate the end of the user's utterance.
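The mouth-closed judgment can be sketched as a simple threshold test on a mouth-openness measure. Both the openness measure and the threshold value are illustrative assumptions standing in for the output of an actual face-recognition step.

```python
# Assumed: a face-recognition stage yields a mouth-openness score per frame,
# where 0.0 means fully closed. The threshold is a hypothetical value.
MOUTH_CLOSED_THRESHOLD = 0.1

def utterance_ended(mouth_openness: float) -> bool:
    """True when the face image shows the user's mouth closed."""
    return mouth_openness < MOUTH_CLOSED_THRESHOLD

def dialogue_start_frames(openness_series: list) -> list:
    """Indices of frames at which posture control would be triggered."""
    return [i for i, v in enumerate(openness_series) if utterance_ended(v)]
```

In practice one would also debounce the signal (require several consecutive closed frames) so that a brief pause mid-utterance is not mistaken for the end of speech.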
(Effect)
As described above, at the dialogue start time with the user (the point at which the image judgment device 43 judges the image to indicate the end of the user's utterance), the posture of robot 301 becomes the utterance-intention presentation posture for informing the user that the robot has an intention to speak. That is, at the dialogue start time with the user, robot 301 can indicate to the user whether the robot itself has an intention to speak. As a result, the dialogue between the user and robot 301 can be carried out smoothly.
(Variation)
Fig. 6 is a schematic configuration block diagram of a robot 401, a variation of the robot 301 shown in Fig. 5. The configuration of robot 401 is substantially the same as that of robot 201 of the second embodiment, the difference being that an image unit 4 is newly provided. The posture control using the image unit 4 is the same as that of robot 301 shown in Fig. 5, and its description is therefore omitted.
(Effect)
Robot 401 provides substantially the same effects as robot 301; in addition, since the speech recognition device 23 is provided in the server on the network, speech recognition processing need not be performed inside robot 401. This yields the further effect that the processing load on robot 401 can be reduced.
Further, in both the robot 301 of the present embodiment and the robot 401 of the variation, the dialogue start time with the user is set to the point at which the image is judged to indicate the end of the user's utterance, but it is not limited to this. For example, as in the robot 101 of the first embodiment and the robot 201 of the second embodiment, it may also be the reception time of an output signal from the voice unit 2 (the signal from the input device 22 indicating the end of voice input, or the signal from the reproduction status acquisition device 28 indicating the start of voice reproduction).
[Realization by software]
The control blocks of the drive control device 31 may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
In the latter case, the drive control device 31 includes: a CPU that executes the commands of a program, which is software realizing each function; a ROM (Read Only Memory) or storage device (these are referred to as "recording media") in which the program and various data are stored so as to be readable by a computer (or CPU); and a RAM (Random Access Memory) in which the program is expanded. The object of the present invention is achieved by the computer (or CPU) reading the program from the recording medium and executing it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. The program may also be supplied to the computer via an arbitrary transmission medium (a communication network, broadcast waves, or the like) capable of transmitting the program. The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
[Summary]
A posture control device according to aspect 1 of the present invention is a posture control device that is mounted in a robot (101, 201, 301, 401) and controls the posture of the robot (101, 201, 301, 401), the robot being capable of dialogue with a user and of driving a plurality of driving units (driving unit 1) to assume a variety of postures. The posture control device includes: a posture determination unit (31a) that determines the posture of the robot (101, 201, 301, 401) from the driving states of the respective driving units (driving unit 1); and a drive control unit (31b) that performs drive control of the respective driving units (driving unit 1). When, at the dialogue start time with the user, the posture of the robot (101, 201, 301, 401) determined by the posture determination unit (31a) is not an utterance-intention presentation posture indicating that the robot (101, 201, 301, 401) has an intention to speak, the drive control unit (31b) drives the respective driving units (driving unit 1) so that the robot (101, 201, 301, 401) assumes the utterance-intention presentation posture.
According to the above configuration, at the dialogue start time with the user, the posture of the robot (101, 201, 301, 401) can always be set to the utterance-intention presentation posture, and the user can easily recognize visually, from the posture of the robot, that the robot has an intention to speak.
As a result, at the dialogue start time with the user, the robot itself can clearly indicate its intention to speak to the user, so the dialogue between the user and the robot can be carried out smoothly; consequently, natural nonverbal communication can be realized between the user and the robot.
A posture control device according to aspect 2 of the present invention is the posture control device of aspect 1, wherein, when the robot (101, 201, 301, 401) receives the user's voice as input and carries out the dialogue with the user by reproducing voice to the user according to the input voice, the dialogue start time with the user may be the time at which the robot (101, 201, 301, 401) starts reproducing voice to the user.
According to the above configuration, since the dialogue start time with the user is the time at which the robot (101, 201, 301, 401) starts reproducing voice to the user, the utterance-intention presentation posture can be made toward the user at the moment the robot attempts to speak. The posture and voice of the robot thereby clearly convey the robot's intention to speak to the user.
A posture control device according to aspect 3 of the present invention is the posture control device of aspect 1, wherein, when the robot (101, 201, 301, 401) receives the user's voice as input and carries out the dialogue with the user by reproducing voice to the user according to the input voice, the dialogue start time with the user may be the time at which the input of the user's voice to the robot (101, 201, 301, 401) ends.
According to the above configuration, since the dialogue start time with the user is the time at which the input of the user's voice to the robot (101, 201, 301, 401) ends, the utterance-intention presentation posture is made toward the user at the moment the user's utterance ends. The robot can thereby promptly inform the user of its intention to speak.
A posture control device according to aspect 4 of the present invention is the posture control device of aspect 1, further including: an image acquisition unit (image acquisition device 42) that obtains a face image capturing the user's face; and an image judgment unit (image judgment device 43) that judges whether the acquired face image is an image indicating that the user's utterance has ended, wherein the dialogue start time with the user may be the time at which the image judgment unit (image judgment device 43) judges the image to indicate that the user's utterance has ended.
According to the above configuration, since the dialogue start time with the user is the time at which the image judgment unit (image judgment device 43) judges the image to indicate the end of the user's utterance, the utterance-intention presentation posture can be made toward the user at the moment the user's utterance ends. The robot can thereby promptly inform the user of its intention to speak.
A robot according to aspect 5 of the present invention includes the posture control device according to any one of aspects 1 to 4. According to the above configuration, the robot can explicitly inform the user of its intention to speak.
A posture control method according to aspect 6 of the present invention is a posture control method for controlling the posture of a robot (101, 201, 301, 401), the robot being capable of dialogue with a user and of driving a plurality of driving units (driving unit 1) to assume a variety of postures. The posture control method includes: a posture determination step of determining the posture of the robot (101, 201, 301, 401) at the dialogue start time with the user; and a drive control step of, when the posture of the robot (101, 201, 301, 401) determined in the posture determination step is not an utterance-intention presentation posture indicating that the robot (101, 201, 301, 401) has an intention to speak, driving the respective driving units so that the robot (101, 201, 301, 401) assumes the utterance-intention presentation posture. With this configuration, the same effects as those of aspect 1 can be obtained.
The posture control device of each aspect of the present invention may also be realized by a computer. In this case, a posture control program that realizes the posture control device using a computer by causing the computer to operate as the respective units (software elements) of the posture control device, and a computer-readable recording medium storing the program, also fall within the scope of the present invention.
The present invention is not limited to the above-described embodiments, and various modifications are possible within the scope shown in the claims; embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of the present invention. Moreover, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
Reference Signs List
1 driving unit (driving section), 2 voice unit, 3 posture control device, 4 image unit, 21 microphone, 22 input device, 23 speech recognition device, 24 dialogue device, 25 speech synthesis device, 26 reproduction device, 27 loudspeaker, 28 reproduction status acquisition device, 29 communication device, 31 drive control device, 31a posture determination unit, 31b drive control unit, 32 housing state acquisition device, 33 posture storage device, 34 behavior pattern storage device, 41 camera, 42 image acquisition device (image acquisition unit), 43 image judgment device (image judgment unit), 101, 201, 301, 401 robots.

Claims (6)

1. A posture control device that is mounted in a robot and controls a posture of the robot, the robot being capable of dialogue with a user and of driving a plurality of driving units to assume a variety of postures, the posture control device comprising:
a posture determination unit that determines the posture of the robot according to driving states of the respective driving units; and
a drive control unit that performs drive control of the respective driving units,
wherein, when, at a dialogue start time with the user, the posture of the robot determined by the posture determination unit is not an utterance-intention presentation posture indicating that the robot has an intention to speak, the drive control unit drives the respective driving units so that the robot assumes the utterance-intention presentation posture.
2. The posture control device according to claim 1, wherein,
when the robot receives the user's voice as input and carries out the dialogue with the user by reproducing voice to the user according to the input voice,
the dialogue start time with the user is a time at which the robot starts reproducing voice to the user.
3. The posture control device according to claim 1, wherein,
when the robot receives the user's voice as input and carries out the dialogue with the user by reproducing voice to the user according to the input voice,
the dialogue start time with the user is a time at which input of the user's voice to the robot ends.
4. The posture control device according to claim 1, further comprising:
an image acquisition unit that obtains a face image capturing the user's face; and
an image judgment unit that judges whether the face image obtained by the image acquisition unit is an image indicating that the user's utterance has ended,
wherein the dialogue start time with the user is a time at which the image judgment unit judges the image to indicate that the user's utterance has ended.
5. A robot capable of dialogue with a user and of driving a plurality of driving units to assume a variety of postures, wherein
the robot includes the posture control device according to any one of claims 1 to 4.
6. A posture control method for controlling a posture of a robot, the robot being capable of dialogue with a user and of driving a plurality of driving units to assume a variety of postures, the posture control method comprising:
a posture determination step of determining the posture of the robot at a dialogue start time with the user; and
a drive control step of, when the posture of the robot determined in the posture determination step is not an utterance-intention presentation posture indicating that the robot has an intention to speak, driving the respective driving units so that the robot assumes the utterance-intention presentation posture.
CN201780007508.6A 2016-02-25 2017-02-17 Posture control device, robot and posture control method Pending CN108698231A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2016-034712 2016-02-25
JP2016034712 2016-02-25
PCT/JP2017/005857 WO2017145929A1 (en) 2016-02-25 2017-02-17 Pose control device, robot, and pose control method

Publications (1)

Publication Number Publication Date
CN108698231A true CN108698231A (en) 2018-10-23

Family

ID=59686191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780007508.6A Pending CN108698231A (en) 2016-02-25 2017-02-17 Posture control device, robot and posture control method

Country Status (3)

Country Link
JP (1) JPWO2017145929A1 (en)
CN (1) CN108698231A (en)
WO (1) WO2017145929A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004034274A (en) * 2002-07-08 2004-02-05 Mitsubishi Heavy Ind Ltd Conversation robot and its operation method
JP2006181651A (en) * 2004-12-24 2006-07-13 Toshiba Corp Interactive robot, voice recognition method of interactive robot and voice recognition program of interactive robot
JP2007155986A (en) * 2005-12-02 2007-06-21 Mitsubishi Heavy Ind Ltd Voice recognition device and robot equipped with the same
JP2009222969A (en) * 2008-03-17 2009-10-01 Toyota Motor Corp Speech recognition robot and control method for speech recognition robot
JP2013237124A (en) * 2012-05-15 2013-11-28 Fujitsu Ltd Terminal device, method for providing information, and program
CN103753578A (en) * 2014-01-24 2014-04-30 成都万先自动化科技有限责任公司 Wearing service robot
CN104951077A (en) * 2015-06-24 2015-09-30 百度在线网络技术(北京)有限公司 Man-machine interaction method and device based on artificial intelligence and terminal equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4477921B2 (en) * 2004-03-31 2010-06-09 本田技研工業株式会社 Mobile robot
JP4976903B2 (en) * 2007-04-05 2012-07-18 本田技研工業株式会社 robot
JP5982840B2 (en) * 2012-01-31 2016-08-31 富士通株式会社 Dialogue device, dialogue program, and dialogue method
US9044863B2 (en) * 2013-02-06 2015-06-02 Steelcase Inc. Polarized enhanced confidentiality in mobile camera applications
JP6150429B2 (en) * 2013-09-27 2017-06-21 株式会社国際電気通信基礎技術研究所 Robot control system, robot, output control program, and output control method

Also Published As

Publication number Publication date
WO2017145929A1 (en) 2017-08-31
JPWO2017145929A1 (en) 2018-10-25

Similar Documents

Publication Publication Date Title
CN103456299B (en) A kind of method and device controlling speech recognition
JP6505748B2 (en) Method for performing multi-mode conversation between humanoid robot and user, computer program implementing said method and humanoid robot
CN107077840B (en) Speech synthesis apparatus and method
KR20100062207A (en) Method and apparatus for providing animation effect on video telephony call
CN202150884U (en) Handset mood-induction device
CN102355527A (en) Mood induction apparatus of mobile phone and method thereof
TW201327226A (en) Electronic device and method thereof for offering mood services according to user expressions
JP2003248837A (en) Device and system for image generation, device and system for sound generation, server for image generation, program, and recording medium
JP7279494B2 (en) CONFERENCE SUPPORT DEVICE AND CONFERENCE SUPPORT SYSTEM
CN109935226A (en) A kind of far field speech recognition enhancing system and method based on deep neural network
JP5083033B2 (en) Emotion estimation device and program
CN108304121A (en) The control method and device of PowerPoint
EP3982358A2 (en) Whisper conversion for private conversations
CN111936964A (en) Non-interruptive NUI command
WO2018135276A1 (en) Speech and behavior control device, robot, control program, and control method for speech and behavior control device
JP4599606B2 (en) Head motion learning device, head motion synthesis device, and computer program for automatic head motion generation
EP1670165B1 (en) Method and model-based audio and visual system for displaying an avatar
JP6448950B2 (en) Spoken dialogue apparatus and electronic device
CN108698231A (en) Posture control device, robot and posture control method
CN109249386A (en) Voice dialogue robot and speech dialogue system
CN109922397B (en) Intelligent audio processing method, storage medium, intelligent terminal and intelligent Bluetooth headset
CN109891501A (en) Voice adjusts the control method of device, control program, electronic equipment and voice adjustment device
CN106469553A (en) Audio recognition method and device
CN109977411A (en) A kind of data processing method, device and electronic equipment
US8666549B2 (en) Automatic machine and method for controlling the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181023