CN110136704A - Robot voice control method and device, robot and medium - Google Patents
- Publication number
- CN110136704A (application CN201910265960.3A)
- Authority
- CN
- China
- Prior art keywords
- robot
- phonetic order
- control
- class phonetic
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
Embodiments of the disclosure provide a robot voice control method and device, a robot, and a medium. The method comprises the following steps: receiving a control-type voice instruction; matching the control-type voice instruction against instructions stored in the robot's local database; and, if the match succeeds, executing the content of the control-type voice instruction. In the disclosed embodiments, when a voice instruction is received, the robot's speech recognition system is first placed in an activated state and the robot is controlled to turn toward the direction of the sound source, placing it in a standby state. Within a certain time, the robot then receives a control command for performing an operation, semantically matches the control command against instructions pre-stored in the robot's local database and in the cloud, and performs the expected action according to the match result. The method and device can operate accurately as instructed, improve the recognition rate of voice instructions input by the user, allow the robot to work accurately according to the user's voice instructions, and make human-machine interaction more engaging.
Description
Technical field
This disclosure relates to the field of control technology, and more particularly to a robot voice control method and device, a robot, and a medium.
Background technique
With the development of technology, various robots equipped with speech recognition systems have appeared, such as sweeping robots, mopping robots, vacuum cleaners, and weeders. Through the speech recognition system, these robots can receive voice instructions input by the user and execute the operations the instructions indicate, which not only frees up labor but also saves labor costs.
In a typical voice control process, the robot receives a voice signal, recognizes the semantics of the signal, and finally acts according to the semantic content. However, such a semantic analysis process leads to low execution efficiency, and sometimes the semantic meaning cannot be obtained accurately, causing wrong execution and bringing great trouble to voice control.
Summary of the invention
In view of this, embodiments of the present disclosure provide a robot voice control method and device, a robot, and a storage medium, so that the robot can match a speech database accurately and execute voice control instructions accurately.
In a first aspect, an embodiment of the present disclosure provides a robot voice control method, the method comprising:
receiving a control-type voice instruction;
matching the control-type voice instruction against instructions stored in the robot's local database;
if the match succeeds, executing the content of the control-type voice instruction.
In some possible implementations, after the executing the content of the control-type voice instruction if the match succeeds, the method comprises:
if the match fails, detecting whether the robot is connected to a cloud;
if the robot is not connected to the cloud, restoring the state before the control-type voice instruction was received.
In some possible implementations, after the restoring, if the robot is not connected to the cloud, the state before the control-type voice instruction was received, the method comprises:
if the robot is connected to the cloud, matching the control-type voice instruction against the cloud database;
if the match succeeds, executing the content of the control-type voice instruction.
In some possible implementations, after the executing the content of the control-type voice instruction if the match succeeds, the method comprises:
if the match fails, restoring the state before the control-type voice instruction was received.
In some possible implementations, before the receiving a control-type voice instruction, the method comprises:
receiving a wake-up voice instruction;
identifying the sound source direction of the wake-up voice instruction and turning the robot toward the sound source direction.
In some possible implementations, the identifying the sound source direction of the wake-up voice instruction and turning the robot toward the sound source direction comprises:
identifying the sound source direction of the wake-up voice instruction;
turning the robot toward the sound source direction while a drive motor keeps running;
stopping the running of the drive motor.
In some possible implementations, the state before the control-type voice instruction includes: a stationary state or a sweeping state.
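The first-aspect steps above describe a local-first, cloud-fallback decision flow. A minimal sketch of that flow follows; the `Robot` class and database dictionaries are hypothetical stand-ins for illustration, not the patent's implementation:

```python
class Robot:
    """Toy stand-in for the robot runtime (hypothetical)."""
    def __init__(self, cloud_connected):
        self.state = "stationary"          # or "sweeping"
        self.cloud_connected = cloud_connected
        self.actions = []

    def execute(self, action):
        self.actions.append(action)
        self.state = "sweeping"


def handle_control_instruction(instruction, local_db, cloud_db, robot):
    """Local match -> execute; else cloud match (only if connected) -> execute;
    otherwise restore the state held before the instruction arrived."""
    prior_state = robot.state
    if instruction in local_db:            # local database match succeeds
        robot.execute(local_db[instruction])
        return "executed-local"
    if not robot.cloud_connected:          # no cloud: restore prior state
        robot.state = prior_state
        return "restored"
    if instruction in cloud_db:            # cloud database match succeeds
        robot.execute(cloud_db[instruction])
        return "executed-cloud"
    robot.state = prior_state              # cloud match fails: restore
    return "restored"
```

The ordering mirrors the claims: the cloud is consulted only after a local miss, and every failure path ends by restoring the pre-instruction state (stationary or sweeping).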
In a second aspect, an embodiment of the present disclosure provides a robot voice control device, comprising:
a first receiving unit, configured to receive a control-type voice instruction;
a first matching unit, configured to match the control-type voice instruction against instructions stored in the robot's local database;
a first execution unit, configured to execute the content of the control-type voice instruction if the match succeeds.
In some possible implementations, the device further comprises:
a detection unit, configured to detect, if the match fails, whether the robot is connected to a cloud;
a first recovery unit, configured to restore, if the robot is not connected to the cloud, the state before the control-type voice instruction was received.
In some possible implementations, the device further comprises:
a second matching unit, configured to match, if the robot is connected to the cloud, the control-type voice instruction against the cloud database;
a second execution unit, configured to execute the content of the control-type voice instruction if the match succeeds.
In some possible implementations, the device further comprises:
a second recovery unit, configured to restore, if the match fails, the state before the control-type voice instruction was received.
In some possible implementations, the device further comprises:
a second receiving unit, configured to receive a wake-up voice instruction;
a recognition unit, configured to identify the sound source direction of the wake-up voice instruction and turn the robot toward the sound source direction.
In some possible implementations, the recognition unit is further configured to:
identify the sound source direction of the wake-up voice instruction;
turn the robot toward the sound source direction while a drive motor keeps running;
stop the running of the drive motor.
In some possible implementations, the state before the control-type voice instruction includes: a stationary state or a sweeping state.
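The recognition unit's task — estimating the direction of the wake-up voice and computing the turn toward it — is not algorithmically specified in this disclosure. One standard approach with a microphone array is time-difference-of-arrival (TDOA) estimation; the sketch below is a generic far-field two-microphone version with assumed names, not the patent's method:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doa_from_tdoa(delay_s, mic_spacing_m):
    """Angle of the sound source relative to the microphone-pair axis,
    from the inter-microphone arrival delay (far-field assumption):
    delay = spacing * cos(theta) / c."""
    cos_theta = SPEED_OF_SOUND * delay_s / mic_spacing_m
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp numeric noise
    return math.degrees(math.acos(cos_theta))

def turn_angle(current_heading_deg, source_angle_deg):
    """Signed turn in (-180, 180] the drive motors should execute so the
    robot faces the source, matching the 'turn while the motor runs,
    then stop the motor' steps above."""
    diff = (source_angle_deg - current_heading_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff
```

A zero delay means the source lies broadside to the pair (90 degrees); the signed turn keeps the rotation minimal, so the motor can be stopped as soon as the heading matches.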
In a third aspect, an embodiment of the present disclosure provides a robot voice control device, comprising a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, any of the above method steps are realized.
In a fourth aspect, an embodiment of the present disclosure provides a robot, comprising any of the devices described above.
In a fifth aspect, an embodiment of the present disclosure provides a non-transient computer-readable storage medium storing computer program instructions which, when called and executed by a processor, realize any of the above method steps.
Compared with the prior art, the disclosure has at least the following technical effects:
When receiving a voice instruction, the embodiments of the present disclosure first place the robot's speech recognition system in an activated state and control the robot to turn toward the direction of the sound source, placing it in a standby state. Within a certain period of time, the robot then receives a control command for performing an operation, semantically matches the control command against instructions pre-stored in the robot's local database and in the cloud, and performs the expected action according to the match result. The disclosure can operate accurately as instructed, improves the recognition rate of voice instructions input by the user, works relatively accurately according to the user's voice instructions, and also makes human-machine interaction more engaging.
Detailed description of the invention
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
Fig. 2 is a top view of the robot structure provided by an embodiment of the present disclosure;
Fig. 3 is a bottom view of the robot structure provided by an embodiment of the present disclosure;
Fig. 4 is a front view of the robot structure provided by an embodiment of the present disclosure;
Fig. 5 is a perspective view of the robot structure provided by an embodiment of the present disclosure;
Fig. 6 is a block diagram of the robot structure provided by an embodiment of the present disclosure;
Fig. 7 is a schematic flowchart of the robot voice control method provided by one embodiment of the disclosure;
Fig. 8 is a schematic flowchart of the robot voice control method provided by another embodiment of the disclosure;
Fig. 9 is a schematic structural diagram of the robot voice control device provided by one embodiment of the disclosure;
Fig. 10 is a schematic structural diagram of the robot voice control device provided by another embodiment of the disclosure;
Fig. 11 is a schematic diagram of the electronic structure of the robot provided by an embodiment of the present disclosure.
Specific embodiment
To make the purposes, technical schemes, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the disclosure, not all of them. Based on the embodiments in the disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the disclosure.
It will be appreciated that although the terms first, second, third, etc. may be used in the embodiments of the present disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish the elements from each other. For example, without departing from the scope of the embodiments of the disclosure, a first element could also be termed a second element, and similarly, a second element could also be termed a first element.
In order to describe the behavior of the robot clearly, the following direction definitions are made:
As shown in Fig. 5, the robot 100 can travel on the ground through various combinations of movement relative to the following three mutually perpendicular axes defined by the main body 110: the front-rear axis X, the lateral axis Y, and the central vertical axis Z. The forward drive direction along the front-rear axis X is denoted "forward", and the rearward drive direction along the front-rear axis X is denoted "backward". The lateral axis Y extends between the right wheel and the left wheel of the robot, substantially along the axle center defined by the center points of the drive wheel modules 141.
The robot 100 can rotate about the Y axis. When the forward portion of the robot 100 tilts upward and the rearward portion tilts downward, this is "pitching up"; when the forward portion tilts downward and the rearward portion tilts upward, this is "pitching down". In addition, the robot 100 can rotate about the Z axis: in the forward direction of the robot, turning toward the right side of the X axis is a "right turn", and turning toward the left side of the X axis is a "left turn".
Referring to Fig. 1, a possible application scenario provided by the embodiments of the present disclosure includes a robot, such as a sweeping robot, a mopping robot, a vacuum cleaner, or a weeder. In certain embodiments, the robot may specifically be a sweeping robot or a mopping robot. In an implementation, the robot may be provided with a speech recognition system to receive voice instructions issued by the user, and rotates in the arrow direction according to the voice instruction, so as to respond to the user's voice instruction. The robot may also be provided with a speech output device to output prompt voices. In other embodiments, the robot may be provided with a touch-sensitive display to receive operation instructions input by the user. The robot may also be provided with wireless communication modules such as a WIFI module and a Bluetooth module, so as to connect with an intelligent terminal and receive, through the wireless communication module, operation instructions transmitted by the user using the intelligent terminal.
The structure of the relevant robot is described as follows, as shown in Figs. 2-5:
The robot 100 includes a machine body 110, a perception system 120, a control system, a drive system 140, a cleaning system, an energy system, and a human-machine interaction system 170, as shown in Fig. 2.
The machine body 110 includes a forward portion 111 and a rearward portion 112, and has an approximately circular shape (circular both front and rear); it may also have other shapes, including but not limited to an approximate D-shape with a flat front and a rounded rear.
As shown in Fig. 4, the perception system 120 includes a position determining device 121 located on top of the machine body 110, a buffer 122 located at the forward portion 111 of the machine body 110, a cliff sensor 123, and sensing devices such as an ultrasonic sensor, an infrared sensor, a magnetometer, an accelerometer, a gyroscope, and an odometer, which provide various position information and motion state information of the machine to the control system 130. The position determining device 121 includes, but is not limited to, a camera and a laser distance sensor (LDS). A laser distance sensor based on triangulation ranging is taken below as an example to illustrate how position determination is carried out. The basic principle of triangulation ranging is the proportional relationship of similar triangles, which is not repeated here.
The laser distance sensor includes a light emitting unit and a light receiving unit. The light emitting unit may include a light source that emits light; the light source may include a light-emitting element, such as an infrared or visible-light light emitting diode (LED) that emits infrared light or visible light. Preferably, the light source may be a light-emitting element that emits a laser beam. In this embodiment, a laser diode (LD) is taken as the example of the light source. Specifically, owing to the monochromatic, directional, and collimated properties of the laser beam, using a laser light source can make the measurement more accurate than other light sources. For example, compared with a laser beam, the infrared light or visible light emitted by a light emitting diode (LED) is affected by surrounding environmental factors (such as the color or texture of an object), which may decrease measurement accuracy. The laser diode (LD) may be a point laser, which measures two-dimensional position information of an obstacle, or a line laser, which measures three-dimensional position information of the obstacle within a certain range.
The light receiving unit may include an image sensor, on which a spot of light reflected or scattered by an obstacle is formed. The image sensor may be a set of multiple unit pixels in a single row or multiple rows. These light-receiving elements can convert optical signals into electrical signals. The image sensor may be a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor; owing to the cost advantage, a complementary metal-oxide-semiconductor (CMOS) sensor is preferred. Moreover, the light receiving unit may include a light-sensing lens assembly. Light reflected or scattered by an obstacle may travel via the lens assembly to form an image on the image sensor. The lens assembly may comprise a single lens or multiple lenses.
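The similar-triangles relationship that the text declines to repeat can be written down concretely. In a typical triangulation LDS, the emitter and the image sensor sit a baseline apart, and the spot's offset on the sensor shrinks as the target recedes. The symbol names below are assumed for illustration only:

```python
def triangulation_distance(baseline_m, focal_length_m, spot_offset_m):
    """Distance to the obstacle from similar triangles:
    d / baseline = focal_length / spot_offset  =>  d = baseline * f / x."""
    if spot_offset_m <= 0:
        raise ValueError("spot offset must be positive (target out of range)")
    return baseline_m * focal_length_m / spot_offset_m
```

The inverse relation d = s·f/x is also why the practical range limit discussed below exists: at long range the spot offset x falls toward the size of a single pixel, and distance resolution collapses.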
A base portion can support the light emitting unit and the light receiving unit, which are arranged on the base portion at a specific distance from each other. In order to measure the obstacle situation around the robot in 360 degrees, the base portion may be rotatably arranged on the main body 110, or the base portion itself may not rotate while a rotating element rotates the emitted and received light. The rotational angular velocity of the rotating element can be obtained by arranging an optocoupler and a code disc: the optocoupler senses the tooth gaps on the code disc, and the instantaneous angular velocity is obtained by dividing the angular distance between tooth gaps by the time taken to slip from one gap to the next. The denser the tooth gaps on the code disc, the higher the accuracy and precision of the measurement, but the more refined the structure must be and the higher the computational load; conversely, the sparser the tooth gaps, the lower the accuracy and precision, but the simpler the structure, the smaller the computational load, and some cost can be reduced.
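The instantaneous angular-velocity calculation described above — the angular pitch between adjacent tooth gaps divided by the measured gap-to-gap transit time — can be sketched as follows (the tooth count and timing values are assumed for illustration):

```python
import math

def instantaneous_angular_velocity(tooth_gap_count, gap_transit_time_s):
    """Angular pitch between adjacent tooth gaps on the code disc, divided by
    the time the optocoupler measured for one gap-to-gap transit -> rad/s."""
    angular_pitch_rad = 2.0 * math.pi / tooth_gap_count
    return angular_pitch_rad / gap_transit_time_s
```

For example, with an assumed 36 gaps and 0.005 s per transit, one revolution takes 0.18 s (about 34.9 rad/s). Doubling the gap count halves the angular pitch, giving finer velocity resolution at the cost of twice as many timing events per revolution — the accuracy/complexity trade-off the text describes.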
A data processing device connected with the light receiving unit, such as a DSP, records the obstacle distance values at all angles relative to the robot's 0-degree direction and sends them to a data processing unit in the control system 130, such as an application processor (AP) comprising a CPU. The CPU runs a localization algorithm based on a particle filter to obtain the current position of the robot, and maps according to this position for navigation. The localization algorithm preferably uses simultaneous localization and mapping (SLAM).
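The particle-filter localization step referred to above can be illustrated with a deliberately tiny one-dimensional version: predict each hypothesized position with the noisy motion command, weight it by how well a laser range reading fits, then resample. This is a generic textbook sketch under assumed parameters, not the patent's algorithm:

```python
import math
import random

def particle_filter_step(particles, control, measurement, wall_pos,
                         motion_noise=0.05, meas_noise=0.1):
    """One predict-weight-resample cycle of 1-D particle-filter localization
    against a wall at a known position. Each particle is a hypothesized robot
    position; `measurement` is the laser-measured distance to the wall."""
    # Predict: apply the motion command plus noise to every particle.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: unnormalized Gaussian likelihood of the laser reading
    # given each particle's predicted distance to the wall.
    weights = [math.exp(-((wall_pos - p - measurement) ** 2)
                        / (2.0 * meas_noise ** 2)) for p in moved]
    # Resample: draw a new particle set with probability proportional to weight.
    return random.choices(moved, weights=weights, k=len(particles))
```

Iterating this step concentrates the particle cloud near the true position; the position estimate is then simply the mean of the particles.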
Although, in principle, a laser distance sensor based on triangulation can measure distance values at unlimited distances beyond a certain point, long-range measurement — for example 6 meters or more — is in practice very difficult to realize. This is mainly due to the size limitation of the pixel unit on the sensor of the light receiving unit, and is also affected by the photoelectric conversion speed of the sensor, the data transmission speed between the sensor and the connected DSP, and the calculation speed of the DSP. The measured value of the laser distance sensor, when affected by temperature, will also undergo changes the system cannot tolerate, mainly because thermal expansion deformation of the structure between the light emitting unit and the light receiving unit changes the angle between the incident and emergent light, and the light emitting unit and light receiving unit themselves can also exhibit temperature drift. After the laser distance sensor has been used for a long time, deformation caused by the accumulation of many factors such as temperature change and vibration will also seriously affect the measurement result. The accuracy of the measurement result directly determines the accuracy of map drawing, and is the basis for the robot's further strategy implementation, so it is particularly important.
As shown in Fig. 3, the forward portion 111 of the machine body 110 can carry a buffer 122. During cleaning, while the drive wheel module 141 propels the robot across the ground, the buffer 122 detects, via a sensing system such as an infrared sensor, one or more events in the driving path of the robot 100. The robot can respond to events detected by the buffer 122, such as obstacles and walls, by controlling the drive wheel module 141 to react to the event, for example by moving away from the obstacle.
The control system 130 is arranged on the circuit board in the machine body 110 and includes a non-transitory memory, such as a hard disk, flash memory, or random access memory, and a communicating computation processor, such as a central processing unit or an application processor. Using a localization algorithm such as SLAM, the application processor draws an instant map of the environment in which the robot is located, according to the obstacle information fed back by the laser distance sensor. Combining the distance information and velocity information fed back by sensing devices such as the buffer 122, the cliff sensor 123, the ultrasonic sensor, the infrared sensor, the magnetometer, the accelerometer, the gyroscope, and the odometer, it comprehensively judges which working state the sweeper is currently in — such as crossing a threshold, getting onto a carpet, being located at a cliff, being stuck above or below, having a full dust box, or being picked up — and can also give a specific next-step action strategy for each situation, so that the robot's work better meets the owner's requirements and gives a better user experience. Further, the control system 130 can plan the most efficient and reasonable cleaning path and cleaning method based on the map information drawn by SLAM, greatly improving the sweeping efficiency of the robot.
The drive system 140 can maneuver the robot 100 to travel across the ground based on drive commands having distance and angle information, such as x, y, and θ components. The drive system 140 includes a drive wheel module 141, which can control the left wheel and the right wheel simultaneously; in order to control the machine's movement more accurately, the drive wheel module 141 preferably includes a left drive wheel module and a right drive wheel module. The left and right drive wheel modules are opposed along the lateral axis defined by the main body 110. In order for the robot to move more stably on the ground, or for stronger movement ability, the robot may include one or more driven wheels 142, including but not limited to universal wheels. The drive wheel module includes a traveling wheel, a drive motor, and a control circuit for controlling the drive motor; the drive wheel module can also be connected to a circuit for measuring the driving current and to the odometer. The drive wheel module 141 can be detachably connected to the main body 110, facilitating disassembly and maintenance. The drive wheel can have a biased drop suspension system, movably fastened — for example rotatably attached — to the robot body 110, and receiving a spring bias directed downward and away from the robot body 110. The spring bias allows the drive wheel to maintain contact and traction with the ground with a certain grounding force, while the cleaning element of the robot 100 also contacts the ground 10 with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. In a dry cleaning system, the main cleaning function comes from the sweeping system 151, constituted by the rolling brush, the dust box, the fan, the air outlet, and the connecting components between the four. The rolling brush, which has a certain interference with the ground, sweeps the rubbish on the ground and carries it to the front of the suction inlet between the rolling brush and the dust box; the rubbish is then sucked into the dust box by the suction airflow generated by the fan and passing through the dust box. The dust collection ability of the sweeper can be characterized by the dust pick-up efficiency, DPU (Dust Pick-Up efficiency). The sweeping efficiency DPU is influenced by the structure and material of the rolling brush, by the wind power utilization rate of the air duct constituted by the suction inlet, the dust box, the fan, the air outlet, and the connecting components between the four, and by the type and power of the fan, making it a complex system design problem. Compared with an ordinary plug-in vacuum cleaner, improving the dust collection ability is more significant for an energy-limited cleaning robot, because it directly reduces the energy requirement: a machine that could clean 80 square meters of ground on one charge can evolve to clean 100 square meters or more on one charge. With fewer charging cycles, the service life of the battery also greatly increases, so the frequency at which the user must replace the battery decreases. More intuitively and importantly, improved dust collection ability is the most obvious and important user experience: the user directly reaches the conclusion of whether the sweeping or mopping is clean. The dry cleaning system may also include a side brush 152 with a rotating shaft angled relative to the ground, for moving debris into the rolling brush region of the cleaning system.
The energy system includes a rechargeable battery, such as a nickel-metal hydride battery or a lithium battery. The rechargeable battery can be connected to a charging control circuit, a battery pack charging temperature detection circuit, and a battery undervoltage monitoring circuit, which are in turn connected to a single-chip microcomputer control circuit. The host charges by connecting with the charging pile through charging electrodes arranged on the side or bottom of the fuselage. If dust has attached to an exposed charging electrode, the cumulative effect of charge during the charging process can cause the plastic body around the electrode to melt and deform, or even cause the electrode itself to deform, so that normal charging can no longer continue.
The human-machine interaction system 170 includes keys on the host panel, with which the user selects functions; it may also include a display screen and/or indicator lights and/or a loudspeaker, which show the user the current machine state or function options; it may further include a mobile-phone client program. For a path-navigating cleaning device, the phone client can show the user a map of the environment where the device is located and the machine's current position, and can provide the user with richer and more user-friendly function items.
Fig. 6 is a block diagram of a sweeping robot according to the disclosure.
A sweeping robot according to the present example may include: a microphone-array unit for recognizing the user's voice, a communication unit for communicating with a remote-control device or other devices, a moving unit for driving the main body, a cleaning unit, and a memory unit for storing information. The input unit (keys of the sweeping robot, etc.), the object-detection sensors, the charging unit, the microphone-array unit, the direction-detection unit, the position-detection unit, the communication unit, the driving unit, and the memory unit may all be connected to the control unit, transmitting predetermined information to the control unit or receiving predetermined information from it.
The microphone-array unit may compare the voice input through the receiving units with information stored in the memory unit to determine whether the input voice corresponds to a specific command. If it is determined that the input voice corresponds to a specific command, the corresponding command is transmitted to the control unit. If the detected voice cannot be matched against the information stored in the memory unit, the detected voice may be treated as noise and ignored.
For example, the detected voice corresponds to the words "come, come here, arriving here, to here", and a text control command corresponding to the word "come here" is stored in the information of the memory unit. In such a case, the corresponding command can be transmitted to the control unit.
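The comparison described above can be sketched roughly as a lookup of detected phrases against stored text commands. All phrase and command names below are hypothetical placeholders, not the patent's actual data:

```python
# Hypothetical stored command table: detected phrase -> command sent to the control unit.
STORED_COMMANDS = {
    "come": "COME_HERE",
    "come here": "COME_HERE",
    "to here": "COME_HERE",
    "sweep": "SWEEP",
}

def match_command(detected_phrase: str):
    """Return the stored command for the phrase, or None (the phrase is treated as noise)."""
    return STORED_COMMANDS.get(detected_phrase.strip().lower())

print(match_command("Come here"))  # COME_HERE
print(match_command("hello"))      # None (ignored as noise)
```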
The direction-detection unit can detect the direction of a voice by using the time difference or level of the voice input to the multiple receiving units. The direction-detection unit transmits the detected voice direction to the control unit. The control unit can determine a movement path by using the voice direction detected by the direction-detection unit.
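For a pair of receiving units, the time-difference method mentioned above is often the far-field time-difference-of-arrival estimate. The patent does not specify the formula, so this is a hedged sketch under that assumption:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doa_angle(time_delta_s: float, mic_spacing_m: float) -> float:
    """Estimate the sound-source bearing (degrees from broadside) for a two-microphone
    pair from the arrival-time difference, using the far-field approximation
    sin(theta) = c * dt / d."""
    s = SPEED_OF_SOUND * time_delta_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp numerical noise into asin's domain
    return math.degrees(math.asin(s))

print(round(doa_angle(0.0, 0.1), 1))          # 0.0  (source straight ahead)
print(round(doa_angle(0.1 / 343.0, 0.1), 1))  # 90.0 (source fully to one side)
```

A real array with more than two microphones would combine several such pairwise estimates, but the principle is the same.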
The position-detection unit can detect the coordinates of the main body within predetermined map information. In one embodiment, information detected by a camera and the map information stored in the memory unit can be compared with each other to detect the current position of the main body. Besides the camera, the position-detection unit may also use the Global Positioning System (GPS).
In a broad sense, the position-detection unit can detect whether the main body is located at a specific position. For example, the position-detection unit may include a unit for detecting whether the main body is placed on the charging pile.
For example, in a method for detecting whether the main body is placed on the charging pile, whether the main body is at the charging position can be detected according to whether electric power is being input to the charging unit. As another example, a charge-position detection unit arranged on the main body or the charging pile can detect whether the main body is at the charging position.
The communication unit can transmit predetermined information to, and receive it from, the remote-control device or other devices. The communication unit can update the map information of the sweeping robot.
The driving unit can operate the moving unit and the cleaning unit. The driving unit can move the moving unit along the movement path determined by the control unit.
Predetermined information related to the operation of the sweeping robot is stored in the memory unit. For example, map information of the region where the sweeping robot is deployed, control-command information corresponding to voices recognized by the microphone-array unit, direction-angle information detected by the direction-detection unit, position information detected by the position-detection unit, and obstacle information detected by the object-detection sensors can all be stored in the memory unit.
The control unit can receive the information detected by the receiving units, the camera, and the object-detection sensors. Based on the transmitted information, the control unit can recognize the user's voice, detect the direction from which the voice came, and detect the position of the sweeping robot. In addition, the control unit can operate the moving unit and the cleaning unit.
As shown in Fig. 7, applied to the robot in the application scenario of Fig. 1, the user controls the robot through voice commands to execute relevant control instructions. An embodiment of the disclosure provides a robot voice control method, which includes the following method steps:
Step S702: receive a control-class voice instruction.
A control-class voice instruction indicates an operation, i.e., it instructs the robot to execute an operation. The operation may be a custom-defined operation or a system-default one, for example: a sweeping operation, a mopping operation, a weeding operation, etc. In particular embodiments, control-class voice instructions may be custom-defined or set by system default; for example, a control-class voice instruction may be a user-defined "sweep", "mop", or "clean". Control-class voice instructions (manipulation-class voice instructions) are stored in advance in the robot or in a cloud connected to the robot. For convenience of description, the control-class voice instruction "clean up" is used below as an example.
Judging whether a control-class voice instruction indicating an operation has been received may be done within a preset period, such as 1 minute or 2 minutes; the period can be preset through a touch device. Depending on what is monitored within this preset time range, one of the following two cases is executed.
In the first case, if it is determined that the control-class voice instruction has been received, the robot executes the operation indicated by the control-class voice instruction. For example, if the control command "clean" is monitored within 1 minute, the robot sweeps in the predetermined direction or position according to the user's command, until another wake-class voice control command is received.
In the second case, if it is determined that the control-class voice instruction has not been received, the robot turns back to its original direction and continues the original operation. For example, if the control command "clean" is not monitored within 1 minute, the robot sweeps according to its original direction or position, until a wake-class voice control command is received again.
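The preset-period monitoring in the two cases above might be sketched as a simple polling loop; `poll_fn` and the timing values are illustrative assumptions, not the patent's implementation:

```python
import time

def await_control_command(poll_fn, timeout_s: float = 60.0, poll_interval_s: float = 0.1):
    """Poll for a control-class command until the preset period elapses.
    poll_fn returns a command string or None. Returns the command, or None if
    the window expires (the caller then resumes the previous operation)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        cmd = poll_fn()
        if cmd is not None:
            return cmd
        time.sleep(poll_interval_s)
    return None

# Usage with a stub that delivers "sweep" on the third poll.
calls = iter([None, None, "sweep"])
print(await_control_command(lambda: next(calls), timeout_s=1.0, poll_interval_s=0.0))  # sweep
```

`time.monotonic()` is used rather than `time.time()` so the deadline is immune to wall-clock adjustments.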
Step S704: match the control-class voice instruction against the instructions stored in the robot's local database.
A robot usually has its own storage device, such as built-in memory, cache, or an external storage device (a USB drive or removable disk). At this point all voice instructions (including wake-class and control-class instructions) are stored by enumeration in the storage database: for example, wake-class voice instructions such as "open voice", "power on", "come", "come here", "to here", etc., and control-class voice instructions such as "sweep", "mop", "clean the living room", or "clean". After the robot receives a voice instruction, it first chooses to search its own storage database for a match; the match may be fuzzy or exact, and an indication of whether the instruction matched successfully is returned.
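The exact-then-fuzzy search of the local database could look roughly like the following; the instruction list and the use of `difflib` are illustrative assumptions, not the patent's implementation:

```python
import difflib

# Hypothetical local instruction database (wake-class and control-class entries together).
LOCAL_DB = ["open voice", "power on", "come", "come here", "sweep", "mop", "clean"]

def match_local(instruction: str, fuzzy: bool = True, cutoff: float = 0.6):
    """Try an exact match first; optionally fall back to fuzzy matching against
    the robot's local instruction database. Returns the matched entry or None."""
    text = instruction.strip().lower()
    if text in LOCAL_DB:
        return text
    if fuzzy:
        close = difflib.get_close_matches(text, LOCAL_DB, n=1, cutoff=cutoff)
        return close[0] if close else None
    return None

print(match_local("sweep"))   # sweep  (exact)
print(match_local("sweeep"))  # sweep  (fuzzy)
```

Raising `cutoff` makes the fuzzy step stricter; setting `fuzzy=False` gives the exact-match-only variant.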
Step S706: if the match succeeds, execute the content of the control-class voice instruction.
In some possible implementations, after the step of executing the content of the control-class voice instruction if the match succeeds, the method further comprises:
Step S708: if the match fails, detect whether the robot is connected to the cloud.
A failed match means that no corresponding voice instruction is stored in the local database; at this point the voice instruction can be analyzed through other channels, one way being to match it against the voice-instruction library stored in the cloud. Therefore, it is first necessary to detect whether the robot can connect to the Internet. If it cannot, it is confirmed that the voice instruction will not be executed; if it can, a second match needs to be performed.
Step S710: if not connected to the cloud, restore the state that existed before the control-class voice instruction was received.
In some possible implementations, the state before the control-class voice instruction includes a stationary state or a cleaning state. The stationary state refers to a non-working or standby state; the cleaning state refers to a state in which a cleaning action is being executed.
Step S712: if connected to the cloud, match the control-class voice instruction against the cloud database.
All voice instructions (including wake-class and control-class instructions) need to be stored by enumeration in the cloud: for example, wake-class voice instructions such as "open voice", "power on", "come", "come here", "to here", etc., and control-class voice instructions such as "sweep", "mop", "clean the living room", or "clean". After the robot connects to the cloud, it searches the cloud database for a match; the match may be fuzzy or exact, and an indication of whether the instruction matched successfully is returned.
Step S714: if the match succeeds, execute the content of the control-class voice instruction.
Step S716: if the match fails, restore the state that existed before the control-class voice instruction was received.
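Taken together, steps S702 through S716 form a local-first, cloud-fallback dispatch. A minimal sketch, with every callback a hypothetical stub rather than the patent's actual interfaces:

```python
def handle_control_instruction(instruction, match_local, cloud_connected, match_cloud,
                               execute, restore_previous_state):
    """Dispatch sketch of steps S702-S716: try the local database first, then the
    cloud database if reachable; otherwise restore the prior robot state."""
    if match_local(instruction):            # S704/S706
        execute(instruction)
        return "executed-local"
    if not cloud_connected():               # S708/S710
        restore_previous_state()
        return "restored"
    if match_cloud(instruction):            # S712/S714
        execute(instruction)
        return "executed-cloud"
    restore_previous_state()                # S716
    return "restored"

# Usage with stubs: local miss, cloud hit.
result = handle_control_instruction(
    "clean", match_local=lambda i: False, cloud_connected=lambda: True,
    match_cloud=lambda i: True, execute=lambda i: None,
    restore_previous_state=lambda: None)
print(result)  # executed-cloud
```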
In the embodiments of the disclosure, when a voice control command is received, the control instruction can be semantically matched against the instructions prestored in the robot's local database and in the cloud, and the expected action is executed according to the matching result. The disclosure can operate accurately as instructed and improves the recognition rate of the voice instructions input by the user; the robot can work relatively accurately according to the user's voice instructions, which also increases the interest of human-machine interaction.
In other embodiments, as shown in Fig. 8, applied to the robot in the application scenario of Fig. 1, the user controls the robot through voice instructions to execute relevant control instructions. An embodiment of the disclosure provides a robot voice control method, which includes the following method steps:
Step S800: receive a wake-class voice instruction.
Under normal conditions, the robot's speech-recognition system has a dormant state and an activated state. For example, when the robot is in a working or idle state, the speech-recognition system is dormant; in the dormant state, the speech-recognition system occupies almost no extra robot resources and will not recognize any voice instruction other than wake-class voice instructions.
If the speech-recognition system receives a wake-class voice instruction while dormant, it can be switched from the dormant state to the activated state. In the activated state, the speech-recognition system can recognize the voice instructions configured in it, such as control-class voice instructions.
Specifically, a wake-class voice instruction is used to wake the speech-recognition system, i.e., it instructs the speech-recognition system to enter the activated state. In an implementation, if the speech-recognition system is dormant when the robot receives the first voice instruction, the speech-recognition system is switched from the dormant state to the activated state; if the speech-recognition system is already active, it is kept in the activated state, or nothing is done. In particular embodiments, wake-class voice instructions may be custom-defined or set by system default; for example, a wake-class voice instruction may be a user-defined "open voice", "power on", "come", "come here", "to here", etc. For convenience of description, the wake-class voice instruction "come" is used below as an example.
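The dormant/activated behavior described above can be sketched as a two-state machine; the wake-word list is a placeholder drawn from the examples in the text:

```python
class SpeechRecognizer:
    """Minimal sketch of the dormant/activated state machine: while dormant,
    only wake-class instructions are recognized; a wake word activates the system."""
    WAKE_WORDS = {"open voice", "power on", "come", "come here", "to here"}

    def __init__(self):
        self.active = False  # start dormant

    def hear(self, phrase: str):
        phrase = phrase.strip().lower()
        if not self.active:
            if phrase in self.WAKE_WORDS:
                self.active = True
                return "activated"
            return None  # everything else is ignored while dormant
        return phrase    # active: pass the phrase on for command matching

sr = SpeechRecognizer()
print(sr.hear("sweep"))  # None (dormant, ignored)
print(sr.hear("come"))   # activated
print(sr.hear("sweep"))  # sweep
```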
Step S801: identify the sound-source direction of the wake-class voice instruction and make the robot turn toward that direction.
After the robot has received a wake-class activation instruction such as "come", it detects the direction of the voice through the direction-detection unit, for example by using the time difference or level of the voice input to the multiple receiving units. The direction-detection unit transmits the detected voice direction to the control unit. The control unit can use the voice direction detected by the direction-detection unit to control the drive system so that the robot performs a movement such as rotating in place, turning the robot's advancing direction toward the user's sound source. Such human-machine interaction resembles a person at work being called: the person stops the work at hand and turns to converse, making the interaction more humanized.
In some possible implementations, identifying the sound-source direction of the wake-class voice instruction and making the robot turn toward that direction specifically includes the following method steps:
Step S8011: identify the sound-source direction of the wake-class voice instruction;
Step S8013: make the robot turn toward the sound-source direction without stopping the operation of the driving motor;
After the robot has received a wake-class activation instruction such as "come", it detects the direction of the voice through the direction-detection unit, for example by using the time difference or level of the voice input to the multiple receiving units. The direction-detection unit transmits the detected voice direction to the control unit. The control unit can use the voice direction detected by the direction-detection unit to control the drive system so that the robot performs a movement such as rotating in place, turning the robot's advancing direction toward the user's sound source. During this process the robot does not stop its working state: the cleaning motor remains powered on.
Step S8015: stop the operation of the driving motor.
After the robot has turned to the sound-source direction, of all the drive systems only the speech-recognition system remains in the activated state; at this point the robot is in a fully armed state, detecting in real time whether a control command is issued.
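Steps S8011 through S8015 can be sketched as the following sequence; `FakeRobot` and its method names are hypothetical stand-ins for the drive and recognition subsystems, not the patent's interfaces:

```python
def wake_and_face_source(robot, source_angle_deg: float):
    """Sketch of steps S8011-S8015: turn toward the detected bearing without
    stopping the cleaning motor, then halt the drive and await commands."""
    robot.turn_to(source_angle_deg)  # S8013: cleaning motor stays powered while turning
    robot.stop_drive()               # S8015: then stop the driving motor
    robot.arm_recognizer()           # fully armed: only speech recognition stays active

class FakeRobot:
    """Test double that records the order of subsystem calls."""
    def __init__(self):
        self.log = []
    def turn_to(self, angle): self.log.append(("turn", angle))
    def stop_drive(self): self.log.append("stop")
    def arm_recognizer(self): self.log.append("armed")

r = FakeRobot()
wake_and_face_source(r, 30.0)
print(r.log)  # [('turn', 30.0), 'stop', 'armed']
```

The point of the ordering is that the drive motor is stopped only after the turn completes, matching the "do not stop the driving motor while turning" wording of step S8013.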
Step S802: receive a control-class voice instruction.
A control-class voice instruction indicates an operation, i.e., it instructs the robot to execute an operation. The operation may be a custom-defined operation or a system-default one, for example: a sweeping operation, a mopping operation, a weeding operation, etc. In particular embodiments, control-class voice instructions may be custom-defined or set by system default; for example, a control-class voice instruction may be a user-defined "sweep", "mop", or "clean". Control-class voice instructions (manipulation-class voice instructions) are stored in advance in the robot or in a cloud connected to the robot. For convenience of description, the control-class voice instruction "clean up" is used below as an example.
Judging whether a control-class voice instruction indicating an operation has been received may be done within a preset period, such as 1 minute or 2 minutes; the period can be preset through a touch device. Depending on what is monitored within this preset time range, one of the following two cases is executed.
In the first case, if it is determined that the control-class voice instruction has been received, the robot executes the operation indicated by the control-class voice instruction. For example, if the control command "clean" is monitored within 1 minute, the robot sweeps in the predetermined direction or position according to the user's command, until another wake-class voice control command is received.
In the second case, if it is determined that the control-class voice instruction has not been received, the robot turns back to its original direction and continues the original operation. For example, if the control command "clean" is not monitored within 1 minute, the robot sweeps according to its original direction or position, until a wake-class voice control command is received again.
Step S804: match the control-class voice instruction against the instructions stored in the robot's local database.
A robot usually has its own storage device, such as built-in memory, cache, or an external storage device (a USB drive or removable disk). At this point all voice instructions (including wake-class and control-class instructions) are stored by enumeration in the storage database: for example, wake-class voice instructions such as "open voice", "power on", "come", "come here", "to here", etc., and control-class voice instructions such as "sweep", "mop", "clean the living room", or "clean". After the robot receives a voice instruction, it first chooses to search its own storage database for a match; the match may be fuzzy or exact, and an indication of whether the instruction matched successfully is returned.
Step S806: if the match succeeds, execute the content of the control-class voice instruction.
In some possible implementations, after the step of executing the content of the control-class voice instruction if the match succeeds, the method further comprises:
Step S808: if the match fails, detect whether the robot is connected to the cloud.
A failed match means that no corresponding voice instruction is stored in the local database; at this point the voice instruction can be analyzed through other channels, one way being to match it against the voice-instruction library stored in the cloud. Therefore, it is first necessary to detect whether the robot can connect to the Internet. If it cannot, it is confirmed that the voice instruction will not be executed; if it can, a second match needs to be performed.
Step S810: if not connected to the cloud, restore the state that existed before the control-class voice instruction was received.
In some possible implementations, the state before the control-class voice instruction includes a stationary state or a cleaning state. The stationary state refers to a non-working or standby state; the cleaning state refers to a state in which a cleaning action is being executed.
Step S812: if connected to the cloud, match the control-class voice instruction against the cloud database.
All voice instructions (including wake-class and control-class instructions) need to be stored by enumeration in the cloud: for example, wake-class voice instructions such as "open voice", "power on", "come", "come here", "to here", etc., and control-class voice instructions such as "sweep", "mop", "clean the living room", or "clean". After the robot connects to the cloud, it searches the cloud database for a match; the match may be fuzzy or exact, and an indication of whether the instruction matched successfully is returned.
Step S814: if the match succeeds, execute the content of the control-class voice instruction.
Step S816: if the match fails, restore the state that existed before the control-class voice instruction was received.
In the embodiments of the disclosure, when a voice instruction is received, the robot's speech-recognition system can first be put into the activated state, and the robot can be controlled to turn toward the sound-source direction of the voice and enter an armed state. A control command to execute an operation is then received within a certain period, semantically matched against the instructions prestored in the robot's local database and in the cloud, and the expected action is executed according to the matching result. The disclosure can operate accurately as instructed and improves the recognition rate of the voice instructions input by the user; the robot can work relatively accurately according to the user's voice instructions, which also increases the interest of human-machine interaction.
In a further embodiment, as shown in Fig. 9, applied in combination with the robot in the application scenario of Fig. 1, an embodiment of the disclosure provides a robot voice control device, including a first receiving unit 902, a second matching unit 903, a first matching unit 904, a second execution unit 905, a first execution unit 906, a detection unit 908, a second recovery unit 909, and a first recovery unit 910, each described below. The device shown in Fig. 9 can execute the method of the embodiment shown in Fig. 7; for parts not described in detail in this embodiment, refer to the related description of the embodiment shown in Fig. 7. For the execution process and technical effect of this technical solution, see the description of the embodiment shown in Fig. 7, which is not repeated here.
The first receiving unit 902 is configured to receive a control-class voice instruction;
the first matching unit 904 is configured to match the control-class voice instruction against the instructions stored in the robot's local database;
the first execution unit 906 is configured to execute the content of the control-class voice instruction if the match succeeds.
In some possible implementations, the device further includes:
a detection unit 908, configured to detect whether the robot is connected to the cloud if the match fails;
a first recovery unit 910, configured to restore the state that existed before the control-class voice instruction was received if the robot is not connected to the cloud.
In some possible implementations, the device further includes:
a second matching unit 903, configured to match the control-class voice instruction against the cloud database if the robot is connected to the cloud;
a second execution unit 905, configured to execute the content of the control-class voice instruction if the match succeeds.
In some possible implementations, the device further includes:
a second recovery unit 909, configured to restore the state that existed before the control-class voice instruction was received if the match fails.
In some possible implementations, the state before the control-class voice instruction includes a stationary state or a cleaning state.
In a further embodiment, as shown in Fig. 10, applied in combination with the robot in the application scenario of Fig. 1, an embodiment of the disclosure provides a robot voice control device, including a recognition unit 1000, a second receiving unit 1001, a first receiving unit 1002, a second matching unit 1003, a first matching unit 1004, a second execution unit 1005, a first execution unit 1006, a detection unit 1008, a second recovery unit 1009, and a first recovery unit 1010, each described below. The device shown in Fig. 10 can execute the method of the embodiment shown in Fig. 8; for parts not described in detail in this embodiment, refer to the related description of the embodiment shown in Fig. 8. For the execution process and technical effect of this technical solution, see the description of the embodiment shown in Fig. 8, which is not repeated here.
The first receiving unit 1002 is configured to receive a control-class voice instruction;
the first matching unit 1004 is configured to match the control-class voice instruction against the instructions stored in the robot's local database;
the first execution unit 1006 is configured to execute the content of the control-class voice instruction if the match succeeds.
In some possible implementations, the device further includes:
a detection unit 1008, configured to detect whether the robot is connected to the cloud if the match fails;
a first recovery unit 1010, configured to restore the state that existed before the control-class voice instruction was received if the robot is not connected to the cloud.
In some possible implementations, the device further includes:
a second matching unit 1003, configured to match the control-class voice instruction against the cloud database if the robot is connected to the cloud;
a second execution unit 1005, configured to execute the content of the control-class voice instruction if the match succeeds.
In some possible implementations, the device further includes:
a second recovery unit 1009, configured to restore the state that existed before the control-class voice instruction was received if the match fails.
In some possible implementations, the device further includes:
a second receiving unit 1001, configured to receive a wake-class voice instruction;
a recognition unit 1000, configured to identify the sound-source direction of the wake-class voice instruction and make the robot turn toward that direction.
In some possible implementations, the recognition unit 1000 is further configured to:
identify the sound-source direction of the wake-class voice instruction;
make the robot turn toward the sound-source direction without stopping the operation of the driving motor;
stop the operation of the driving motor.
In some possible implementations, the state before the control-class voice instruction includes a stationary state or a cleaning state.
An embodiment of the disclosure provides a robot including any of the robot voice control devices described above.
An embodiment of the disclosure provides a robot including a processor and a memory, the memory storing computer program instructions executable by the processor; when the processor executes the computer program instructions, the method steps of any of the foregoing embodiments are implemented.
An embodiment of the disclosure provides a non-transient computer-readable storage medium storing computer program instructions which, when called and executed by a processor, implement the method steps of any of the foregoing embodiments.
As shown in Fig. 11, the robot 1100 may include a processing device (such as a central processing unit or graphics processor) 1101, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1102 or loaded from a storage device 1108 into a random-access memory (RAM) 1103. The RAM 1103 also stores various programs and data required for the operation of the electronic robot 1100. The processing device 1101, the ROM 1102, and the RAM 1103 are connected to one another through a bus 1104; an input/output (I/O) interface 1105 is also connected to the bus 1104.
In general, the following devices may be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 1107 including, for example, a liquid-crystal display (LCD), loudspeaker, and vibrator; storage devices 1108 including, for example, magnetic tape and hard disk; and a communication device 1109. The communication device 1109 may allow the electronic robot 1100 to communicate with other robots wirelessly or by wire to exchange data. Although Fig. 11 shows an electronic robot 1100 with various devices, it should be understood that it is not required to implement or possess all of the devices shown; more or fewer devices may alternatively be implemented or possessed.
In particular, according to embodiments of the disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1109, or installed from the storage device 1108, or installed from the ROM 1102. When the computer program is executed by the processing device 1101, the above-described functions defined in the methods of the embodiments of the disclosure are executed.
It should be noted that the above computer-readable medium of the disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact-disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program usable by, or in connection with, an instruction-execution system, apparatus, or device. In the disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by, or in connection with, an instruction-execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted with any suitable medium, including but not limited to electric wires, optical cables, RF (radio frequency), etc., or any appropriate combination of the above.
The above computer-readable medium may be included in the above robot, or may exist separately without being assembled into the robot.
The computer program code for executing the operations of the disclosure can be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local-area network (LAN) or a wide-area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the architecture, functions, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing a specified logic function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that shown in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should likewise be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a unit does not limit the unit itself; for example, the first acquiring unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
The apparatus embodiments described above are merely exemplary. The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative labor.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present disclosure and do not limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure.
Claims (17)
1. A robot voice control method, characterized in that the method comprises:
receiving a control-class voice instruction;
matching the control-class voice instruction with instructions stored in a local database of the robot; and
if the match succeeds, executing the content of the control-class voice instruction.
2. The method according to claim 1, characterized in that, after the executing the content of the control-class voice instruction if the match succeeds, the method comprises:
if the match fails, detecting whether the robot is connected to a cloud; and
if the robot is not connected to the cloud, restoring the state existing before the control-class voice instruction was received.
3. The method according to claim 2, characterized in that, after the restoring the state existing before the control-class voice instruction was received if the robot is not connected to the cloud, the method comprises:
if the robot is connected to the cloud, matching the control-class voice instruction with a cloud database; and
if the match succeeds, executing the content of the control-class voice instruction.
4. The method according to claim 3, characterized in that, after the executing the content of the control-class voice instruction if the match succeeds, the method comprises:
if the match fails, restoring the state existing before the control-class voice instruction was received.
5. The method according to any one of claims 1 to 4, characterized in that, before the receiving the control-class voice instruction, the method comprises:
receiving a wake-up-class voice instruction; and
identifying the sound source direction of the wake-up-class voice instruction and causing the robot to turn toward the sound source direction.
6. The method according to claim 5, characterized in that the identifying the sound source direction of the wake-up-class voice instruction and causing the robot to turn toward the sound source direction comprises:
identifying the sound source direction of the wake-up-class voice instruction;
causing the robot to turn toward the sound source direction while a drive motor continues to operate; and
stopping the operation of the drive motor.
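The sound-source localization step recited in claim 6 is commonly implemented with a microphone-array time-difference-of-arrival (TDOA) estimate, although the claim itself does not specify a technique. A minimal two-microphone sketch, in which the microphone spacing and the speed of sound are illustrative assumptions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed value)

def tdoa_bearing(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate the bearing of a sound source from a two-microphone array.

    delay_s: arrival-time difference between the two microphones (seconds).
    mic_spacing_m: distance between the microphones (metres).
    Returns the angle in degrees: 0 means broadside (directly ahead),
    +/-90 means along the microphone baseline.
    """
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))
```

In the flow of claim 6, the robot would rotate by this bearing while its drive motor keeps running, then stop the motor once the turn completes.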
7. The method according to claim 4, characterized in that the state existing before the control-class voice instruction includes: a stationary state or a cleaning state.
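The local-then-cloud matching flow of claims 1 to 4, together with the state restoration of claim 7, can be sketched as follows. This is an illustrative reading of the claims, not the patented implementation; the `Robot` class and the dictionary-backed databases are hypothetical stand-ins.

```python
class Robot:
    """Minimal stand-in for the claimed robot: only its current state is tracked."""
    def __init__(self, state: str = "stationary"):
        self.state = state  # claim 7: "stationary" or "cleaning"

def handle_control_instruction(robot, instruction, local_db, cloud_db=None):
    """Execute a control-class voice instruction per claims 1-4.

    local_db / cloud_db map instruction text to a callable; cloud_db is None
    when the robot has no cloud connection.
    """
    previous_state = robot.state
    if instruction in local_db:            # claim 1: match the local database
        return local_db[instruction]()     # execute the instruction content
    if cloud_db is None:                   # claim 2: no cloud connection
        robot.state = previous_state       # restore the pre-instruction state
        return None
    if instruction in cloud_db:            # claim 3: match the cloud database
        return cloud_db[instruction]()
    robot.state = previous_state           # claim 4: cloud match also failed
    return None
```

Matching locally first keeps basic commands responsive even without a network connection, which is the design rationale the claims encode.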
8. A robot voice control apparatus, characterized by comprising:
a first receiving unit, configured to receive a control-class voice instruction;
a first matching unit, configured to match the control-class voice instruction with instructions stored in a local database of the robot; and
a first execution unit, configured to execute the content of the control-class voice instruction if the match succeeds.
9. The apparatus according to claim 8, characterized by further comprising:
a detection unit, configured to detect whether the robot is connected to a cloud if the match fails; and
a first recovery unit, configured to restore the state existing before the control-class voice instruction was received if the robot is not connected to the cloud.
10. The apparatus according to claim 9, characterized by further comprising:
a second matching unit, configured to match the control-class voice instruction with a cloud database if the robot is connected to the cloud; and
a second execution unit, configured to execute the content of the control-class voice instruction if the match succeeds.
11. The apparatus according to claim 10, characterized by further comprising:
a second recovery unit, configured to restore the state existing before the control-class voice instruction was received if the match fails.
12. The apparatus according to any one of claims 8 to 11, characterized by further comprising:
a second receiving unit, configured to receive a wake-up-class voice instruction; and
a recognition unit, configured to identify the sound source direction of the wake-up-class voice instruction and cause the robot to turn toward the sound source direction.
13. The apparatus according to claim 12, characterized in that the recognition unit is further configured to:
identify the sound source direction of the wake-up-class voice instruction;
cause the robot to turn toward the sound source direction while a drive motor continues to operate; and
stop the operation of the drive motor.
14. The apparatus according to claim 11, characterized in that the state existing before the control-class voice instruction includes: a stationary state or a cleaning state.
15. A robot voice control apparatus, characterized by comprising a processor and a memory, the memory storing computer program instructions executable by the processor, wherein when the processor executes the computer program instructions, the method steps of any one of claims 1 to 7 are implemented.
16. A robot, characterized by comprising the apparatus according to any one of claims 8 to 15.
17. A non-transient computer readable storage medium, characterized in that it stores computer program instructions which, when invoked and executed by a processor, implement the method steps of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910265960.3A CN110136704B (en) | 2019-04-03 | 2019-04-03 | Robot voice control method and device, robot and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110136704A true CN110136704A (en) | 2019-08-16 |
CN110136704B CN110136704B (en) | 2021-12-28 |
Family
ID=67569072
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910265960.3A Active CN110136704B (en) | 2019-04-03 | 2019-04-03 | Robot voice control method and device, robot and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110136704B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110844402A (en) * | 2019-11-01 | 2020-02-28 | 贵州大学 | Garbage bin system is summoned to intelligence |
CN111261158A (en) * | 2020-01-15 | 2020-06-09 | 上海思依暄机器人科技股份有限公司 | Function menu customization method, voice shortcut control method and robot |
CN111370111A (en) * | 2020-03-03 | 2020-07-03 | 赛诺威盛科技(北京)有限公司 | Large-scale image equipment control system and method based on voice and storage medium |
WO2021043080A1 (en) * | 2019-09-05 | 2021-03-11 | 北京石头世纪科技股份有限公司 | Cleaning robot and control method therefor |
CN112596928A (en) * | 2020-12-25 | 2021-04-02 | 深圳市越疆科技有限公司 | Industrial robot data management method, device, equipment and computer storage medium |
CN113658601A (en) * | 2021-08-18 | 2021-11-16 | 开放智能机器(上海)有限公司 | Voice interaction method, device, terminal equipment, storage medium and program product |
WO2023179226A1 (en) * | 2022-03-22 | 2023-09-28 | 青岛海尔空调器有限总公司 | Method and apparatus for voice control of air conditioner, and air conditioner and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005006935A1 (en) * | 2003-07-16 | 2005-01-27 | Alfred Kärcher Gmbh & Co. Kg | Floor cleaning system |
US20090082879A1 (en) * | 2007-09-20 | 2009-03-26 | Evolution Robotics | Transferable intelligent control device |
CN105976814A (en) * | 2015-12-10 | 2016-09-28 | 乐视致新电子科技(天津)有限公司 | Headset control method and device |
CN106098062A (en) * | 2016-06-16 | 2016-11-09 | 杭州古北电子科技有限公司 | Intelligent sound control system for identifying that processing locality is combined with wireless network and method |
CN106328132A (en) * | 2016-08-15 | 2017-01-11 | 歌尔股份有限公司 | Voice interaction control method and device for intelligent equipment |
US9734827B2 (en) * | 2013-07-11 | 2017-08-15 | Samsung Electronics Co., Ltd. | Electric equipment and control method thereof |
CN107274902A (en) * | 2017-08-15 | 2017-10-20 | 深圳诺欧博智能科技有限公司 | Phonetic controller and method for household electrical appliances |
CN107437419A (en) * | 2016-05-27 | 2017-12-05 | 广州零号软件科技有限公司 | A kind of method, instruction set and the system of the movement of Voice command service robot |
CN206964595U (en) * | 2017-02-23 | 2018-02-06 | 白思琦 | It is a kind of can Voice command sweeping robot |
CN109065040A (en) * | 2018-08-03 | 2018-12-21 | 北京奔流网络信息技术有限公司 | A kind of voice information processing method and intelligent electric appliance |
CN109358751A (en) * | 2018-10-23 | 2019-02-19 | 北京猎户星空科技有限公司 | A kind of wake-up control method of robot, device and equipment |
Non-Patent Citations (2)
Title |
---|
B.M. Kim et al.: "Design guideline of anthropomorphic sound feedback for service robot malfunction - with emphasis on the vacuum cleaning robot", RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication * |
Chen Yang et al.: "A public-place cleaning robot based on STM32", Science and Technology & Innovation * |
Also Published As
Publication number | Publication date |
---|---|
CN110136704B (en) | 2021-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136704A (en) | Robot voice control method and device, robot and medium | |
US20220167820A1 (en) | Method and Apparatus for Constructing Map of Working Region for Robot, Robot, and Medium | |
CN110051289A (en) | Robot voice control method and device, robot and medium | |
CN110495821B (en) | Cleaning robot and control method thereof | |
CN110623606B (en) | Cleaning robot and control method thereof | |
CN109920424A (en) | Robot voice control method and device, robot and medium | |
JP6823794B2 (en) | Automatic cleaning equipment and cleaning method | |
TWI821992B (en) | Cleaning robot and control method thereof | |
US20220125270A1 (en) | Method for controlling automatic cleaning device, automatic cleaning device, and non-transitory storage medium | |
CN109932726A (en) | Robot ranging calibration method and device, robot and medium | |
CN109920425A (en) | Robot voice control method and device, robot and medium | |
US20240029298A1 (en) | Locating method and apparatus for robot, and storage medium | |
CN210931181U (en) | Cleaning robot | |
CN217792839U (en) | Automatic cleaning equipment | |
CN114296447B (en) | Self-walking equipment control method and device, self-walking equipment and storage medium | |
CN210931183U (en) | Cleaning robot | |
CN116149307A (en) | Self-walking equipment and obstacle avoidance method thereof | |
CN116977858A (en) | Ground identification method, device, robot and storage medium | |
CN117008148A (en) | Method, apparatus and storage medium for detecting slip state | |
CN116942017A (en) | Automatic cleaning device, control method, and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
2022-01-04 | TR01 | Transfer of patent right | Patentee after: Beijing Stone Innovation Technology Co.,Ltd., No. 8008, floor 8, building 16, yard 37, Chaoqian Road, Zhongguancun Science and Technology Park, Changping District, Beijing 102299. Patentee before: Beijing Roborock Technology Co.,Ltd., No. 6016, 6017 and 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing 100085.