CN109901589A - Mobile robot control method and apparatus - Google Patents

Mobile robot control method and apparatus

Info

Publication number
CN109901589A
CN109901589A
Authority
CN
China
Prior art keywords
action
model
sample
information
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910247676.3A
Other languages
Chinese (zh)
Other versions
CN109901589B (en)
Inventor
尚云
刘洋
华仁红
王毓玮
冯卓玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yida Turing Technology Co Ltd
Original Assignee
Beijing Yida Turing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yida Turing Technology Co Ltd filed Critical Beijing Yida Turing Technology Co Ltd
Priority to CN201910247676.3A priority Critical patent/CN109901589B/en
Publication of CN109901589A publication Critical patent/CN109901589A/en
Application granted granted Critical
Publication of CN109901589B publication Critical patent/CN109901589B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

Embodiments of the present invention provide a mobile robot control method and apparatus. The method includes: obtaining current positioning information and a current image from a positioning sensor and a vision sensor installed on the mobile robot, respectively; inputting the current positioning information and the current image into an action planning model to obtain current action information output by the action planning model, where the action planning model is trained on sample positioning information, sample images, sample action information, and sample labels; and controlling the mobile robot based on the current action information. The method and apparatus provided by the embodiments obtain current positioning information directly from the positioning sensor, requiring neither a high-precision perception system nor a large amount of prior knowledge; they are simple to implement, lower the application threshold for developers, and localize efficiently. In addition, current action information is obtained from a pre-trained action planning model, which transfers well, adapts to most scenes, has a wide range of applications, and is highly stable.

Description

Mobile robot control method and apparatus
Technical field
Embodiments of the present invention relate to the technical field of computer vision, and in particular to a mobile robot control method and apparatus.
Background art
With the development of science and technology, mobile robots are used ever more widely in fields such as warehousing, logistics, and electric power inspection.
In existing mobile robot control methods, a monocular camera installed on the mobile robot typically collects relevant image information in the working area, the image information is processed offline, and a sparse 3D point cloud map of the environment is constructed. During autonomous movement, the mobile robot uses the prior sparse 3D point cloud map together with the texture information stored in the map, and applies vision-based SLAM (Simultaneous Localization and Mapping) to process images obtained in real time, thereby accurately estimating the current pose. Path planning is then performed based on the current pose, and action planning for the mobile robot is carried out in combination with obstacle detection results.
However, implementing the above method is quite complex: a high-precision perception system is needed for map construction and localization, the demand for prior knowledge is very large, and the application threshold for developers is high. In addition, the above method transfers poorly: once the application scenario changes, the map must be redrawn, which is time-consuming and laborious.
Summary of the invention
Embodiments of the present invention provide a mobile robot control method and apparatus to solve the problems that existing mobile robot control methods place high precision requirements on the perception system, require a large amount of prior knowledge, and transfer poorly.
In a first aspect, an embodiment of the present invention provides a mobile robot control method, comprising:
obtaining current positioning information and a current image from a positioning sensor and a vision sensor installed on the mobile robot, respectively;
inputting the current positioning information and the current image into an action planning model to obtain current action information output by the action planning model, wherein the action planning model is trained on sample positioning information, sample images, sample action information, and sample labels; and
controlling the mobile robot based on the current action information.
In a second aspect, an embodiment of the present invention provides a mobile robot control apparatus, comprising:
an acquiring unit, configured to obtain current positioning information and a current image from a positioning sensor and a vision sensor installed on the mobile robot, respectively;
a planning unit, configured to input the current positioning information and the current image into an action planning model and obtain current action information output by the action planning model, wherein the action planning model is trained on sample positioning information, sample images, sample action information, and sample labels; and
a control unit, configured to control the mobile robot based on the current action information.
In a third aspect, an embodiment of the present invention provides an electronic device, comprising a processor, a communication interface, a memory, and a bus, wherein the processor, the communication interface, and the memory communicate with one another through the bus, and the processor can invoke logical instructions in the memory to execute the steps of the method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method provided in the first aspect are implemented.
The mobile robot control method and apparatus provided by embodiments of the present invention obtain current positioning information directly from a positioning sensor, requiring neither a high-precision perception system nor a large amount of prior knowledge; they are simple to implement, lower the application threshold for developers, and localize efficiently. In addition, current action information is obtained from a pre-trained action planning model, which transfers well, adapts to most scenes, has a wide range of applications, and is highly stable.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a mobile robot control method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a mobile robot control method provided by another embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a mobile robot control apparatus provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Existing mobile robot control methods need a high-precision perception system for map construction and localization, demand a large amount of prior knowledge, and transfer poorly: once the application scenario changes, the map must be redrawn, which is time-consuming and laborious. To address these problems, an embodiment of the present invention provides a mobile robot control method. Fig. 1 is a schematic flowchart of the mobile robot control method provided by an embodiment of the present invention. As shown in Fig. 1, the method comprises:
Step 110: obtain current positioning information and a current image from a positioning sensor and a vision sensor installed on the mobile robot, respectively.
Specifically, a positioning sensor is installed on the mobile robot and is used to obtain the positioning information of the mobile robot at the current moment, i.e., the current positioning information. The positioning sensor may be a GPS (Global Positioning System) module, or a sensor that can obtain the positioning information of the mobile robot based on RFID (Radio Frequency Identification) positioning, Bluetooth positioning, ultrasonic positioning, or the like; the present invention does not specifically limit this.
In addition, a vision sensor is also installed on the mobile robot and is used to obtain an image of the environment ahead of the mobile robot's motion path at the current moment, i.e., the current image. The vision sensor may be a monocular vision sensor, or a binocular or multi-view vision sensor that integrates two or more vision sensors. The vision sensor may specifically be a visible-light camera, an infrared camera, or a combination of a visible-light camera and an infrared camera; the present invention does not specifically limit this.
Step 120: input the current positioning information and the current image into an action planning model to obtain current action information output by the action planning model, wherein the action planning model is trained on sample positioning information, sample images, sample action information, and sample labels.
Specifically, the action planning model is used to plan, based on the current positioning information and the current image, the specific action the mobile robot should execute at the current moment. After the current positioning information and the current image are obtained, they are input into the action planning model, and the current action information output by the action planning model can be obtained. Here, the current action information indicates the action the mobile robot should execute at the current moment; the action information may be the linear velocity and/or angular velocity of the mobile robot.
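As a concrete illustration of this step, the sketch below feeds positioning and image features into a planning model and returns linear and angular velocity. This is a minimal Python sketch under stated assumptions — the `ActionInfo` structure, the feature concatenation, and the stand-in `dummy_model` are illustrative and do not appear in the patent.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple


@dataclass
class ActionInfo:
    linear_velocity: float   # m/s
    angular_velocity: float  # rad/s


def plan_action(
    position: Tuple[float, float],
    image_features: Sequence[float],
    model: Callable[[Sequence[float]], Tuple[float, float]],
) -> ActionInfo:
    """Concatenate positioning and image features and query the planning model."""
    features = list(position) + list(image_features)
    v, w = model(features)
    return ActionInfo(linear_velocity=v, angular_velocity=w)


# Stand-in for a trained action-planning model (assumption): fixed mapping.
def dummy_model(features):
    return (0.5, 0.0)  # drive straight ahead at 0.5 m/s


action = plan_action((10.0, 20.0), [0.1, 0.2, 0.3], dummy_model)
print(action.linear_velocity, action.angular_velocity)
```

In a real system the `model` argument would be the trained network's forward pass; the interface above only shows how positioning and image inputs map to velocity outputs.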
In addition, before step 120 is executed, the action planning model may be trained in advance, specifically as follows. First, a large number of sample positioning information items, sample images, sample action information items, and sample labels are collected. The sample positioning information is the positioning information obtained by the positioning sensor installed on the mobile robot while the mobile robot is manually controlled; the sample images are images of the environment ahead of the motion path obtained by the vision sensor installed on the mobile robot while the mobile robot is manually controlled; the sample action information is the action the operator commands the mobile robot to execute for the environment ahead of the motion path characterized by the sample image, which may be an adjustment of the mobile robot's linear velocity and/or angular velocity; and the sample label indicates the result of controlling the mobile robot's movement based on the sample action information. The sample label may characterize whether the mobile robot walks along the planned path, or whether the mobile robot successfully avoids obstacles; the present invention does not specifically limit this.
The initial model is then trained on the sample positioning information, sample images, sample action information, and sample labels to obtain the action planning model. The initial model may be a single neural network model or a combination of multiple neural network models; embodiments of the present invention do not specifically limit the type or structure of the initial model.
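The training procedure described above (samples in, planning model out) can be sketched with a toy supervised learner. This is an illustrative sketch only: the patent does not fix a model family, so a single linear unit trained by stochastic gradient descent stands in for the neural network.

```python
import random


def train_action_model(samples, epochs=200, lr=0.05):
    """samples: list of (feature_vector, target_velocity); fits w.x + b by SGD."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            for i in range(dim):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b


# Toy samples (assumption): the target velocity equals the first feature.
random.seed(0)
features = [[random.random(), random.random()] for _ in range(50)]
data = [(x, x[0]) for x in features]
w, b = train_action_model(data)
pred = w[0] * 0.5 + w[1] * 0.5 + b
print(round(pred, 2))
```

A real implementation would replace the linear unit with the chosen neural network and the toy targets with recorded operator commands; the training loop's shape is the same.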
Step 130: control the mobile robot based on the current action information.
Specifically, after the current action information is obtained, the mobile robot is controlled to execute the corresponding action based on the current action information, ensuring that the mobile robot can successfully avoid obstacles while walking along the planned path.
The method provided by this embodiment of the present invention obtains current positioning information directly from the positioning sensor, requiring neither a high-precision perception system nor a large amount of prior knowledge; it is simple to implement, lowers the application threshold for developers, and localizes efficiently. In addition, current action information is obtained from a pre-trained action planning model, which transfers well, adapts to most scenes, has a wide range of applications, and is highly stable.
Based on the above embodiment, step 120 specifically includes:
Step 121: input the current positioning information and the current image into a path planning model within the action planning model, and obtain first action information output by the path planning model, wherein the path planning model is trained on sample positioning information, sample images, sample action information, and sample path labels.
Specifically, the action planning model includes a path planning model, which analyzes, based on the current positioning information and the current image, how to enable the mobile robot to walk along the planned path, and outputs the first action information. Here, the first action information indicates the action that should be executed at the current moment to ensure the mobile robot walks along the planned path.
In addition, before step 121 is executed, the path planning model may be trained in advance, specifically as follows. First, a large number of sample positioning information items, sample images, sample action information items, and sample path labels are collected. The sample positioning information, sample images, and sample action information are obtained while the mobile robot is manually controlled to walk along the planned path, and the sample path label indicates whether the mobile robot walks along the planned path; for example, when the mobile robot travels normally on the planned path, the sample path label is positive, and when the mobile robot deviates from the planned path, the sample path label is negative. The value of the sample path label may also be determined according to a preset walking rule: for example, the shorter the time taken to complete the planned path, the larger the value of the sample path label, and the longer the time, the smaller the value. The initial model is then trained on the sample positioning information, sample images, sample action information, and sample path labels to obtain the path planning model. The initial model may be a single neural network model or a combination of multiple neural network models; embodiments of the present invention do not specifically limit the type or structure of the initial model.
Step 122: input the current image into an obstacle avoidance model within the action planning model, and obtain second action information output by the obstacle avoidance model, wherein the obstacle avoidance model is trained on sample images, sample action information, and sample avoidance labels.
Specifically, the action planning model further includes an obstacle avoidance model, which analyzes, based on the current image, whether there are obstacles in the environment ahead of the mobile robot's motion path at the current moment, and, if there are, how to avoid them automatically. Here, the second action information indicates the action that should be executed at the current moment to ensure the mobile robot can successfully avoid obstacles.
In addition, before step 122 is executed, the obstacle avoidance model may be trained in advance, specifically as follows. First, a large number of sample images, sample action information items, and sample avoidance labels are collected. The sample images and sample action information are obtained while the mobile robot is manually controlled to avoid obstacles, and the sample avoidance label indicates whether the mobile robot avoids obstacles successfully; for example, when the mobile robot does not hit an obstacle, the sample avoidance label is positive, and when it hits an obstacle, the label is negative. The initial model is then trained on the sample images, sample action information, and sample avoidance labels to obtain the obstacle avoidance model. The initial model may be a single neural network model or a combination of multiple neural network models; embodiments of the present invention do not specifically limit the type or structure of the initial model.
It should be noted that embodiments of the present invention do not specifically limit the order of step 121 and step 122: step 121 may be executed before step 122, after step 122, or concurrently with step 122.
Step 123: obtain the current action information based on the first action information and the second action information.
Here, the first action information considers only the planned path, and the second action information considers only obstacle avoidance. Combining the first action information and the second action information yields action information that maintains walking along the planned path while successfully avoiding obstacles, i.e., the current action information.
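One simple way to realize step 123 can be sketched as follows. The specific fusion rule — letting the avoidance command override the path-following command whenever an obstacle is flagged — is an assumption made for illustration, since the patent does not specify how the two action information items are combined.

```python
def combine_actions(path_action, avoid_action, obstacle_detected):
    """Fusion rule (assumption, not from the patent): prefer the avoidance
    command when an obstacle is present, otherwise follow the planned path."""
    return avoid_action if obstacle_detected else path_action


# Action tuples: (linear velocity m/s, angular velocity rad/s)
first = (0.5, 0.0)    # first action information: follow path, go straight
second = (0.2, 0.8)   # second action information: slow down and turn away

print(combine_actions(first, second, obstacle_detected=True))   # (0.2, 0.8)
print(combine_actions(first, second, obstacle_detected=False))  # (0.5, 0.0)
```

Weighted blending of the two commands would be an equally valid realization; the override rule above is just the simplest case.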
In the method provided by this embodiment of the present invention, a path planning model is set up to obtain the first action information for controlling the mobile robot to walk along the planned path; an obstacle avoidance model is set up to obtain the second action information for controlling the mobile robot to avoid obstacles; and the first and second action information are combined so that the mobile robot can successfully avoid obstacles while walking along the planned path. Refining the action planning model into a path planning model and an obstacle avoidance model simplifies model training, makes the action information output by each single model more accurate, and thereby improves the prediction accuracy of the action planning model.
Based on any of the above embodiments, step 121 specifically comprises: Step 1211: based on the current positioning information and the current image, choose a current path model from the road straight-ahead model, road left-turn model, and road right-turn model included in the path planning model. Step 1212: input the current positioning information and the current image into the current path model, and obtain the first action information output by the current path model.
Specifically, the path planning model includes a road straight-ahead model, a road left-turn model, and a road right-turn model, which are respectively used, when the planned path requires going straight, turning left, or turning right, to analyze, based on the current positioning information and the current image, how to enable the mobile robot to walk along the planned path.
Accordingly, before step 1211 is executed, the road straight-ahead model, road left-turn model, and road right-turn model may be trained in advance, specifically as follows. First, a large number of sample positioning information items, sample images, sample action information items, and sample path labels are collected and classified according to the planned path at the moment of collection, yielding the sample positioning information, sample images, sample action information, and sample path labels corresponding respectively to going straight, turning left, and turning right. The initial model is then trained on the samples corresponding to each target bearing to obtain the road straight-ahead model, road left-turn model, and road right-turn model, respectively. The initial model may be a single neural network model or a combination of multiple neural network models; embodiments of the present invention do not specifically limit the type or structure of the initial model.
The current path model is chosen from the path planning model based on the current positioning information and the current image; it is the road straight-ahead model, the road left-turn model, or the road right-turn model. After the current path model is determined, the current positioning information and the current image are input into it, and the first action information output by the current path model is obtained.
In the method provided by this embodiment of the present invention, refining the path planning model into a road straight-ahead model, a road left-turn model, and a road right-turn model further narrows the work each model undertakes and improves the prediction accuracy of each single model.
Based on any of the above embodiments, step 1211 specifically comprises: obtaining a task path based on the current positioning information and destination information; obtaining a target bearing based on the task path and the current image; and choosing the current path model from the road straight-ahead model, road left-turn model, and road right-turn model based on the target bearing.
Specifically, the destination information is the positioning information of the preset place the mobile robot is expected to reach. Based on the current positioning information and the destination information, the task path — the path the mobile robot needs to walk — can be obtained. After the task path is obtained, the environment ahead of the mobile robot's motion path at the current moment can be known from the current image, and the direction of the mobile robot at the current moment can thereby be determined. From the direction of the mobile robot and the task path, the bearing the mobile robot needs to walk at the current moment, i.e., the target bearing, can be learned; the target bearing may be straight ahead, left turn, or right turn. After the target bearing is determined, the model of the corresponding bearing can be directly chosen as the current path model: for example, if the target bearing is straight ahead, the road straight-ahead model is chosen as the current path model; if the target bearing is left turn, the road left-turn model is chosen as the current path model.
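The bearing classification described above can be sketched as follows. The angle threshold, the angle normalization, and the model names are illustrative assumptions, not values from the patent.

```python
import math


def target_bearing(heading_rad, waypoint_bearing_rad, threshold_rad=math.pi / 8):
    """Classify the required turn as 'straight', 'left', or 'right' from the
    signed angle between the robot heading and the next task-path segment.
    The pi/8 dead zone for 'straight' is an illustrative assumption."""
    diff = (waypoint_bearing_rad - heading_rad + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) < threshold_rad:
        return "straight"
    return "left" if diff > 0 else "right"


# Hypothetical model registry keyed by target bearing.
MODELS = {
    "straight": "road_straight_model",
    "left": "road_left_turn_model",
    "right": "road_right_turn_model",
}

bearing = target_bearing(0.0, math.pi / 2)  # next waypoint 90 degrees to the left
print(MODELS[bearing])  # road_left_turn_model
```

The modulo expression normalizes the angle difference into (-pi, pi], so headings that wrap around +/-pi are classified correctly.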
Based on any of the above embodiments, after step 130, the method further comprises: training the action planning model on optimization positioning information, optimization images, optimization action information, and optimization labels.
Specifically, while the mobile robot walks autonomously, the current positioning information, the current image, the current action instruction obtained from them, and the result of the mobile robot walking based on the current action instruction can all be recorded. After the mobile robot finishes its run, the recorded current positioning information serves as the optimization positioning information, the current image as the optimization image, the current action instruction as the optimization action instruction, and the result of walking based on the current action instruction as the optimization label. Further, the optimization positioning information is the positioning information of the mobile robot obtained during autonomous walking by the positioning sensor installed on the mobile robot; the optimization image is the image characterizing the environment ahead of the mobile robot's walking path obtained during autonomous walking by the vision sensor installed on the mobile robot; the optimization action instruction is the action instruction output by the original action planning model based on the optimization positioning information and optimization image; and the optimization label indicates the result of walking along the planned path and avoiding obstacles based on the optimization action instruction.
Iteratively tuning the action planning model on the above optimization positioning information, optimization images, optimization action instructions, and optimization labels can further increase the accuracy of the action planning model and patch its leaks. In particular, in cases where the mobile robot deviates from the planned path or fails to avoid an obstacle, the optimization positioning information, optimization image, optimization action instruction, and optimization label corresponding to the deviation or failure serve respectively as leak positioning information, a leak image, a leak action instruction, and a leak label. Here, the leak positioning information is the positioning information of the mobile robot obtained, while the mobile robot was deviating from the planned path or failing to avoid an obstacle, by the positioning sensor installed on the mobile robot; the leak image is the image characterizing the environment ahead of the mobile robot's motion path obtained, during the same period, by the vision sensor installed on the mobile robot; the leak action instruction is the action instruction, output by the original model based on the leak positioning information and leak image, that caused the deviation from the planned path or the avoidance failure; and the leak label is deviation from the planned path or avoidance failure. Training and updating the action planning model on the leak positioning information, leak images, leak action instructions, and leak labels can effectively patch leaks and further improve the performance of the action planning model.
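The leak-collection step can be sketched as a filter over a recorded run. The log schema (`position`, `image`, `action`, `ok`) is an assumption made for illustration; the patent only says that records from failed steps are reused for retraining.

```python
def collect_leak_samples(run_log):
    """run_log: list of dicts with keys 'position', 'image', 'action', 'ok'.
    Steps where the robot left the planned path or failed to avoid an
    obstacle ('ok' is False) are kept as leak samples for retraining."""
    return [step for step in run_log if not step["ok"]]


log = [
    {"position": (0, 0), "image": "img0", "action": (0.5, 0.0), "ok": True},
    {"position": (1, 0), "image": "img1", "action": (0.5, 0.2), "ok": False},
]
leaks = collect_leak_samples(log)
print(len(leaks))  # 1
```

The resulting leak samples would then be appended to the training set and the action planning model retrained, as described above.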
Current mobile robot control methods mostly use visible-light vision sensors to collect images. However, a visible-light vision sensor can only be used in the daytime and in particular cannot work at night, which limits the working range and working time of the mobile robot in poorly lit environments. In this regard, based on any of the above embodiments, the vision sensor in this method includes an infrared camera, and correspondingly the current image includes a current thermal-induced image.
Specifically, an infrared camera is a camera that records light emitted by an infrared light source; infrared cameras are often applied in the field of night-vision monitoring. Collecting the forward image of the mobile robot's walking path with an infrared camera solves the problem that visible-light vision devices cannot work at night. Correspondingly, the image collected by the infrared camera is a thermal-induced image, and the current thermal-induced image refers to the forward image of the mobile robot's walking path collected by the infrared camera at the current moment during mobile robot control.
It should be noted that the vision sensor in embodiments of the present invention may include not only an infrared camera but also a visible-light vision sensor. Correspondingly, the current image may include a current thermal-induced image, and may also include a current RGB image or an image in another format. Here, the current thermal-induced image is the thermal-induced image at the current moment.
The method provided by this embodiment of the present invention solves the problem that existing mobile robots, limited by visible-light vision sensors, can only work in the daytime; by providing an infrared camera, the mobile robot is no longer limited by time or lighting, expanding its application range and working time.
Based on any of the above embodiments, in this method, the positioning sensor is a GNSS sensor. GNSS (Global Navigation Satellite System) is a space-based radio navigation and positioning system that can provide users with all-weather 3D coordinates, velocity, and time information at any point on the Earth's surface or in near-Earth space. With a GNSS sensor, simple road topology mapping and real-time low-precision positioning can be achieved. Compared with a typical high-precision perception system, a GNSS sensor is cheaper and simpler to use, and its slightly lower positioning accuracy does not affect the control precision of the mobile robot.
Based on any of the above embodiments, Fig. 2 is a schematic flowchart of a mobile robot control method provided by another embodiment of the present invention. As shown in Fig. 2, the method comprises the following steps:
Step 210: sample collection.
A large number of sample positioning information items, sample images, sample action information items, and sample path labels are collected as path planning training samples. The sample positioning information, sample images, and sample action information are obtained while the mobile robot is manually controlled to walk along the planned path, and the sample path label indicates whether the mobile robot walks along the planned path; for example, when the mobile robot travels normally on the planned path, the sample path label is positive, and when it deviates from the planned path, the label is negative. The value of the sample path label may also be determined according to a preset walking rule: for example, the shorter the time taken to complete the planned path, or the smaller the deviation between the sample positioning information and the planned path, the larger the value of the sample path label; the longer the time, or the larger the deviation, the smaller the value.
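The walking rule above — shorter completion time and smaller deviation yield a larger label — can be sketched as a scoring function. The particular formula and reference constants below are illustrative assumptions; the patent only states the monotonic relationship.

```python
def path_label(completion_time_s, deviation_m, t_ref=60.0, d_ref=1.0):
    """One possible scoring rule (illustrative assumption): the label grows as
    the run finishes faster and stays closer to the planned path, and is
    negative once the robot deviates beyond d_ref metres."""
    if deviation_m > d_ref:
        return -1.0  # deviated from the planned path: negative label
    time_score = max(0.0, 1.0 - completion_time_s / (2 * t_ref))
    deviation_score = 1.0 - deviation_m / d_ref
    return time_score * deviation_score


good = path_label(30.0, 0.1)  # fast run, small deviation
bad = path_label(30.0, 2.0)   # left the planned path
print(good > 0, bad)  # True -1.0
```

Any rule with the same monotonic behavior (faster and closer implies larger label) would serve equally well as training supervision.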
Meanwhile it collecting great amount of samples image, sample action information and sample avoidance mark and being used as avoidance training sample.Its In, sample image and sample action information are obtained during manually controlling Mobile Robot Obstacle Avoidance, sample avoidance Mark is used to indicate the whether successful avoidance of mobile robot, such as when mobile robot does not knock barrier, sample avoidance Mark is positive, and when mobile robot knocks barrier, sample avoidance mark is negative.
It should be noted that the training samples here are all-weather training samples: the sample images are captured by a visible-light camera and an infrared camera mounted on the mobile robot, so each sample image accordingly comprises a sample RGB image and a sample thermal infrared image.
Step 220 is then executed.
Step 220: model training.
To improve training speed and accuracy, mean subtraction is first performed on each sample image.
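Mean subtraction of this kind is a standard normalization step and can be sketched as follows. The per-channel statistic and the `(N, H, W, C)` array layout are assumptions; the text does not specify how the mean is computed.

```python
import numpy as np

def subtract_mean(images):
    """Per-channel mean subtraction over the whole training set.
    The (N, H, W, C) layout is an assumption."""
    images = np.asarray(images, dtype=np.float32)
    mean = images.mean(axis=(0, 1, 2), keepdims=True)  # one value per channel
    return images - mean, mean
```

The returned `mean` would be saved with the model so the same offset can be applied to images at inference time.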
Next, several initial models of different structures are trained on the mean-subtracted sample images together with the sample positioning information, sample action information, and sample path labels, and the trained model with the highest accuracy and best performance is selected as the path planning model. During training of the path planning model, the path-planning training samples can further be divided, according to the sample positioning information and sample images, into straight-ahead, left-turn, and right-turn training samples, which are then used to train a road straight-ahead model, a road left-turn model, and a road right-turn model from the initial models.
Meanwhile based on go the sample image after mean value and sample action information and sample avoidance mark to several differences The initial model of structure is trained, and selects the initial model after a best training of accuracy rate highest, effect As Obstacle avoidance model.
Step 230 is then executed.
Step 230: iterative tuning.
After the action planning model comprising the path planning model and the obstacle avoidance model is obtained, it is determined whether the action planning model needs iterative tuning.
If iterative tuning is needed, optimized positioning information, optimized images, optimized action information, and optimization labels are collected online while the action planning model is used for autonomous driving of the mobile robot. Here, the optimized positioning information is the robot's position obtained during autonomous driving by the positioning device mounted on the robot; the optimized image is the image characterizing the environment ahead of the robot's path, obtained during autonomous driving by the vision sensing device mounted on the robot; the optimized action command is the action command output by the previous action planning model based on the optimized positioning information and optimized image; and the optimization label indicates the result of following the planned path and avoiding obstacles under those action commands. Step 220 is then executed again: the action planning model is retrained on the optimized positioning information, optimized images, optimized action information, and optimization labels, realizing iterative optimization of the action planning model. Further, the path planning model can be retrained on the optimized positioning information, optimized images, optimized action information, and optimized path labels, and the obstacle avoidance model on the optimized images, optimized action information, and optimized avoidance labels.
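The tuning loop described here alternates between autonomous driving and retraining. The following sketch shows that control flow only; `run_autonomously`, `retrain`, the sample tuple shape, and the round budget are all assumed interfaces, not APIs from the text.

```python
def tune(model, robot, retrain, max_rounds=5):
    """Iterative tuning loop: drive the robot autonomously with the current
    model, collect optimization samples (position, image, action, label),
    and repeat training (step 220) on them until no new samples arrive or
    the round budget is spent. All interfaces are assumptions."""
    for _ in range(max_rounds):
        samples = robot.run_autonomously(model)
        if not samples:
            break  # nothing left to tune against
        model = retrain(model, samples)
    return model
```

Each pass feeds the newly logged optimization data back into the same training procedure, so the deployed model keeps improving on the scenes it actually encounters.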
If iterative tuning is not needed, step 240 is executed.
Step 240: system deployment.
A positioning device, a visible-light camera, and an infrared camera are mounted on the mobile robot to complete deployment of the mobile robot control system. Here, the positioning device is a GNSS sensor, the visible-light camera is a Hikvision USB camera, and the infrared camera is a FLIR One. Current positioning information and a current image are obtained from the positioning device and the vision sensing device respectively, then input to the action planning model to obtain the current action information output by the model, and the current action information is used to control autonomous driving and obstacle avoidance of the mobile robot.
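One cycle of the deployed sense-plan-act loop can be sketched as below. Every interface here (`read`, `capture`, `execute`, the planner callable) is a hypothetical placeholder standing in for the real sensor drivers and robot chassis API.

```python
def control_step(positioner, camera, planner, robot):
    """One cycle of the deployed control loop (all interfaces are
    assumptions): read the GNSS position and a camera frame, query the
    trained action planning model, and execute the returned action."""
    position = positioner.read()   # current positioning information
    image = camera.capture()       # current visible-light / infrared image
    action = planner(position, image)
    robot.execute(action)
    return action
```

In deployment this step would run continuously at the robot's control rate.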
The method provided by the embodiment of the present invention obtains the current positioning information directly from the positioning device, without a high-precision perception system and without extensive prior knowledge. It is simple to implement, lowers the application threshold for developers, and positions efficiently. In addition, the current action information is obtained from a pre-trained action planning model, which transfers well, adapts to most scenes, has a wide range of applications, and is highly stable.
Based on any of the above-described embodiment, this method is tested.Experiment scene is substation, and size is about 800m* 600m.In experimentation, progress data collection task first, using Haikang gunlock camera and infrared camera to whole station daytime The data at road data and whole station night are acquired and train, and obtain road straight trip model, road left-hand rotation model, road and turn right Model and Obstacle avoidance model.The experimental results showed that method based on the embodiment of the present invention controls mobile robot, day Between and night can realize normal walking and avoidance, and there is good robustness to the variation of illumination, environment.
Based on any of the above-described embodiment, this method is tested.Experiment scene is substation, and size is about 400m* 200m.In experimentation, progress data collection task first, using IP Camera and infrared camera to whole station road on daytime Data and the data at whole station night are acquired and train, and obtain road straight trip model, road left-hand rotation model, road right-hand rotation model And Obstacle avoidance model.The experimental results showed that method based on the embodiment of the present invention controls mobile robot, in the daytime and Night can realize normal walking and avoidance, and have good robustness to the variation of illumination, environment.
Based on any of the above-described embodiment, this method is tested.Experiment scene is substation, and size is about 200m* 100m.In experimentation, progress data collection task first, using Haikang USB camera and infrared camera to whole station daytime The data at road data and whole station night are acquired and train, and obtain road straight trip model, road left-hand rotation model, road and turn right Model and Obstacle avoidance model.The experimental results showed that method based on the embodiment of the present invention controls mobile robot, day Between and night can realize normal walking and avoidance, and there is good robustness to the variation of illumination, environment.
Based on any of the above-described embodiments, Fig. 3 is a structural diagram of the mobile robot control device provided by an embodiment of the present invention. As shown in Fig. 3, the device comprises an acquisition unit 310, a planning unit 320, and a control unit 330;
wherein the acquisition unit 310 is configured to obtain current positioning information and a current image respectively from the positioning device and vision sensing device mounted on the mobile robot;
the planning unit 320 is configured to input the current positioning information and the current image to an action planning model and obtain the current action information output by the action planning model, wherein the action planning model is trained based on sample positioning information, sample images, sample action information, and sample labels;
and the control unit 330 is configured to control the mobile robot based on the current action information.
The device provided by the embodiment of the present invention obtains the current positioning information directly from the positioning device, without a high-precision perception system and without extensive prior knowledge. It is simple to implement, lowers the application threshold for developers, and positions efficiently. In addition, the current action information is obtained from a pre-trained action planning model, which transfers well, adapts to most scenes, has a wide range of applications, and is highly stable.
Based on any of the above-described embodiments, the planning unit 320 comprises a path planning subunit, an avoidance subunit, and a composition subunit;
wherein the path planning subunit is configured to input the current positioning information and the current image to a path planning model in the action planning model and obtain the first action information output by the path planning model, the path planning model being trained based on the sample positioning information, the sample images, the sample action information, and sample path labels;
the avoidance subunit is configured to input the current image to an obstacle avoidance model in the action planning model and obtain the second action information output by the obstacle avoidance model, the obstacle avoidance model being trained based on the sample images, the sample action information, and sample avoidance labels;
and the composition subunit is configured to obtain the current action information based on the first action information and the second action information.
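The text does not specify how the two outputs are combined, so the sketch below shows one plausible composition rule only: letting the obstacle-avoidance output override the path-planning output whenever it requests an evasive maneuver. The `"clear"` sentinel and the override policy are assumptions.

```python
def compose_action(first_action, second_action):
    """Combine the path-planning output (first action) with the
    obstacle-avoidance output (second action). Assumed rule: an evasive
    second action takes priority; otherwise follow the planned path."""
    if second_action not in (None, "clear"):
        return second_action  # avoidance overrides path following
    return first_action
```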
Based on any of the above-described embodiments, the path planning subunit comprises a selection module and a path planning module;
wherein the selection module is configured to select, based on the current positioning information and the current image, a current path model from the road straight-ahead model, road left-turn model, and road right-turn model comprised in the path planning model;
and the path planning module is configured to input the current positioning information and the current image to the current path model and obtain the first action information output by the current path model.
Based on any of the above-described embodiments, the selection module is specifically configured to:
obtain a task path based on the current positioning information and destination information;
obtain a target bearing based on the task path and the current image;
and select the current path model from the road straight-ahead model, the road left-turn model, and the road right-turn model based on the target bearing.
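The bearing-based selection in the steps above can be sketched as a threshold rule. The 30-degree threshold and the sign convention (positive bearing means the goal lies to the right) are assumptions; the text only says the model is chosen from the target bearing.

```python
def choose_path_model(target_bearing_deg, models, turn_threshold=30.0):
    """Pick the current path model from the target bearing: near-zero
    bearings keep the straight-ahead model, larger offsets pick the
    left- or right-turn model. Threshold and sign are assumptions."""
    if target_bearing_deg > turn_threshold:
        return models["right"]
    if target_bearing_deg < -turn_threshold:
        return models["left"]
    return models["straight"]
```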
Based on any of the above-described embodiments, the device further comprises an optimization unit;
wherein the optimization unit is configured to train the action planning model based on optimized positioning information, optimized images, optimized action information, and optimization labels.
Based on any of the above-described embodiments, in the device the vision sensing equipment comprises an infrared camera; accordingly, the current image comprises a current thermal infrared image.
Based on any of the above-described embodiments, in the device the positioning device is a GNSS sensor.
Fig. 4 is a schematic diagram of the physical structure of the electronic device provided by an embodiment of the present invention. As shown in Fig. 4, the electronic device may comprise a processor 401, a communication interface 402, a memory 403, and a communication bus 404, wherein the processor 401, the communication interface 402, and the memory 403 communicate with one another through the communication bus 404. The processor 401 can call a computer program stored in the memory 403 and runnable on the processor 401 to execute the mobile robot control method provided by the above embodiments, which comprises, for example: obtaining current positioning information and a current image respectively from the positioning device and vision sensing device mounted on the mobile robot; inputting the current positioning information and the current image to an action planning model and obtaining the current action information output by the action planning model, wherein the action planning model is trained based on sample positioning information, sample images, sample action information, and sample labels; and controlling the mobile robot based on the current action information.
In addition, the logical instructions in the above-mentioned memory 403 can be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or the part contributing to the prior art, can be embodied in the form of a software product that is stored in a storage medium and comprises several instructions causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium comprises various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the mobile robot control method provided by the above embodiments, which comprises, for example: obtaining current positioning information and a current image respectively from the positioning device and vision sensing device mounted on the mobile robot; inputting the current positioning information and the current image to an action planning model and obtaining the current action information output by the action planning model, wherein the action planning model is trained based on sample positioning information, sample images, sample action information, and sample labels; and controlling the mobile robot based on the current action information.
The apparatus embodiments described above are merely exemplary: the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement them without creative labor.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be realized by means of software plus a necessary general hardware platform, and naturally also by hardware. Based on this understanding, the above technical solution, in essence or the part contributing to the prior art, can be embodied in the form of a software product; the computer software product may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and comprises several instructions causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or certain parts of the embodiments.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A mobile robot control method, characterized by comprising:
obtaining current positioning information and a current image respectively from a positioning device and a vision sensing device mounted on a mobile robot;
inputting the current positioning information and the current image to an action planning model and obtaining current action information output by the action planning model, wherein the action planning model is trained based on sample positioning information, sample images, sample action information, and sample labels; and
controlling the mobile robot based on the current action information.
2. The method according to claim 1, characterized in that inputting the current positioning information and the current image to the action planning model and obtaining the current action information output by the action planning model specifically comprises:
inputting the current positioning information and the current image to a path planning model in the action planning model and obtaining first action information output by the path planning model, wherein the path planning model is trained based on the sample positioning information, the sample images, the sample action information, and sample path labels;
inputting the current image to an obstacle avoidance model in the action planning model and obtaining second action information output by the obstacle avoidance model, wherein the obstacle avoidance model is trained based on the sample images, the sample action information, and sample avoidance labels; and
obtaining the current action information based on the first action information and the second action information.
3. The method according to claim 2, characterized in that inputting the current positioning information and the current image to the path planning model in the action planning model and obtaining the first action information output by the path planning model specifically comprises:
selecting, based on the current positioning information and the current image, a current path model from a road straight-ahead model, a road left-turn model, and a road right-turn model comprised in the path planning model; and
inputting the current positioning information and the current image to the current path model and obtaining the first action information output by the current path model.
4. The method according to claim 3, characterized in that selecting, based on the current positioning information and the current image, the current path model from the road straight-ahead model, road left-turn model, and road right-turn model comprised in the path planning model specifically comprises:
obtaining a task path based on the current positioning information and destination information;
obtaining a target bearing based on the task path and the current image; and
selecting the current path model from the road straight-ahead model, the road left-turn model, and the road right-turn model based on the target bearing.
5. The method according to claim 1, characterized in that after controlling the mobile robot based on the current action information, the method further comprises:
training the action planning model based on optimized positioning information, optimized images, optimized action information, and optimization labels.
6. The method according to any one of claims 1 to 5, characterized in that the vision sensing device comprises an infrared camera; accordingly, the current image comprises a current thermal infrared image.
7. The method according to any one of claims 1 to 5, characterized in that the positioning device is a GNSS sensor.
8. A mobile robot control device, characterized by comprising:
an acquisition unit for obtaining current positioning information and a current image respectively from a positioning device and a vision sensing device mounted on a mobile robot;
a planning unit for inputting the current positioning information and the current image to an action planning model and obtaining current action information output by the action planning model, wherein the action planning model is trained based on sample positioning information, sample images, sample action information, and sample labels; and
a control unit for controlling the mobile robot based on the current action information.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a bus, wherein the processor, the communication interface, and the memory communicate with one another through the bus, and the processor can call logical instructions in the memory to execute the method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the method according to any one of claims 1 to 7 is realized when the computer program is executed by a processor.
CN201910247676.3A 2019-03-29 2019-03-29 Mobile robot control method and device Active CN109901589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910247676.3A CN109901589B (en) 2019-03-29 2019-03-29 Mobile robot control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910247676.3A CN109901589B (en) 2019-03-29 2019-03-29 Mobile robot control method and device

Publications (2)

Publication Number Publication Date
CN109901589A true CN109901589A (en) 2019-06-18
CN109901589B CN109901589B (en) 2022-06-07

Family

ID=66954012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910247676.3A Active CN109901589B (en) 2019-03-29 2019-03-29 Mobile robot control method and device

Country Status (1)

Country Link
CN (1) CN109901589B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7364773B2 (en) 2020-02-12 2023-10-18 ファナック株式会社 Teaching system and robot control device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103901888A (en) * 2014-03-18 2014-07-02 北京工业大学 Robot autonomous motion control method based on infrared and sonar sensors
CN107479368A (en) * 2017-06-30 2017-12-15 北京百度网讯科技有限公司 A kind of method and system of the training unmanned aerial vehicle (UAV) control model based on artificial intelligence
CN107818333A (en) * 2017-09-29 2018-03-20 爱极智(苏州)机器人科技有限公司 Robot obstacle-avoiding action learning and Target Searching Method based on depth belief network
CN108245384A (en) * 2017-12-12 2018-07-06 清华大学苏州汽车研究院(吴江) Binocular vision apparatus for guiding blind based on enhancing study
CN108320051A (en) * 2018-01-17 2018-07-24 哈尔滨工程大学 A kind of mobile robot dynamic collision-free planning method based on GRU network models
CN108960432A (en) * 2018-06-22 2018-12-07 深圳市易成自动驾驶技术有限公司 Decision rule method, apparatus and computer readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张海波 等: "基于路径识别的移动机器人视觉导航", 《中国图象图形学报》 *


Also Published As

Publication number Publication date
CN109901589B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN111897332B (en) Semantic intelligent substation robot humanoid inspection operation method and system
KR102434580B1 (en) Method and apparatus of dispalying virtual route
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
CN107655473B (en) Relative autonomous navigation system of spacecraft based on S L AM technology
CN110497901A (en) A kind of parking position automatic search method and system based on robot VSLAM technology
CN110160542A (en) The localization method and device of lane line, storage medium, electronic device
CN107990899A (en) A kind of localization method and system based on SLAM
KR102472767B1 (en) Method and apparatus of calculating depth map based on reliability
CN106796728A (en) Generate method, device, computer system and the mobile device of three-dimensional point cloud
CN107481292A (en) The attitude error method of estimation and device of vehicle-mounted camera
CN111788102A (en) Odometer system and method for tracking traffic lights
CN111958592A (en) Image semantic analysis system and method for transformer substation inspection robot
CN111968048B (en) Method and system for enhancing image data of less power inspection samples
JP2020123317A (en) Method and device for controlling travel of vehicle
US20220194436A1 (en) Method and system for dynamically updating an environmental representation of an autonomous agent
CN111182174B (en) Method and device for supplementing light for sweeping robot
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN108107897A (en) Real time sensor control method and device
CN110751123A (en) Monocular vision inertial odometer system and method
CN114127738A (en) Automatic mapping and positioning
Golovnin et al. Video processing method for high-definition maps generation
CN109901589A (en) Mobile robot control method and apparatus
WO2024036984A1 (en) Target localization method and related system, and storage medium
CN109977884A (en) Target follower method and device
CN111783611A (en) Unmanned vehicle positioning method and device, unmanned vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant