WO2019148467A1 - Positioning method, device, robot and computer-readable storage medium - Google Patents

Positioning method, device, robot and computer-readable storage medium

Info

Publication number
WO2019148467A1
WO2019148467A1 · PCT/CN2018/075170 · CN2018075170W
Authority
WO
WIPO (PCT)
Prior art keywords
positioning
robot
surrounding environment
speed
running speed
Prior art date
Application number
PCT/CN2018/075170
Other languages
English (en)
French (fr)
Inventor
李连中
黄晓庆
徐慎华
张俭
邱胜林
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2018/075170 priority Critical patent/WO2019148467A1/zh
Priority to JP2019561296A priority patent/JP7032440B2/ja
Priority to CN201880001300.8A priority patent/CN108885460B/zh
Publication of WO2019148467A1 publication Critical patent/WO2019148467A1/zh
Priority to US16/687,838 priority patent/US11292131B2/en

Classifications

    • G: PHYSICS
        • G05D: Systems for controlling or regulating non-electric variables
            • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
            • G05D1/02: Control of position or course in two dimensions
            • G05D1/021: specially adapted to land vehicles
            • G05D1/0231: using optical position detecting means
            • G05D1/0246: using a video camera in combination with image processing means
            • G05D1/0212: with means for defining a desired trajectory
            • G05D1/0214: trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
            • G05D1/0223: trajectory involving speed control of the vehicle
            • G05D1/0276: using signals provided by a source external to the vehicle
    • B: PERFORMING OPERATIONS; TRANSPORTING
        • B25J: Manipulators; chambers provided with manipulation devices
            • B25J9/00: Programme-controlled manipulators
            • B25J9/16: Programme controls
            • B25J9/1656: characterised by programming, planning systems for manipulators
            • B25J9/1664: characterised by motion, path, trajectory planning
            • B25J9/1666: avoiding collision or forbidden zones
            • B25J9/1628: characterised by the control loop
            • B25J9/163: learning, adaptive, model based, rule based expert control
    • G: PHYSICS
        • G01C: Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry
            • G01C21/00: Navigation; navigational instruments not provided for in groups G01C1/00 - G01C19/00
            • G01C21/26: specially adapted for navigation in a road network
            • G01C21/28: with correlation of data from several navigational instruments
            • G01C21/30: Map- or contour-matching

Definitions

  • the present application relates to the field of visual navigation technology, and in particular, to a positioning method, device, robot, and computer readable storage medium.
  • VSLAM: visual simultaneous localization and mapping (real-time positioning and map construction)
  • One technical problem to be solved by some embodiments of the present application is to provide a positioning method, apparatus, robot, and computer readable storage medium to solve the above technical problems.
  • An embodiment of the present application provides a positioning method applied to a robot having an autonomous positioning and navigation function, including: while the robot travels over a preset distance, determining that positioning based on images of the surrounding environment has failed; and controlling the robot to decelerate and to continue positioning based on surrounding environment images during the deceleration, until positioning succeeds.
  • An embodiment of the present application provides a positioning device applied to a robot having an autonomous positioning and navigation function, including a positioning module and a control module. The positioning module is configured to perform positioning based on surrounding environment images while the robot travels over a preset distance; the control module is configured to control the robot to decelerate after the positioning module determines that positioning based on the surrounding environment images has failed, and to control the positioning module to continue positioning during the deceleration until positioning succeeds.
  • An embodiment of the present application provides a robot including at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor; the instructions are executed by the at least one processor to enable it to perform the positioning method of any method embodiment of the present application.
  • One embodiment of the present application provides a computer readable storage medium storing computer instructions for causing a computer to perform the positioning method involved in any of the method embodiments of the present application.
  • Compared with the prior art, the robot with the autonomous positioning and navigation function performs positioning by acquiring surrounding environment images while traveling over a preset distance; when it determines that positioning based on those images has failed, the robot decelerates and continues positioning during the deceleration until positioning succeeds. In this way, the robot can reasonably adjust its running speed according to the environment it is traveling through, accurately determine its position, and thus plan its navigation route promptly and accurately to complete its work.
  • FIG. 1 is a flow chart of a positioning method in a first embodiment of the present application
  • FIG. 2 is a schematic diagram of adjusting the running speed of the robot according to a cosine or sinusoidal speed curve in the first embodiment of the present application;
  • FIG. 3 is a schematic diagram of adjusting the running speed of the robot according to the trapezoidal speed curve in the first embodiment of the present application;
  • FIG. 4 is a schematic diagram of adjusting the running speed of the robot according to the S-type speed curve in the first embodiment of the present application
  • FIG. 5 is a flowchart of a positioning method in a second embodiment of the present application.
  • FIG. 6 is a schematic diagram of adjusting the running speed of the robot according to a cosine or sinusoidal speed curve in the second embodiment of the present application;
  • FIG. 7 is a schematic diagram of adjusting a running speed of a robot according to a trapezoidal speed curve in a second embodiment of the present application.
  • FIG. 8 is a schematic diagram of adjusting the running speed of the robot according to the S-type speed curve in the second embodiment of the present application.
  • Figure 9 is a block diagram showing a positioning device in a third embodiment of the present application.
  • Figure 10 is a block diagram showing the robot in the fourth embodiment of the present application.
  • the first embodiment of the present application relates to a positioning method, which is mainly applied to a robot having an autonomous positioning navigation function, and the specific process thereof is as shown in FIG. 1 .
  • Step 101: while the robot travels over the preset distance, it is determined that positioning based on the surrounding environment image has failed.
  • the operation of positioning through the surrounding environment image can be specifically implemented by:
  • First, the robot acquires a surrounding environment image at a preset interval (for example, every 5 s).
  • Then, a processing unit in the robot (such as a CPU) matches the acquired surrounding environment image against the images in a pre-built map sample set: if the image matches none of the images in the set, positioning is deemed to have failed; if it matches any image in the set, positioning is deemed successful.
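  • The match-against-sample-set step above can be sketched as follows; the similarity metric, the threshold, and all helper names are illustrative assumptions, not part of this application (a real system would compare visual features such as descriptors rather than raw equality):

```python
def match_score(image, sample):
    # Placeholder similarity metric: a stand-in for a real visual-feature
    # comparison; returns 1.0 on an exact match, else 0.0.
    return 1.0 if image == sample else 0.0

def try_locate(surrounding_image, map_samples, threshold=0.8):
    """Return the index of the first map sample the image matches,
    or None if positioning fails (no sample matches)."""
    for idx, sample in enumerate(map_samples):
        if match_score(surrounding_image, sample) >= threshold:
            return idx  # positioning succeeded
    return None         # positioning failed
```

A match against any sample counts as success, mirroring the "matches any of the images in the map sample set" condition above.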
  • In addition, the images in the pre-built map sample set described in this embodiment may be obtained with a data-crawling tool such as a web crawler and, to ensure positioning accuracy, may be updated periodically in practical applications; the specific implementation is not described here and can be set by those skilled in the art as needed, without limitation.
  • the surrounding environment image for matching the image in the pre-built map sample set is specifically acquired by using an image capturing device (such as a camera) according to a preset period.
  • The mounting position of the image capturing device used to collect surrounding environment images is not limited: it may be mounted on the robot (for a robot working directly on the road surface) or on another object (for a robot working on a moving object); those skilled in the art can arrange it reasonably according to actual needs, without limitation.
  • The preset distance that the robot travels in this embodiment is set based on visual simultaneous localization and mapping (VSLAM) autonomous positioning and navigation technology. For a given robot, the preset distance may be, for example, 10 meters: within each 10-meter segment, as long as one surrounding environment image that matches an image in the map sample set is collected, the robot's current position can be located; positioning is then performed once per preset-distance segment.
  • Step 102: the robot is controlled to decelerate, and positioning based on the surrounding environment image continues during the deceleration, until positioning succeeds.
  • step 102 can be specifically implemented by:
  • The robot is controlled to decelerate from the first running speed (the running speed set for normal work, which is also the running speed at the time of the most recent successful positioning) to a second running speed; after the robot decelerates to the second running speed, the image acquisition device reacquires a surrounding environment image and positioning is attempted again.
  • If positioning succeeds at the second running speed, the deceleration process stops and the robot continues working at the second running speed.
  • If positioning still fails, the robot is controlled to decelerate further, for example from the second running speed to a third running speed; after decelerating to the third running speed, the image acquisition device again acquires a surrounding environment image and positioning is attempted, and so on until positioning succeeds.
  • The deceleration of the robot is carried out according to a speed curve, which specifies the correspondence between running speed and time. The speed curve may be any one, or any combination, of the following: a cosine speed curve, a sinusoidal speed curve, a trapezoidal speed curve, and an S-shaped speed curve.
  • Since the robot's speed during deceleration cannot become negative (i.e., drop below 0), to make the speed-curve-based deceleration match actual use, in this embodiment the angle of the cosine in the cosine speed curve takes values in [0, π], and the angle of the sine in the sinusoidal speed curve takes values in [π, 2π].
  • The cosine speed curve is drawn from the cosine function and the sinusoidal speed curve from the sine function. Since the two functions can be substituted for each other, the cosine curve over [0, π] is identical to the sine curve over [π, 2π], so decelerating the robot along either curve yields the graph shown in Fig. 2.
  • the cosine speed curve will be taken as an example and will be specifically described with reference to Fig. 2 .
  • Decelerating the robot along the cosine speed curve can be implemented, for example, according to a formula of the form v(t) = (v0/2)·(1 + cos t), where v0 is the speed at which the robot runs at a constant speed and t is the cosine angle, taking values in [0, π] with endpoints included.
  • Fig. 2 shows that, from a moment t0 at which positioning succeeds, the robot patrols at speed v0 (the first running speed referred to in this embodiment) while continuing the positioning operation. If the robot travels the preset distance D without positioning successfully, it decelerates along the cosine speed curve from v0 to v1 (i.e., v0/2) between t'0 and t1, then runs at that constant speed while continuing to attempt positioning. If positioning succeeds at speed v1, the positioning process is complete; if not, deceleration continues and positioning is attempted until it succeeds.
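  • As an illustration, a cosine deceleration law consistent with the angle range above can be sampled as follows. The exact functional form v(t) = (v0/2)(1 + cos t) is an assumption reconstructed from the stated [0, π] range and the v0 to v0/2 halving in Fig. 2 (reached at t = π/2); it is not a formula quoted from the application:

```python
import math

def cosine_speed(v0, t):
    """Speed on an assumed cosine deceleration curve.
    v(0) = v0 (uniform running speed), v(pi/2) = v0/2, v(pi) = 0."""
    if not 0.0 <= t <= math.pi:
        raise ValueError("cosine angle must lie in [0, pi]")
    return (v0 / 2.0) * (1.0 + math.cos(t))
```

Stopping the sweep at t = π/2 realizes the halving to v1 = v0/2 described for Fig. 2; continuing toward t = π decelerates further without ever going negative.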
  • Fig. 3 is a schematic diagram of adjusting the robot's running speed according to the trapezoidal speed curve: starting from the uniform running speed, the speed is adjusted in the order of uniform deceleration followed by uniform speed, until the robot's current position can be located from a reacquired surrounding environment image.
  • Fig. 4 is a schematic diagram of adjusting the robot's running speed according to the S-shaped speed curve: starting from the uniform running speed, the speed is adjusted in the order of increasing deceleration, uniform deceleration, decreasing deceleration, and uniform speed, until the robot's current position can be located from a reacquired surrounding environment image.
  • Here, "increasing deceleration" refers to decelerating with a growing deceleration magnitude, i.e., in this sub-process the deceleration (the acceleration used to reduce the running speed) is not a fixed value but keeps growing; "decreasing deceleration" refers to decelerating with a shrinking deceleration magnitude, i.e., the gradual easing from the uniform-deceleration phase into the uniform-speed phase.
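  • A smooth speed ramp of the S-shaped kind described above (deceleration that first grows, then holds, then tapers off) can be sketched with a smoothstep blend; this particular blend is an illustrative stand-in for the patent's phases, not its exact curve:

```python
def s_curve_profile(v0, v1, steps=100):
    """Sample an S-shaped speed profile from v0 down to v1.
    The smoothstep blend has zero slope at both ends, so the speed
    eases out of the uniform-speed phase and eases back into it."""
    profile = []
    for i in range(steps + 1):
        x = i / steps
        s = x * x * (3.0 - 2.0 * x)   # smoothstep: 0 -> 1, flat at both ends
        profile.append(v0 + (v1 - v0) * s)
    return profile

profile = s_curve_profile(1.0, 0.5)
```

The deceleration magnitude (the slope of the profile) grows near the start, peaks in the middle, and shrinks toward the end, matching the increasing/uniform/decreasing-deceleration ordering above.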
  • acceleration values required to control the deceleration driving and the recovery of the running speed based on the above various speed curves can be reasonably set according to actual needs, and are not limited herein.
  • As for the running speed that each deceleration operation should reach, those skilled in the art can pre-store a running-speed adjustment table in the robot, chosen according to the robot's type and its actual application, so that each required deceleration can be carried out according to the pre-stored table; the specific implementation is neither repeated nor limited here.
  • The positioning method provided by this embodiment enables the robot to adjust its running speed reasonably according to the environment it is traveling through, so that its position can be located accurately and the navigation route can then be planned promptly and precisely to complete the work.
  • a second embodiment of the present application relates to a positioning method.
  • the embodiment is further improved on the basis of the first embodiment.
  • The specific improvement is that, after positioning is determined to be successful, the robot is controlled to accelerate back to the first running speed; the specific process is shown in FIG. 5.
  • This embodiment includes steps 501 to 503; steps 501 and 502 are substantially the same as steps 101 and 102 in the first embodiment and are not described again here.
  • Step 503: the robot is controlled to accelerate to the first running speed.
  • the process of controlling the robot to accelerate to the first running speed may be specifically performed by:
  • After determining that positioning has succeeded, the robot obtains its current running speed from its own speed-detecting device. As long as the obtained current speed is greater than or equal to 0, an acceleration value is calculated from the recorded first running speed, the current running speed, and the time set for returning to the first running speed, and the robot is then controlled to accelerate to the first running speed according to the calculated acceleration value.
  • In addition, a reasonable running-speed range for the robot's normal work may be set; if the obtained current running speed still falls within this range, the robot may simply continue at the current speed instead of restoring the first running speed.
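  • The acceleration-value computation described above (from the recorded first running speed, the current speed, and a set recovery time) reduces to a constant-acceleration formula; the function and parameter names below are assumptions for illustration:

```python
def recovery_acceleration(v_first, v_current, t_recover):
    """Constant acceleration needed to return from the current running
    speed to the first running speed within t_recover seconds."""
    if v_current < 0.0 or t_recover <= 0.0:
        raise ValueError("need v_current >= 0 and t_recover > 0")
    return (v_first - v_current) / t_recover
```

For example, returning from 0.5 m/s to 2.0 m/s within 3 s requires a constant acceleration of 0.5 m/s².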
  • The cosine speed curve is drawn from the cosine function and the sinusoidal speed curve from the sine function. Since the two functions can be substituted for each other, the cosine curve over [0, π] is identical to the sine curve over [π, 2π], so decelerating the robot along either curve yields the graph shown in Fig. 6.
  • the cosine speed curve will be taken as an example and will be specifically described with reference to Fig. 6.
  • Decelerating the robot along the cosine speed curve can be implemented, for example, according to a formula of the form v(t) = (v0/2)·(1 + cos t), where v0 is the speed at which the robot runs at a constant speed and t is the cosine angle, taking values in [0, π] with endpoints included.
  • Fig. 6 shows that, from a moment t0 at which positioning succeeds, the robot patrols at speed v0 (the first running speed referred to in this embodiment) while continuing the positioning operation. If the robot travels the preset distance D without positioning successfully, it decelerates along the cosine speed curve from v0 to v1 (i.e., v0/2) between t'0 and t1, then runs at that constant speed while continuing to attempt positioning.
  • Fig. 7 is a schematic diagram of adjusting the robot's running speed according to the trapezoidal speed curve: starting from the uniform running speed, the speed is adjusted in the order of uniform deceleration followed by uniform speed, until the robot's current position can be located from a reacquired surrounding environment image. After positioning succeeds, the robot is controlled to accelerate back to v0, specifically by adjusting its current running speed in a uniform-acceleration manner until the initial running speed v0 is restored.
  • Fig. 8 is a schematic diagram of adjusting the robot's running speed according to the S-shaped speed curve: starting from the uniform running speed, the speed is adjusted in the order of increasing deceleration, uniform deceleration, decreasing deceleration, and uniform speed, until the robot's position is located from a reacquired surrounding environment image. After positioning succeeds, the robot is accelerated back to v0, specifically by adjusting its current running speed in the order of increasing acceleration, uniform acceleration, and decreasing acceleration until the initial running speed v0 is restored.
  • Here, the decreasing-deceleration phase is the reverse of the increasing-deceleration phase, and the decreasing-acceleration phase is the reverse of the increasing-acceleration phase.
  • acceleration values required to control the deceleration driving and the recovery of the running speed based on the above various speed curves can be reasonably set according to actual needs, and are not limited herein.
  • With the positioning method provided by this embodiment, after positioning is determined to be successful, the robot is controlled to accelerate from its current running speed back to the first (initial) running speed, so that it can finish its work within the preset time as far as possible.
  • a third embodiment of the present application relates to a positioning device.
  • The positioning device is mainly applied to a robot having an autonomous positioning and navigation function; its specific structure is shown in FIG. 9.
  • the positioning device includes a positioning module 901 and a control module 902.
  • the positioning module 901 is configured to perform positioning by using a surrounding environment image during the process of driving to a preset distance.
  • The control module 902 is configured to control the robot to decelerate after the positioning module 901 determines that positioning based on the surrounding environment image has failed, and to control the positioning module 901 to continue positioning during the deceleration until positioning succeeds.
  • The positioning module 901 can perform positioning based on the surrounding environment image as follows: the surrounding environment image is matched against the images in the pre-built map sample set; if it matches none of them, positioning is deemed to have failed, and if it matches any image in the set, positioning is deemed successful.
  • The surrounding environment image used for matching against the images in the pre-built map sample set is obtained with the image acquisition device at a preset interval, for example once every 5 s.
  • The interval at which the image acquisition device collects surrounding environment images can be set reasonably according to factors such as the robot's actual environment and performance, without restriction here.
  • The mounting position of the image capturing device is likewise not limited: it may be mounted on the robot (for a robot working directly on the road surface) or on another object (for a robot working on a moving object), and can be arranged reasonably according to actual needs, without limitation.
  • In the positioning device provided by this embodiment, the positioning module and the control module cooperate so that a robot equipped with the device can reasonably adjust its running speed according to the environment it is traveling through, accurately locate its position, and then plan its navigation route promptly and accurately to complete its work.
  • A fourth embodiment of the present application relates to a robot; its specific structure is shown in FIG. 10.
  • The robot may be a device with an autonomous positioning and navigation function such as a smart sweeping robot, a navigation robot, a drone, or an unmanned vehicle. Internally, it includes one or more processors 1001 and a memory 1002; FIG. 10 takes one processor 1001 as an example.
  • Each functional module of the positioning device in the foregoing embodiment is deployed on the processor 1001; the processor 1001 and the memory 1002 may be connected by a bus or in another manner.
  • the memory 1002 is a computer readable storage medium, and can be used to store a software program, a computer executable program, and a module, such as a program instruction/module corresponding to the positioning method involved in any method embodiment of the present application.
  • the processor 1001 executes various functional applications and data processing of the server by executing software programs, instructions, and modules stored in the memory 1002, that is, implementing the positioning method involved in any method embodiment of the present application.
  • The memory 1002 may include a program storage area and a data storage area; the program storage area may store an operating system and the application required for at least one function, while the data storage area may hold a history database, a map sample set, and the like.
  • the memory 1002 may include a high speed random access memory, and may also include a readable and writable memory (RAM).
  • the memory 1002 can optionally include memory remotely located relative to the processor 1001, which can be connected to the terminal device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The memory 1002 stores instructions executable by the at least one processor 1001; when executed by the at least one processor 1001, the instructions enable it to perform the positioning method of any method embodiment of the present application, controlling each functional module of the positioning device to carry out the positioning operations in that method.
  • The robot in this embodiment may also be a cloud-based intelligent robot, i.e., one whose "brain" for processing operations is in the cloud.
  • A cloud-based intelligent robot connects the robot body to the cloud "brain" over a secure, fast mobile network, turning the cloud's intelligent computing capability into a convenient service; this greatly reduces the development and operating costs of intelligent robots, and the cloud's powerful computing capability makes autonomous navigation and rapid positioning more convenient and faster.
  • a fifth embodiment of the present application is directed to a computer readable storage medium having stored therein computer instructions that enable a computer to perform the positioning method involved in any of the method embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The present application relates to the field of visual navigation technology and discloses a positioning method, device, robot, and computer-readable storage medium. In the present application, the positioning method is applied to a robot having an autonomous positioning and navigation function and includes: while the robot travels over a preset distance, determining that positioning based on surrounding environment images has failed; and controlling the robot to decelerate and to continue positioning based on surrounding environment images during the deceleration, until positioning succeeds. This positioning method can, while the robot travels, reasonably adjust the robot's running speed according to its environment and accurately locate the robot's position.

Description

Positioning method, device, robot and computer-readable storage medium

Technical Field

The present application relates to the field of visual navigation technology, and in particular to a positioning method, device, robot, and computer-readable storage medium.

Background Art

With the rapid development of science and technology, such as sensor technology and artificial-intelligence algorithms, autonomous positioning and navigation for robots based on visual simultaneous localization and mapping (VSLAM) has achieved considerable results, further enriching robots' functions and continually expanding their range of applications. For example, automatic delivery robots used in the logistics industry are gradually replacing people in part or all of certain jobs, giving more and more people more personal leisure time to enjoy life.

However, the inventors found at least the following problems in the prior art: a robot with autonomous positioning and navigation does not walk only where the surface is flat, such as indoors; it also has to travel in environments with complex ground conditions, such as outdoors on cobblestone pavement. When walking on an uneven surface the robot keeps jolting, so the image acquisition device used to collect surrounding environment images (such as a camera) stays in an exposure state for a long time and cannot capture clear images. Because the captured surrounding environment images are unclear, they cannot be matched against the pre-built map to locate the robot's current position, so the robot cannot plan its navigation route promptly and accurately and complete its work, which also seriously degrades the user experience.
Summary
A technical problem to be solved by some embodiments of the present application is to provide a positioning method, an apparatus, a robot, and a computer readable storage medium, so as to solve the above technical problem.
An embodiment of the present application provides a positioning method applied to a robot with autonomous positioning and navigation functions, including: during travel over a preset distance, determining that positioning based on surrounding environment images has failed; and controlling the robot to decelerate, performing positioning based on surrounding environment images during the deceleration until positioning succeeds.
An embodiment of the present application provides a positioning apparatus applied to a robot with autonomous positioning and navigation functions, including a positioning module and a control module. The positioning module is configured to perform positioning based on surrounding environment images during travel over a preset distance. The control module is configured to, after the positioning module determines that positioning based on the surrounding environment images has failed, control the robot to decelerate, and to control the positioning module to perform positioning based on surrounding environment images during the deceleration until positioning succeeds.
An embodiment of the present application provides a robot including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the positioning method involved in any method embodiment of the present application.
An embodiment of the present application provides a computer readable storage medium storing computer instructions, the computer instructions causing a computer to perform the positioning method involved in any method embodiment of the present application.
Compared with the prior art, in the embodiments of the present application, a robot with autonomous positioning and navigation functions performs positioning by acquiring surrounding environment images during travel over a preset distance; when it determines that positioning based on the surrounding environment images has failed, it decelerates and continues to perform positioning based on surrounding environment images during the deceleration until positioning succeeds. With this positioning method, the robot can, while traveling, reasonably adjust its running speed according to the environment it is in, accurately determine its position, and thereby plan a navigation route and complete its work in a timely and accurate manner.
Brief Description of the Drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not limit the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
Fig. 1 is a flowchart of the positioning method in the first embodiment of the present application;
Fig. 2 is a schematic diagram of adjusting the running speed of the robot according to a cosine or sine velocity curve in the first embodiment of the present application;
Fig. 3 is a schematic diagram of adjusting the running speed of the robot according to a trapezoidal velocity curve in the first embodiment of the present application;
Fig. 4 is a schematic diagram of adjusting the running speed of the robot according to an S-shaped velocity curve in the first embodiment of the present application;
Fig. 5 is a flowchart of the positioning method in the second embodiment of the present application;
Fig. 6 is a schematic diagram of adjusting the running speed of the robot according to a cosine or sine velocity curve in the second embodiment of the present application;
Fig. 7 is a schematic diagram of adjusting the running speed of the robot according to a trapezoidal velocity curve in the second embodiment of the present application;
Fig. 8 is a schematic diagram of adjusting the running speed of the robot according to an S-shaped velocity curve in the second embodiment of the present application;
Fig. 9 is a schematic block diagram of the positioning apparatus in the third embodiment of the present application;
Fig. 10 is a schematic block diagram of the robot in the fourth embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application and are not intended to limit it.
The first embodiment of the present application relates to a positioning method mainly applied to a robot with autonomous positioning and navigation functions; its flow is shown in Fig. 1.
In step 101, during travel over a preset distance, it is determined that positioning based on surrounding environment images has failed.
Specifically, while the robot travels over the preset distance, positioning based on surrounding environment images may be implemented as follows:
First, the robot captures surrounding environment images at a preset period (for example, every 5 s).
Then, a processing unit in the robot (such as a CPU) matches the acquired surrounding environment image against the images in a pre-built map sample set. If the surrounding environment image matches none of the images in the map sample set, positioning is determined to have failed; if it matches any image in the map sample set, positioning is determined to have succeeded.
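For illustration only, the match-and-decide step described above can be sketched as follows. The similarity measure, the 0.75 threshold, and all names are assumptions introduced for the example; they stand in for real VSLAM feature matching and are not part of the disclosed method.

```python
# Compare a captured image (here a toy feature vector) against every
# image in the pre-built map sample set; fail only if none matches.

def match_score(image, sample):
    """Toy similarity: fraction of equal entries in two feature vectors."""
    hits = sum(1 for a, b in zip(image, sample) if a == b)
    return hits / max(len(image), 1)

def localize(image, map_samples, threshold=0.75):
    """Return the index of the first matching map sample, or None on failure."""
    for idx, sample in enumerate(map_samples):
        if match_score(image, sample) >= threshold:
            return idx  # positioning succeeded
    return None         # matched none of the samples: positioning failed

samples = [[1, 2, 3, 4], [5, 6, 7, 8]]
assert localize([5, 6, 7, 0], samples) == 1
assert localize([9, 9, 9, 9], samples) is None
```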
In addition, the images in the pre-built map sample set mentioned in this embodiment may be obtained with data-crawling tools such as web crawlers, and to ensure positioning accuracy they may be updated periodically in practice. The specific implementation is not repeated here; those skilled in the art may configure it as needed, without limitation here.
In addition, in this embodiment, the surrounding environment images used for matching against the images in the pre-built map sample set are captured at the preset period by an image acquisition device (such as a camera).
It is also worth mentioning that in this embodiment the placement of the image acquisition device is not limited: it may be mounted on the robot (for robots working directly on the ground) or on another object (for robots working on a moving object). Those skilled in the art may configure this according to actual needs, without limitation here.
In addition, the preset travel distance mentioned in this embodiment is set based on visual simultaneous localization and mapping (VSLAM) autonomous positioning and navigation technology. For a given robot, the preset distance may be, for example, 10 meters: as long as one surrounding environment image that exactly matches an image in the map sample set is captured within those 10 meters, the robot's current position can be determined; the navigation route is then drawn from the positions determined within each preset-distance segment, thereby achieving autonomous navigation.
It should be noted that the above is only an example and does not limit the technical solution or the scope of protection of the present application. In practice, those skilled in the art may configure as needed the way surrounding environment images are acquired and the way they are matched against the images in the map sample set, without limitation here.
In step 102, the robot is controlled to decelerate, and positioning based on surrounding environment images is performed during the deceleration until positioning succeeds.
Specifically, step 102 may be implemented as follows:
The robot is controlled to decelerate from a first running speed (the robot's configured normal working speed, which is also the running speed at the most recent successful positioning) to a second running speed. After the robot has decelerated to the second running speed, it re-acquires a surrounding environment image with the image acquisition device and performs positioning again.
If positioning succeeds (the newly acquired surrounding environment image matches an image in the map sample set), the deceleration process stops and the robot continues traveling with the second running speed as its working speed.
If positioning fails (the newly acquired surrounding environment image matches none of the images in the map sample set), the robot is controlled to decelerate further, for example from the second running speed to a third running speed. After decelerating to the third running speed, it again re-acquires a surrounding environment image and performs positioning; this continues, with further deceleration, until positioning succeeds.
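For illustration only, the stepped decelerate-and-retry loop described above can be sketched as follows; the speed values and the toy localizer are assumptions chosen for the example.

```python
# Try positioning at each successively lower running speed until it succeeds.

def decelerate_until_localized(speeds, try_localize):
    """Return the first running speed at which positioning succeeds,
    or None if it fails at every speed in the (descending) list."""
    for v in speeds:
        if try_localize(v):  # re-acquire an image and attempt to match
            return v         # stop decelerating, keep this working speed
    return None              # still failing when the speed list is exhausted

# Toy localizer: images are sharp enough only at or below 0.5 m/s.
assert decelerate_until_localized([1.0, 0.5, 0.25], lambda v: v <= 0.5) == 0.5
```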
It should be noted that in this embodiment the above deceleration control is implemented according to a velocity curve.
The velocity curve represents the correspondence between running speed and time.
The velocity curve may be any one or any combination of the following: a cosine velocity curve, a sine velocity curve, a trapezoidal velocity curve, and an S-shaped velocity curve.
In addition, in practice the robot's speed cannot become negative (i.e., fall below 0) while decelerating. Therefore, to make deceleration control based on the velocity curve better match actual use, the cosine angle in the cosine velocity curve provided in this embodiment takes values in [0, π], and the sine angle in the sine velocity curve takes values in [π, 2π].
To facilitate understanding of how the robot is decelerated according to the above four velocity curves, details are given below with reference to Figs. 2 to 4.
It should be noted that the cosine velocity curve is drawn from the cosine function and the sine velocity curve from the sine function; the two functions are interchangeable, and the cosine velocity curve drawn with the cosine angle in [0, π] is identical to the sine velocity curve drawn with the sine angle in [π, 2π]. The curves obtained by decelerating the robot based on either the cosine or the sine velocity curve are therefore both represented by Fig. 2, and for ease of description the cosine velocity curve is taken as an example below with reference to Fig. 2.
Specifically, decelerating the robot based on the cosine velocity curve is implemented according to the following formula:
v = (v0/2) · (1 + cos T)
where v0 is the robot's uniform running speed, and T is the cosine angle, taking values in [0, π] inclusive.
Fig. 2 can be obtained from the above formula. Specifically, Fig. 2 shows the following: from time t0 of a successful positioning, the robot patrols at speed v0 (the first running speed mentioned in this embodiment) while continuing to perform positioning. If the robot has not positioned successfully after traveling the preset distance D, it decelerates along the cosine velocity curve from t'0 to t1 down to v1 (i.e., v0/2) and then runs at constant speed, continuing to perform positioning while doing so. If positioning succeeds while running at v1, this positioning process is complete; if not, the robot continues to decelerate and to perform positioning until positioning succeeds.
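For illustration only, the cosine velocity profile can be evaluated as follows. The closed form v = (v0/2)(1 + cos T) is reconstructed from the surrounding description (v0 at T = 0, v0/2 part way through, 0 at T = π) and should be treated as an assumption rather than the exact disclosed formula.

```python
import math

def cosine_speed(v0, T):
    """Cosine deceleration profile: v0 at T = 0, v0/2 at T = pi/2, 0 at T = pi."""
    if not 0.0 <= T <= math.pi:
        raise ValueError("T must lie in [0, pi]")
    return (v0 / 2.0) * (1.0 + math.cos(T))

assert cosine_speed(1.0, 0.0) == 1.0
assert abs(cosine_speed(1.0, math.pi / 2) - 0.5) < 1e-9
assert abs(cosine_speed(1.0, math.pi)) < 1e-9
```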
Fig. 3 is a schematic diagram of adjusting the robot's running speed according to the trapezoidal velocity curve. As Fig. 3 shows, with the trapezoidal velocity curve the running speed is adjusted from the uniform running speed in the order of uniform deceleration, then constant speed, until the robot's current position can be determined from a newly acquired surrounding environment image.
Fig. 4 is a schematic diagram of adjusting the robot's running speed according to the S-shaped velocity curve. As Fig. 4 shows, with the S-shaped velocity curve the running speed is adjusted from the uniform running speed in the order of decreasing-acceleration deceleration, uniform deceleration, increasing-acceleration deceleration, and then constant speed, until the robot's current position can be determined from a newly acquired surrounding environment image.
It should be noted that the decreasing-acceleration deceleration mentioned above refers to deceleration performed with a continuously decreasing acceleration (the acceleration used to reduce the running speed); that is, during this phase the acceleration is not a fixed value but keeps decreasing.
The increasing-acceleration deceleration refers to deceleration performed with a continuously increasing acceleration, i.e., a slight easing at the transition from the uniform-deceleration phase to the constant-speed phase.
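For illustration only, the S-shaped deceleration phases described above can be sketched as follows: braking eases in (growing deceleration), holds, then eases out (shrinking deceleration). A smoothstep interpolation stands in for a tuned jerk-limited profile; it is an assumption, not the disclosed curve.

```python
def s_curve_decel(v_start, v_end, steps):
    """Sample an S-shaped speed ramp from v_start down to v_end."""
    out = []
    for i in range(steps + 1):
        s = i / steps
        smooth = 3 * s * s - 2 * s ** 3  # smoothstep: zero slope at both ends
        out.append(v_start + (v_end - v_start) * smooth)
    return out

profile = s_curve_decel(1.0, 0.5, 10)
assert profile[0] == 1.0 and abs(profile[-1] - 0.5) < 1e-9
# the speed never increases while braking
assert all(a >= b - 1e-12 for a, b in zip(profile, profile[1:]))
```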
It should also be noted that the above is only an example and does not limit the technical solution or the scope of protection of the present application. In practice, those skilled in the art may reasonably configure the operations of decelerating the robot based on the above velocity curves using the technical means at their disposal.
In addition, the acceleration values needed to decelerate the robot and to restore its running speed based on the above velocity curves may be set reasonably according to actual needs, and are likewise not limited here.
As for the specific running speed to which each deceleration step reduces, those skilled in the art may, according to the robot type and the scenario in which the robot is actually applied, store a running-speed adjustment table in the robot in advance, so that whenever a deceleration step is needed the robot can decelerate according to the pre-stored table; the specific implementation is not repeated or limited here.
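For illustration only, such a pre-stored running-speed adjustment table can be sketched as a simple mapping from each working speed to the speed used after the next failed positioning attempt; the concrete values are assumptions chosen for the example.

```python
# Hypothetical pre-stored speed adjustment table (values are illustrative).
SPEED_TABLE = {1.0: 0.5, 0.5: 0.25, 0.25: 0.1}

def next_speed(current):
    """Look up the next lower running speed; hold speed if no lower entry."""
    return SPEED_TABLE.get(current, current)

assert next_speed(1.0) == 0.5
assert next_speed(0.1) == 0.1  # floor speed: no lower entry, hold it
```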
Compared with the prior art, the positioning method provided in this embodiment enables the robot, while traveling, to reasonably adjust its running speed according to the environment it is in, accurately determine its position, and thereby plan a navigation route and complete its work in a timely and accurate manner.
The second embodiment of the present application relates to a positioning method. This embodiment is a further improvement on the first embodiment; the improvement is that, after positioning is determined to be successful, the robot is controlled to accelerate to the first running speed. The flow is shown in Fig. 5.
Specifically, this embodiment includes steps 501 to 503, where steps 501 and 502 are substantially the same as steps 101 and 102 in the first embodiment, respectively, and are not repeated here. The differences are described below; for technical details not exhaustively described in this embodiment, refer to the positioning method provided in the first embodiment.
In step 503, the robot is controlled to accelerate to the first running speed.
Specifically, after positioning is determined to be successful, controlling the robot to accelerate to the first running speed may be done as follows:
For example, after positioning succeeds, the robot obtains its current running speed from its own speed detection device. As long as the obtained current running speed is greater than or equal to 0, an acceleration value is calculated from the recorded first running speed, the current running speed, and the configured time required to return to the first running speed, and the robot is then accelerated to the first running speed at the calculated acceleration.
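For illustration only, the computation described above can be sketched as a constant acceleration derived from the recorded first running speed, the current speed, and the configured recovery time; the function and parameter names are assumptions.

```python
def recovery_acceleration(v_first, v_now, t_recover):
    """Constant acceleration needed to return from the current speed
    v_now to the recorded first running speed v_first in t_recover seconds."""
    if t_recover <= 0:
        raise ValueError("recovery time must be positive")
    return (v_first - v_now) / t_recover

assert recovery_acceleration(1.0, 0.5, 2.0) == 0.25
```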
Alternatively, a reasonable running-speed range for normal robot operation may be set according to the actual situation. If the obtained current running speed is still within this range, the robot may either return to the first running speed or continue traveling at the current running speed.
If the obtained current running speed is outside the reasonable running-speed range for normal operation, the robot is controlled to accelerate to the first running speed.
It should be noted that the above is only an example and does not limit the technical solution or the scope of protection of the present application; in practice, those skilled in the art may configure this reasonably according to actual needs, without limitation here.
To facilitate understanding of how the robot is decelerated according to the four velocity curves (cosine, sine, trapezoidal, and S-shaped), details are given below with reference to Figs. 6 to 8.
It should be noted that the cosine velocity curve is drawn from the cosine function and the sine velocity curve from the sine function; the two functions are interchangeable, and the cosine velocity curve drawn with the cosine angle in [0, π] is identical to the sine velocity curve drawn with the sine angle in [π, 2π]. The curves obtained by decelerating the robot based on either the cosine or the sine velocity curve are therefore both represented by Fig. 6, and for ease of description the cosine velocity curve is taken as an example below with reference to Fig. 6.
Specifically, decelerating the robot based on the cosine velocity curve is implemented according to the following formula:
v = (v0/2) · (1 + cos T)
where v0 is the robot's uniform running speed, and T is the cosine angle, taking values in [0, π] inclusive.
Fig. 6 can be obtained from the above formula. Specifically, Fig. 6 shows the following: from time t0 of a successful positioning, the robot patrols at v0 (the first running speed mentioned in this embodiment) while continuing to perform positioning. If positioning has not succeeded after traveling the preset distance D, the robot decelerates along the cosine velocity curve from t'0 to t1 down to v1 (i.e., v0/2), then runs at constant speed while continuing to perform positioning. If positioning succeeds while running at v1, the robot speeds back up to v0 along the cosine velocity curve; if not, it continues to decelerate and to perform positioning until positioning succeeds, and then speeds back up to v0 along the cosine velocity curve.
It should be noted that the above formula covers an extreme case in which, while the robot decelerates and performs positioning, positioning does not succeed until the running speed drops to 0. In this case, to avoid the robot losing its balance and falling over due to a sudden jump back up to v0, the robot may first be accelerated from 0 to v1 and then from v1 up to v0.
In practice, those skilled in the art may configure this in advance; the specific implementation is not limited here.
Fig. 7 is a schematic diagram of adjusting the robot's running speed according to the trapezoidal velocity curve. As Fig. 7 shows, with the trapezoidal velocity curve the running speed is adjusted from the uniform running speed in the order of uniform deceleration, then constant speed, until the robot's current position can be determined from a newly acquired surrounding environment image. After positioning succeeds, the robot is accelerated back to v0 by adjusting its current running speed with uniform acceleration until the initial running speed v0 is restored.
Fig. 8 is a schematic diagram of adjusting the robot's running speed according to the S-shaped velocity curve. As Fig. 8 shows, with the S-shaped velocity curve the running speed is adjusted from the uniform running speed in the order of decreasing-acceleration deceleration, uniform deceleration, increasing-acceleration deceleration, and constant speed, until the robot's current position can be determined from a newly acquired surrounding environment image. After positioning succeeds, the robot's current running speed is adjusted in the order of decreasing-rate acceleration, uniform acceleration, and increasing-rate acceleration until the initial running speed v0 is restored.
It should be noted that the decreasing-rate acceleration mentioned above is the reverse process of the increasing-acceleration deceleration, and the increasing-rate acceleration is the reverse process of the decreasing-acceleration deceleration.
It should also be noted that the above is only an example and does not limit the technical solution or the scope of protection of the present application. In practice, those skilled in the art may reasonably configure the operations of decelerating the robot based on the above velocity curves using the technical means at their disposal.
In addition, the acceleration values needed to decelerate the robot and to restore its running speed based on the above velocity curves may be set reasonably according to actual needs, and are likewise not limited here.
Compared with the prior art, the positioning method provided in this embodiment, after positioning is determined to be successful, controls the robot to accelerate to the first running speed, i.e., to return from the current running speed to the initial running speed, so that the robot can complete its work within the preset time as far as possible.
The third embodiment of the present application relates to a positioning apparatus. The positioning apparatus is mainly applied to a robot with autonomous positioning and navigation functions; its structure is shown in Fig. 9.
As shown in Fig. 9, the positioning apparatus includes a positioning module 901 and a control module 902.
The positioning module 901 is configured to perform positioning based on surrounding environment images during travel over a preset distance.
The control module 902 is configured to, after the positioning module 901 determines that positioning based on the surrounding environment images has failed, control the robot to decelerate and control the positioning module 901 to perform positioning based on surrounding environment images during the deceleration until positioning succeeds.
Specifically, in this embodiment, while a robot equipped with this positioning apparatus travels over the preset distance, the positioning module 901 may implement positioning based on surrounding environment images as follows: matching the surrounding environment image against the images in a pre-built map sample set; determining that positioning has failed if the surrounding environment image matches none of the images in the map sample set; and determining that positioning has succeeded if it matches any image in the map sample set.
It is also worth mentioning that in this embodiment the surrounding environment images used for matching against the images in the pre-built map sample set are captured by an image acquisition device at a preset period, for example once every 5 s.
It should be noted that the above is only an example and does not limit the technical solution or the scope of protection of the present application. In practice, the period at which the image acquisition device captures surrounding environment images may be set reasonably according to factors such as the robot's actual environment and transmission performance, without limitation here.
In addition, the placement of the image acquisition device used to capture surrounding environment images is not limited: it may be mounted on the robot (for robots working directly on the ground) or on another object (for robots working on a moving object). Those skilled in the art may configure this according to actual needs, without limitation here.
For technical details not exhaustively described in this embodiment, refer to the positioning method provided in any embodiment of the present application; they are not repeated here.
It is easy to see from the above description that the positioning apparatus provided in this embodiment, through the cooperation of its positioning module and control module, enables a robot equipped with it to reasonably adjust its running speed according to its environment while traveling, accurately determine its position, and thus plan a navigation route and complete its work in a timely and accurate manner.
The apparatus embodiment described above is merely illustrative and does not limit the scope of protection of the present application. In practice, those skilled in the art may select some or all of its modules according to actual needs to achieve the purpose of this embodiment, without limitation here.
The fourth embodiment of the present application relates to a robot; its structure is shown in Fig. 10.
The robot may be a device with autonomous positioning and navigation functions, such as a smart cleaning robot, a navigation robot, an unmanned aerial vehicle, or a driverless car. It includes one or more processors 1001 and a memory 1002; one processor 1001 is taken as an example in Fig. 10.
In this embodiment, the functional modules of the positioning apparatus involved in the above embodiment are all deployed on the processor 1001. The processor 1001 and the memory 1002 may be connected by a bus or in other ways; a bus connection is taken as an example in Fig. 10.
As a computer readable storage medium, the memory 1002 may be used to store software programs, computer executable programs, and modules, such as the program instructions/modules corresponding to the positioning method involved in any method embodiment of the present application. By running the software programs, instructions, and modules stored in the memory 1002, the processor 1001 performs the device's various functional applications and data processing, i.e., implements the positioning method involved in any method embodiment of the present application.
The memory 1002 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may hold a history database for storing the map sample set and the like. In addition, the memory 1002 may include high-speed random access memory (RAM) and the like. In some embodiments, the memory 1002 may optionally include memory located remotely from the processor 1001; such remote memory may be connected to the terminal device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In practice, the memory 1002 may store instructions executed by the at least one processor 1001; the instructions are executed by the at least one processor 1001 to enable the at least one processor 1001 to perform the positioning method involved in any method embodiment of the present application and to control the functional modules of the positioning apparatus to complete the positioning operations in the positioning method. For technical details not exhaustively described in this embodiment, refer to the positioning method provided in any embodiment of the present application.
It is also worth mentioning that, with the development of cloud computing technology, and to further improve the robot's processing capability, the robot in this embodiment may also be a cloud intelligent robot, i.e., a robot whose "brain" for performing processing operations is located in the cloud.
Specifically, the cloud intelligent robot connects the robot body and the cloud "brain" through a secure and fast mobile network, turning the cloud's intelligent computing capability into a convenient service. This greatly reduces the research, development, and operating costs of intelligent robots, and the powerful computing capability of the cloud makes autonomous navigation and rapid positioning more convenient and fast.
It should be noted that the two types of robots mentioned above are only specific examples in this embodiment and do not limit the technical solution or the scope of protection of the present application. In practice, those skilled in the art may implement the flow of the above positioning method according to the development of existing machine equipment, without limitation here.
The fifth embodiment of the present application relates to a computer readable storage medium storing computer instructions, the computer instructions enabling a computer to perform the positioning method involved in any method embodiment of the present application.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practice various changes may be made to them in form and detail without departing from the spirit and scope of the present application.

Claims (11)

  1. A positioning method applied to a robot with autonomous positioning and navigation functions, the positioning method comprising:
    during travel over a preset distance, determining that positioning based on surrounding environment images has failed;
    controlling the robot to decelerate, and performing positioning based on surrounding environment images during the deceleration until positioning succeeds.
  2. The positioning method according to claim 1, wherein the determining that positioning based on surrounding environment images has failed comprises:
    capturing the surrounding environment images at a preset period;
    matching the surrounding environment images against images in a pre-built map sample set;
    determining that positioning has failed if the surrounding environment images match none of the images in the map sample set.
  3. The positioning method according to claim 1 or 2, wherein the controlling the robot to decelerate and performing positioning based on surrounding environment images during the deceleration until positioning succeeds comprises:
    controlling the robot to decelerate from a first running speed to a second running speed;
    after the robot has decelerated to the second running speed, re-acquiring a surrounding environment image and performing positioning;
    if positioning is determined to be successful, stopping the deceleration process;
    if positioning is determined to have failed, controlling the robot to decelerate further and re-acquiring a surrounding environment image for positioning, until positioning succeeds.
  4. The positioning method according to claim 3, wherein after positioning is determined to be successful, the positioning method further comprises:
    obtaining a current running speed;
    controlling the robot to accelerate to the first running speed if the obtained current running speed is greater than or equal to 0.
  5. The positioning method according to any one of claims 1 to 4, wherein the controlling the robot to decelerate comprises:
    controlling the robot to decelerate according to a velocity curve;
    wherein the velocity curve represents a correspondence between running speed and time.
  6. The positioning method according to claim 5, wherein the velocity curve is any one or any combination of the following: a cosine velocity curve, a sine velocity curve, a trapezoidal velocity curve, and an S-shaped velocity curve.
  7. The positioning method according to claim 6, wherein a cosine angle in the cosine velocity curve takes values in [0, π].
  8. The positioning method according to claim 6, wherein a sine angle in the sine velocity curve takes values in [π, 2π].
  9. A positioning apparatus applied to a robot with autonomous positioning and navigation functions, the positioning apparatus comprising a positioning module and a control module;
    the positioning module being configured to perform positioning based on surrounding environment images during travel over a preset distance;
    the control module being configured to, after the positioning module determines that positioning based on the surrounding environment images has failed, control the robot to decelerate and control the positioning module to perform positioning based on surrounding environment images during the deceleration until positioning succeeds.
  10. A robot, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the positioning method according to any one of claims 1 to 8.
  11. A computer readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to perform the positioning method according to any one of claims 1 to 8.
PCT/CN2018/075170 2018-02-02 2018-02-02 定位方法、装置、机器人及计算机可读存储介质 WO2019148467A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2018/075170 WO2019148467A1 (zh) 2018-02-02 2018-02-02 定位方法、装置、机器人及计算机可读存储介质
JP2019561296A JP7032440B2 (ja) 2018-02-02 2018-02-02 測位方法、装置、ロボット及びコンピューター読み取り可能な記憶媒体
CN201880001300.8A CN108885460B (zh) 2018-02-02 2018-02-02 定位方法、装置、机器人及计算机可读存储介质
US16/687,838 US11292131B2 (en) 2018-02-02 2019-11-19 Localization method and apparatus, and robot and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/075170 WO2019148467A1 (zh) 2018-02-02 2018-02-02 定位方法、装置、机器人及计算机可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/687,838 Continuation US11292131B2 (en) 2018-02-02 2019-11-19 Localization method and apparatus, and robot and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2019148467A1 true WO2019148467A1 (zh) 2019-08-08

Family

ID=64325035

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/075170 WO2019148467A1 (zh) 2018-02-02 2018-02-02 定位方法、装置、机器人及计算机可读存储介质

Country Status (4)

Country Link
US (1) US11292131B2 (zh)
JP (1) JP7032440B2 (zh)
CN (1) CN108885460B (zh)
WO (1) WO2019148467A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115363478B (zh) * 2021-05-17 2024-06-07 尚科宁家(中国)科技有限公司 一种清洁机器人重定位失败的清洁方法和清洁机器人
CN113377104A (zh) * 2021-06-02 2021-09-10 北京布科思科技有限公司 基于差速模型的机器人位置控制方法、装置
CN115237113B (zh) * 2021-08-02 2023-05-12 达闼机器人股份有限公司 机器人导航的方法、机器人、机器人***及存储介质
CN113691734B (zh) * 2021-09-29 2023-04-07 深圳众为兴技术股份有限公司 自适应飞行拍摄控制方法、装置、设备及存储介质
CN114253290B (zh) * 2021-12-15 2024-03-19 成都飞机工业(集团)有限责任公司 一种飞机部件运输车自动循迹和精确定位的方法及***

Citations (6)

Publication number Priority date Publication date Assignee Title
WO2009144805A1 (ja) * 2008-05-29 2009-12-03 三菱電機株式会社 加減速制御装置
CN106200645A (zh) * 2016-08-24 2016-12-07 北京小米移动软件有限公司 自主机器人、控制装置和控制方法
CN107223244A (zh) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 定位方法和装置
CN107246876A (zh) * 2017-07-31 2017-10-13 中北智杰科技(北京)有限公司 一种无人驾驶汽车自主定位与地图构建的方法及***
CN107357286A (zh) * 2016-05-09 2017-11-17 两只蚂蚁公司 视觉定位导航装置及其方法
CN107368071A (zh) * 2017-07-17 2017-11-21 纳恩博(北京)科技有限公司 一种异常恢复方法及电子设备

Family Cites Families (18)

Publication number Priority date Publication date Assignee Title
JPH02204807A (ja) * 1989-02-02 1990-08-14 Toshiba Corp 無人搬送車
JP2969174B1 (ja) * 1998-05-27 1999-11-02 建設省土木研究所長 車の自動合流制御方法及び装置
JP2000284830A (ja) * 1999-03-30 2000-10-13 Komatsu Ltd 無人走行制御装置
JP3670586B2 (ja) 2001-01-10 2005-07-13 Hoyaヘルスケア株式会社 レンズトレー
JP3876698B2 (ja) * 2001-11-28 2007-02-07 日産自動車株式会社 ワーク搬送装置及び搬送方法
US6678582B2 (en) 2002-05-30 2004-01-13 Kuka Roboter Gmbh Method and control device for avoiding collisions between cooperating robots
US20080300777A1 (en) * 2002-07-02 2008-12-04 Linda Fehr Computer-controlled power wheelchair navigation system
JP5444171B2 (ja) * 2010-09-03 2014-03-19 株式会社日立製作所 無人搬送車および走行制御方法
JP5288423B2 (ja) * 2011-04-11 2013-09-11 株式会社日立製作所 データ配信システム、及びデータ配信方法
US8972055B1 (en) * 2011-08-19 2015-03-03 Google Inc. Methods and systems for selecting a velocity profile for controlling a robotic device
US8755966B2 (en) * 2012-04-03 2014-06-17 Caterpillar Inc. System and method for controlling autonomous machine within lane boundaries during position uncertainty
US9081651B2 (en) 2013-03-13 2015-07-14 Ford Global Technologies, Llc Route navigation with optimal speed profile
JP6167622B2 (ja) * 2013-04-08 2017-07-26 オムロン株式会社 制御システムおよび制御方法
EP2952301B1 (en) 2014-06-05 2019-12-25 Softbank Robotics Europe Humanoid robot with collision avoidance and trajectory recovery capabilities
US9910441B2 (en) * 2015-11-04 2018-03-06 Zoox, Inc. Adaptive autonomous vehicle planner logic
US10838427B2 (en) * 2016-10-26 2020-11-17 The Charles Stark Draper Laboratory, Inc. Vision-aided inertial navigation with loop closure
US10949798B2 (en) * 2017-05-01 2021-03-16 Symbol Technologies, Llc Multimodal localization and mapping for a mobile automation apparatus
CN107065784A (zh) 2017-05-09 2017-08-18 杭州电子科技大学 用于直角坐标机器人在高速运动中实现在线多级调整方法

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
WO2009144805A1 (ja) * 2008-05-29 2009-12-03 三菱電機株式会社 加減速制御装置
CN107357286A (zh) * 2016-05-09 2017-11-17 两只蚂蚁公司 视觉定位导航装置及其方法
CN106200645A (zh) * 2016-08-24 2016-12-07 北京小米移动软件有限公司 自主机器人、控制装置和控制方法
CN107223244A (zh) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 定位方法和装置
CN107368071A (zh) * 2017-07-17 2017-11-21 纳恩博(北京)科技有限公司 一种异常恢复方法及电子设备
CN107246876A (zh) * 2017-07-31 2017-10-13 中北智杰科技(北京)有限公司 一种无人驾驶汽车自主定位与地图构建的方法及***

Also Published As

Publication number Publication date
JP2020520006A (ja) 2020-07-02
US20200078944A1 (en) 2020-03-12
JP7032440B2 (ja) 2022-03-08
CN108885460B (zh) 2020-07-03
CN108885460A (zh) 2018-11-23
US11292131B2 (en) 2022-04-05

Similar Documents

Publication Publication Date Title
WO2019148467A1 (zh) 定位方法、装置、机器人及计算机可读存储介质
US11635760B2 (en) Autonomous path treatment systems and methods
WO2020181719A1 (zh) 无人机控制方法、无人机及***
CN107179768B (zh) 一种障碍物识别方法及装置
AU2019404207A1 (en) Collaborative autonomous ground vehicle
CN111958591A (zh) 一种语义智能变电站巡检机器人自主巡检方法及***
CN105929850A (zh) 一种具有持续锁定和跟踪目标能力的无人机***与方法
WO2021164738A1 (en) Area division and path forming method and apparatus for self-moving device and automatic working system
CN104111460A (zh) 自动行走设备及其障碍检测方法
CN109946564B (zh) 一种配网架空线路巡检数据采集方法及巡检***
CN104089649A (zh) 一种室内环境数据采集***及采集方法
CN113885580A (zh) 基于无人机实现自动化巡检风机的路径规划方法及***
CN111762519A (zh) 引导拣选机器人作业的方法、***和调度装置
WO2024146339A1 (zh) 路径规划方法、装置和起重机
CN114281100B (zh) 一种不悬停无人机巡检***及其方法
CN114360261B (zh) 车辆逆行的识别方法、装置、大数据分析平台和介质
JP2020021372A (ja) 情報処理方法および情報処理システム
CN109977884B (zh) 目标跟随方法和装置
CN110470307A (zh) 一种视障患者导航***和方法
EP4175455B1 (en) Autonomous machine having vision system for navigation and method of using same
CN109978174A (zh) 信息处理方法、信息处理装置及程序记录介质
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
TW201727417A (zh) 分析路面曲度並結合資料記錄的自動行走建議系統及方法
CN103376803A (zh) 行李移动***及其方法
CN114941448B (zh) 灰浆清理方法、装置、***及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18903510

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019561296

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/12/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18903510

Country of ref document: EP

Kind code of ref document: A1