WO2018129648A1 - A robot and a method for constructing a map thereof with a depth camera and an obstacle avoidance system - Google Patents

A robot and a method for constructing a map thereof with a depth camera and an obstacle avoidance system Download PDF

Info

Publication number
WO2018129648A1
WO2018129648A1 (PCT/CN2017/070718)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
map
information
obstacle
data
Prior art date
Application number
PCT/CN2017/070718
Other languages
English (en)
French (fr)
Inventor
舒权文
Original Assignee
深圳市极思维智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市极思维智能科技有限公司 filed Critical 深圳市极思维智能科技有限公司
Priority to PCT/CN2017/070718 priority Critical patent/WO2018129648A1/zh
Publication of WO2018129648A1 publication Critical patent/WO2018129648A1/zh

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation

Definitions

  • The present invention relates to the field of robot technology, and in particular to a robot and a method for constructing a map thereof with a depth camera and an obstacle avoidance system.
  • Intelligent robots are used more and more widely in daily life; they can help humans complete activities in life and production. Achieving flexible, efficient, and intelligent movement is an important step toward robot intelligence, so a robot's autonomous navigation capability reflects its degree of intelligence. Map creation, localization, and path planning are the three key elements of autonomous navigation. For example, to travel from place A to place B, you must first know where A is; second, you must know where B is and how A and B relate within the surrounding environment; only then can you use this information to reach B from A. At present, the more common real-time localization and mapping technologies for intelligent robots include FastSLAM and VSLAM.
  • FastSLAM is generally implemented with a laser rangefinder or sonar, while VSLAM is implemented with visual sensors. Because FastSLAM relies on sensors such as lasers and sonar, the robot cannot recognize certain special environments and can only estimate the overall situation by prediction. VSLAM uses visual sensors; many kinds are on the market, and their principles differ. Overall, using visual sensors for autonomous navigation can effectively overcome the problems that arise with FastSLAM.
  • The present invention aims to solve the problem that existing robot map-construction methods do not take into account the complex cases of suspended (drop-off) floors and household obstacles, and provides a robot and a method for constructing a map thereof with a depth camera and an obstacle avoidance system.
  • The method combines the infrared ground detection module and the obstacle detection module as a supplement to the depth camera when constructing a map, so that the constructed map has no blind spots and is more complete.
  • A robot includes a depth camera and an obstacle avoidance system, and further includes a host computer, a terminal display, a lower computer, an infrared ground detection module, an obstacle detection module, an odometer, and a gyroscope.
  • The host computer is connected to the lower computer and is configured to process data between the host computer and the lower computer.
  • The depth camera is connected to the host computer and is used to capture surrounding environment information.
  • The terminal display is connected to the host computer and is used to display data processed by the host computer.
  • The infrared ground detection module is connected to the lower computer and is used to detect suspended (drop-off) data information of the surrounding environment.
  • The obstacle detection module is connected to the lower computer and is used to detect edge data information of the surrounding environment.
  • The odometer and the gyroscope are connected to the lower computer and are used to record robot travel information.
  • The lower computer includes a judgment module, and the judgment module is configured to determine whether a non-movable area exists in the walking path.
  • A wireless module is further included; the wireless module is connected to the host computer and is used to wirelessly transmit the data processed by the host computer and to receive control signals from the terminal.
  • A method for a robot to construct a map with a depth camera and an obstacle avoidance system includes the following steps:
  • Step 1: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space;
  • Step 2: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;
  • Step 3: the robot determines whether a non-movable area exists in the walking path; if not, it returns to Step 2, and if so, it proceeds to Step 4;
  • Step 4: the robot marks the data information of the non-movable area on the preliminary map.
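As a rough illustration of Step 4, marking non-movable-area data on a preliminary map can be sketched as follows. The grid-of-cells representation, the cell coordinates, and the `"suspended"`/`"edge"` labels are assumptions for illustration; the patent does not specify a data structure, only that two kinds of non-movable-area data are marked.

```python
def mark_non_movable(grid, cells, tag):
    """Write non-movable-area data into the preliminary map.

    grid  -- dict mapping (x, y) map-coordinate cells to a label (assumed)
    cells -- cells found to be non-movable
    tag   -- distinguishes suspended (drop-off) data from edge (obstacle)
             data, matching the two marker types the method describes
    """
    for cell in cells:
        grid[cell] = tag
    return grid

grid = {}
mark_non_movable(grid, [(2, 3), (2, 4)], "suspended")  # drop-off cells
mark_non_movable(grid, [(5, 0)], "edge")               # collision/wall cell
```

A real implementation would derive the cells from the robot's recorded coordinates at the moment a sensor fires, as the embodiment describes.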
  • In Step 2, the robot captures the surrounding environment information of the walking space through the depth camera. Specifically, the depth information Z from the robot to an external obstacle is acquired, and the x and y of the depth image of the first obstacle information are calculated from the depth information Z using the triangle formulas, where x represents the distance from the external obstacle to the robot origin and y represents the height of the external obstacle.
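The right-triangle relations given in the embodiment (x = Z·sinα and y = Z·cosα, with the depth reading Z as the hypotenuse) can be sketched like this; the function name and the way the angle α is supplied are illustrative assumptions:

```python
import math

def obstacle_xy(depth_z, alpha_deg):
    """Convert one depth reading into planar obstacle coordinates.

    depth_z   -- measured depth Z from the camera to the obstacle (hypotenuse)
    alpha_deg -- angle of the ray to the obstacle, in degrees (assumed input)

    Returns (x, y): x is the distance from the obstacle to the robot origin,
    y is the obstacle height, per x = Z*sin(a), y = Z*cos(a).
    """
    a = math.radians(alpha_deg)
    return depth_z * math.sin(a), depth_z * math.cos(a)

# A ray at 30 degrees with a 2.0 m depth reading:
x, y = obstacle_xy(2.0, 30.0)
```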
  • In Step 3, the robot determining whether a non-movable area exists in the walking path includes the robot's infrared ground detection module determining, according to the acquired ground detection information, whether a non-movable area exists in the walking path; if so, the suspended data information of the non-movable area is marked on the preliminary map.
  • In Step 3, the robot determining whether a non-movable area exists in the walking path also includes the robot's obstacle detection module determining, according to the acquired collision data, whether a non-movable area exists in the walking path; if so, the edge data information of the non-movable area is marked on the preliminary map.
  • Preferably, the method further includes a step of determining whether the edge data information forms a closed ring; if not, the method returns to Step 2, and if so, the map construction is completed.
  • Preferably, a further step displays the constructed map information on the terminal display.
  • Preferably, a further step transmits the constructed map information wirelessly through the wireless module.
  • The advantageous effects of the robot and its method for constructing a map with a depth camera and an obstacle avoidance system are as follows:
  • The method of constructing a map by combining the robot's depth camera, infrared ground detection module, and obstacle detection module achieves blind-spot-free mapping of the environment, so that the robot can effectively avoid obstacles when walking according to the constructed map.
  • FIG. 1 is a block diagram showing the structure of a robot according to an embodiment of the present invention.
  • FIG. 2 is a flow chart of a method for constructing a map by a robot with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.
  • FIG. 3 is another flow chart of a method for constructing a map by a robot with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.
  • FIG. 4 is a third flow chart of a method for constructing a map of a robot using a depth camera and an obstacle avoidance system according to an embodiment of the present invention.
  • FIG. 5 is a fourth flow chart of a method for constructing a map by a robot using a depth camera and an obstacle avoidance system according to an embodiment of the present invention.
  • FIG. 6 is a vector diagram of the motion mode used to avoid obstacles in a method for constructing a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.
  • FIG. 7 is a structural diagram of obtaining a depth image by the camera of the present invention.
  • FIG. 8 is a schematic diagram of intercepting an effective angle and a rotation manner in a shooting angle of a depth camera.
  • An embodiment of the present invention provides a robot, including a host computer 1, a depth camera 2, a terminal display 3, a wireless module 4, a lower computer 5, an infrared ground detection module 6, an obstacle detection module 7, an odometer 8, a gyroscope 9, and an obstacle avoidance system 10;
  • the host computer 1 is connected to the lower computer 5 and processes data between the host computer 1 and the lower computer 5;
  • the upper computer and the lower computer of the robot use a serial port as a communication channel.
  • the depth camera 2 is connected to the upper computer 1 for ingesting surrounding environment information;
  • The sensor in the depth camera 2 generates a depth-of-field image stream at 30 frames per second.
  • We analyze the depth-of-field images with an existing image-processing library such as OpenCV, find the common feature points in the image stream, and stitch the matching feature points together to build a real-time image of the surrounding environment.
  • The feature points referred to here can be understood as right angles, straight lines, arcs, and the like.
  • The processor of the host computer 1 runs an operating system such as Linux; a SLAM program is installed on the operating system, and the driver of the depth camera 2 is also installed in the operating system.
  • The depth camera 2 transmits the depth-of-field image data to the host computer 1, and the SLAM program reads, analyzes, and uses the data.
  • Specifically, the depth information Z from the robot to an external obstacle is acquired, and the x and y of the depth image of the first obstacle information are calculated from the depth information Z, where x represents the distance from the external obstacle to the robot origin and y represents the height of the external obstacle.
  • The robot of the present invention rotates the depth camera 2 several times. Without rotating, the angle that can be captured is 110 degrees; for better accuracy, we read only the middle 90 degrees. The remaining angles of the 360 degrees are covered in four rotations of 90 degrees each. After each rotation, the triangle formulas above yield the x and y values in the depth image of the new obstacle information, until the full 360 degrees are covered, after which all image information is added to the map being built. During this process, the required angle can be rotated precisely according to the gyroscope.
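The four-rotation sweep described above can be sketched as a small control loop. The two callbacks are assumptions standing in for the real hardware: `read_fov_points` would return the (x, y) obstacle points extracted from the trusted middle 90 degrees of the current view, and `rotate_90` would turn the robot by 90 degrees under gyroscope control.

```python
def sweep_360(read_fov_points, rotate_90):
    """Collect obstacle points over a full revolution.

    The camera sees 110 degrees but only the middle 90 are trusted, so
    four 90-degree rotations tile the full 360-degree circle.
    """
    points = []
    for quadrant in range(4):          # 4 x 90 = 360 degrees
        points.extend(read_fov_points())  # points from current 90-deg window
        rotate_90()                       # gyroscope-guided turn
    return points

# Stub callbacks standing in for the camera and the drive system:
turns = []
pts = sweep_360(lambda: [(1.0, 0.5)], lambda: turns.append(90))
```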
  • The terminal display 3 is connected to the host computer 1 and displays the data processed by the host computer 1;
  • the SLAM algorithm in the host computer 1 superimposes the built environment images into a complete indoor map.
  • Take a 2D map as an example.
  • After the SLAM algorithm has constructed the map, we project the map vertically onto a plane: all edges are enclosed by dark-brown dots, obstacles are shown in red, and blank areas are shown in light gray, so the indoor space appears as a map described by outlines. The colors can be freely defined by the user.
  • The infrared ground detection module 6 is connected to the lower computer 5 and detects the suspended (drop-off) data information of the surrounding environment.
  • The infrared ground detection modules 6 are installed at the bottom of the robot; each module is a circuit composed of one infrared emitting tube and one or more infrared receiving tubes. The module sends infrared light through the emitting tube, the receiving tube acquires AD-value data, and a common algorithm such as differencing or averaging estimates the current effective value, which is then compared with a fixed set value determined experimentally to decide whether the floor is suspended; if so, the suspended data information is marked.
  • There are several infrared ground detection modules 6 at the bottom of the machine; from the marked suspended data information and the installation positions, the machine determines in which direction the drop-off lies, and the suspended data information is then transmitted to the lower computer 5 through the serial port.
  • The lower computer 5 passes it to the host computer 1, and the host computer 1 records the coordinate information at that moment and marks the suspended data information in the map.
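The AD-value comparison just described can be sketched as follows. The mean filter matches the "averaging" the text mentions; the exact threshold semantics (a weak reflection, i.e. a low value, meaning no floor under the sensor) and all numbers are assumptions for illustration.

```python
def is_cliff(ad_samples, threshold):
    """Decide whether the floor has dropped away under one IR sensor.

    The receiver's raw AD readings are smoothed with a simple mean and
    compared against a fixed, experimentally determined set value; a
    reading below the threshold is taken to mean the emitter's beam
    found no floor (assumed polarity).
    """
    effective = sum(ad_samples) / len(ad_samples)   # mean-filtered AD value
    return effective < threshold

# Strong floor reflection vs. readings over a stair edge (made-up numbers):
floor = is_cliff([820, 815, 830, 825], threshold=300)
edge = is_cliff([40, 55, 38, 47], threshold=300)
```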
  • The obstacle detection module 7 is connected to the lower computer 5 and detects edge data information of the surrounding environment.
  • The obstacle detection module 7 is divided into two parts: an infrared wall detection module and a mechanical collision module.
  • The mechanical collision module has two micro switches mounted under the front outer casing of the robot. As soon as the robot hits something, the two micro switches are pressed and the system detects the corresponding edge data information.
  • The mechanical collision module is used to detect lateral stool legs and the like, and its detection effect is obvious in actual operation. After a collision is detected, the lower computer 5 immediately sends the edge data information, and the host computer 1 immediately records the coordinate information at that moment and marks the edge data information in the map.
  • The working principle of the infrared wall detection module is the same as that of the infrared ground detection module 6.
  • The infrared wall detection module can detect obstacle information.
  • Several infrared wall detection modules are installed around the robot. By reading which infrared wall detection module responds, the machine can identify in which direction an obstacle lies, mark the edge data information, and send it to the host computer 1 to be marked on the map.
  • The odometer 8 and the gyroscope 9 are connected to the lower computer 5 and record the robot's travel information.
  • In building the SLAM algorithm, an odometer is indispensable; the odometer is a counter that measures the distance the machine has walked.
  • The robot of the present invention uses a photoelectric detection circuit added to the motor.
  • The machine includes a left wheel, a right wheel, and a front wheel.
  • We can accurately calculate the distance traveled by the machine by detecting the pulse data of the left and right wheels and combining the motor's pulse count with the gearbox parameters of the wheels.
  • The distance measured by the odometer is in millimeters. The left- and right-wheel mileage data are sent through the serial port together with the dynamic timing data, so that the lower computer 5 can analyze the traveling state.
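The pulse-to-distance conversion can be sketched as below. The text only says pulse counts are combined with the gearbox parameters of the wheels; the parameter names and the sample numbers are assumptions for illustration.

```python
import math

def wheel_distance_mm(pulses, pulses_per_rev, gear_ratio, wheel_diameter_mm):
    """Distance travelled by one wheel, in millimetres.

    pulses            -- pulse count from the photoelectric detector (motor)
    pulses_per_rev    -- encoder pulses per motor revolution (assumed name)
    gear_ratio        -- motor revolutions per wheel revolution (gearbox)
    wheel_diameter_mm -- wheel diameter in millimetres
    """
    wheel_revs = pulses / (pulses_per_rev * gear_ratio)
    return wheel_revs * math.pi * wheel_diameter_mm

# 3600 pulses, 12 pulses per motor rev, 30:1 gearbox, 70 mm wheel:
d = wheel_distance_mm(3600, 12, 30, 70)
```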
  • An accelerometer and an azimuth meter are integrated in the gyroscope 9; the real-time data measured by the gyroscope 9 are sent to the lower computer 5 through the serial port, and the lower computer 5, after judging, extracts the relevant orientation data and sends them up to the host computer 1.
  • The orientation data extracted by the host computer 1 are generally angle information, that is, the angle between the machine and geomagnetic north.
  • Once the host computer 1 receives the orientation data, the machine's coordinates can be described in real time by combining them with the mileage data.
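Combining the heading angle with the mileage data amounts to dead reckoning, which can be sketched as a single update step. The frame conventions (heading measured from a fixed reference direction, x along that direction) are assumptions; the source only says coordinates are derived from orientation plus mileage.

```python
import math

def dead_reckon(x, y, heading_deg, distance_mm):
    """Update the robot's map coordinates from one odometry interval.

    heading_deg -- gyroscope/compass angle between the machine and the
                   geomagnetic reference direction (per the text)
    distance_mm -- mileage accumulated since the last update
    """
    h = math.radians(heading_deg)
    return x + distance_mm * math.cos(h), y + distance_mm * math.sin(h)

# Drive 1000 mm while heading 90 degrees from the reference direction:
nx, ny = dead_reckon(0.0, 0.0, 90.0, 1000.0)
```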
  • In the vector diagram built from the odometer 8 and the gyroscope 9, one vector runs from the start position to the obstacle, another runs from the motion position to the obstacle, and their difference is the relative vector between the robot's two positions.
  • When the robot judges that there is an obstacle nearby, it uses the motion pattern shown in the figure to avoid the obstacle.
  • The lower computer 5 includes a judgment module, and the judgment module is used by the robot to determine whether a non-movable area exists in the walking path.
  • The judgment module judges whether a non-movable area exists in the walking path according to the suspended data information and the edge data information.
  • The robot further includes a wireless module 4; the wireless module 4 is connected to the host computer 1, and the data processed by the host computer 1 are transmitted wirelessly.
  • The wireless module 4 is the robot's channel to the Internet.
  • A wireless driver is installed in the system of the host computer 1; the map data are sent up to a server, and control information sent by the terminal can also be received.
  • The various types of information involved in map construction have been described above.
  • Once constructed, the map is displayed on the terminal.
  • The host computer 1 sends the constructed map to the server as an image stream, and the server forwards it to the terminal; the whole process is very fast, so the customer sees the machine's real-time map and its actual location on the terminal. Likewise, after an operation at the terminal, the corresponding command is sent toward the robot; the robot receives the command wirelessly, then parses and executes it.
  • the obstacle avoidance system 10 is connected to the lower computer 5 for data transmission between the obstacle avoidance system 10 and the lower computer 5.
  • The robot of the present invention is mainly intended for service robots, especially sweeping robots, and is suitable for indoor use, including complex indoor environments with glass, glare, stairs, stool legs, and the like.
  • For a duplex building, a map can only be constructed separately for each floor.
  • The method for a robot to construct a map with a depth camera and an obstacle avoidance system includes the following steps:
  • Step S101: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space;
  • Step S102: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;
  • Step S103: the robot determines whether a non-movable area exists in the walking path; if not, it returns to step S102, and if so, it proceeds to step S104;
  • Step S104: the robot marks the data information of the non-movable area on the preliminary map.
  • The method for a robot to construct a map with a depth camera and an obstacle avoidance system includes the following steps:
  • Step S201: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space;
  • Step S202: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;
  • Step S203: the robot determines whether a non-movable area exists in the walking path; if not, it returns to step S202, and if so, it proceeds to step S204;
  • Step S204: the robot marks the data information of the non-movable area on the preliminary map;
  • Step S205: the constructed map information is displayed on the terminal display;
  • Step S206: the constructed map information is transmitted wirelessly through the wireless module.
  • A method for a robot to construct a map with a depth camera and an obstacle avoidance system includes the following steps:
  • Step S301: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space;
  • Step S302: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;
  • Step S303: the infrared ground detection module of the robot determines, according to the acquired ground detection information, whether a non-movable area exists in the traveling path; if not, the process returns to step S302, and if so, it proceeds to step S304;
  • Step S304: the edge data information of the non-movable area is marked on the preliminary map.
  • A method for a robot to construct a map with a depth camera and an obstacle avoidance system includes the following steps:
  • Step S401: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space;
  • Step S402: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;
  • Step S403: the obstacle detection module of the robot determines, according to the acquired collision data, whether a non-movable area exists in the walking path; if not, it returns to step S402, and if so, it proceeds to step S404;
  • Step S404: the edge data information of the non-movable area is marked on the preliminary map;
  • Step S405: it is determined whether the edge data information forms a closed ring; if not, the process returns to step S402, and if so, it proceeds to the next step;
  • Step S406: the map construction is completed.
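The closed-ring test of step S405 can be sketched on a grid of marked edge cells. The patent only says the edge data must form a closed ring; the particular heuristic below — every edge cell has at least two edge neighbours, so the boundary has no loose ends — is an assumption, and real implementations could trace the boundary instead.

```python
def edges_form_closed_ring(edge_cells):
    """Rough check that marked edge cells enclose the map (step S405).

    edge_cells -- set of (col, row) grid cells flagged as edge data.
    A cell with fewer than two edge neighbours is a loose end, so the
    boundary cannot be a closed ring (assumed heuristic).
    """
    def neighbours(c):
        x, y = c
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

    return all(sum(n in edge_cells for n in neighbours(c)) >= 2
               for c in edge_cells)

# A 3x3 square outline is closed; removing one mid-edge cell opens it:
ring = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
closed = edges_form_closed_ring(ring)
broken = edges_form_closed_ring(ring - {(1, 0)})
```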

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

一种机器人及其以深度摄像头和避障***构建地图的方法,其中,机器人包括上位机(1)、深度摄像头(2)、终端显示器(3)、下位机(5)、红外线地检模块(6)、障碍物检测模块(7)、里程计(8)、陀螺仪(9)。通过深度摄像头(2)、红外线地检模块(6)以及障碍物检测模块(7)的结合来构建地图,实现了机器人对环境的无盲区地图构建,使得机器人在根据构建的地图行走时能有效避开障碍物体。

Description

A robot and a method for constructing a map thereof with a depth camera and an obstacle avoidance system

Technical Field

[0001] The present invention relates to the field of robot technology, and in particular to a robot and a method for constructing a map thereof with a depth camera and an obstacle avoidance system.
Background Art

[0002] Intelligent robots are used more and more widely in daily life; they can help humans complete activities in life and production. Achieving flexible, efficient, and intelligent movement is an important step toward robot intelligence, so the autonomous navigation capability reflects how intelligent a robot is. Map creation, localization, and path planning are the three key elements of autonomous navigation. For example, to travel from place A to place B, you must first know where A is; second, you must know where B is and how A and B relate within the surrounding environment; only then can you use this information to get from A to B. At present, the more common real-time localization and mapping technologies for intelligent robots fall into two broad categories: FastSLAM and VSLAM. FastSLAM is generally implemented with a laser rangefinder or sonar, while VSLAM uses visual sensors. Because FastSLAM relies on sensors such as lasers and sonar, the robot cannot recognize certain special environments and can only estimate the overall situation by prediction. VSLAM uses visual sensors; many kinds are on the market, with differing principles. Overall, using visual sensors for autonomous navigation can effectively overcome the problems that arise in FastSLAM.

[0003] We briefly describe how common robots achieve autonomous navigation. First, the robot's pose must be described accurately from the a-priori environment combined with the robot's current position information and sensor inputs. This mainly involves relative position and absolute position. Absolute positioning mainly uses navigation beacons, active or passive recognition, map matching, or satellite navigation, and its accuracy is high. But for a household robot in a home environment, what we need more is relative position. Relative positioning determines the robot's current position by measuring its distance and direction relative to the initial position; this is usually what we call the navigation algorithm. However, current autonomous navigation algorithms often ignore the complex situations of suspended (drop-off) floors and household obstacles in the home environment, so an indoor environment map constructed without considering these complexities imposes severe limits when the robot moves.

Technical Problem

[0004] Current autonomous navigation algorithms often ignore the complex situations of suspended (drop-off) floors and household obstacles in the home environment; an indoor environment map constructed without considering these complexities imposes severe limits when the robot moves.
Solution to the Problem

Technical Solution

[0005] In view of the deficiencies of the prior art, the present invention aims to solve the problem that existing robot map-construction methods do not take into account the complex cases of suspended floors and household obstacles, and provides a robot and a method for constructing a map thereof with a depth camera and an obstacle avoidance system, in which an infrared ground detection module and an obstacle detection module are combined as a supplement to the depth camera to construct a map, so that the constructed map has no blind spots and is more complete.

[0006] To achieve the above object, the present invention adopts the following technical solution:
[0007] A robot includes a depth camera and an obstacle avoidance system, and further includes a host computer, a terminal display, a lower computer, an infrared ground detection module, an obstacle detection module, an odometer, and a gyroscope.

[0008] The host computer is connected to the lower computer and is used to process data between the host computer and the lower computer.

[0009] The depth camera is connected to the host computer and is used to capture surrounding environment information.

[0010] The terminal display is connected to the host computer and is used to display data processed by the host computer.

[0011] The infrared ground detection module is connected to the lower computer and is used to detect suspended data information of the surrounding environment.

[0012] The obstacle detection module is connected to the lower computer and is used to detect edge data information of the surrounding environment.

[0013] The odometer and the gyroscope are connected to the lower computer and are used to record robot travel information.

[0014] The lower computer includes a judgment module, and the judgment module is used by the robot to determine whether a non-movable area exists in the walking path.

[0015] A wireless module is further included; the wireless module is connected to the host computer and is used to wirelessly transmit the data processed by the host computer and to receive control signals from a terminal.
[0016] To achieve the above object, the present invention adopts the following technical solution.

[0017] A method for a robot to construct a map with a depth camera and an obstacle avoidance system includes the following steps:

[0018] Step 1: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space; [0019] Step 2: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;

[0020] Step 3: the robot determines whether a non-movable area exists in the walking path; if not, it returns to Step 2, and if so, it proceeds to Step 4;

[0021] Step 4: the robot marks the data information of the non-movable area on the preliminary map.

[0022] In Step 2, the robot captures the surrounding environment information of the walking space through the depth camera; specifically, the depth information Z from the robot to an external obstacle is acquired, and the x and y of the depth image of the first obstacle information are calculated from the depth information Z using the triangle formulas, where x represents the distance from the external obstacle to the robot origin and y represents the height of the external obstacle.

[0023] In Step 3, the robot determining whether a non-movable area exists in the walking path includes the robot's infrared ground detection module determining, according to the acquired ground detection information, whether a non-movable area exists in the walking path; if so, the suspended data information of the non-movable area is marked on the preliminary map.

[0024] In Step 3, the robot determining whether a non-movable area exists in the walking path includes the robot's obstacle detection module determining, according to the acquired collision data, whether a non-movable area exists in the walking path; if so, the edge data information of the non-movable area is marked on the preliminary map.

[0025] As a preferred solution, the method further includes a step of determining whether the edge data information forms a closed ring; if not, returning to Step 2, and if so, completing the map construction.

[0026] As a preferred solution, the method further includes a step of displaying the constructed map information on the terminal display.

[0027] As a preferred solution, the method further includes a step of transmitting the constructed map information wirelessly through the wireless module.
Advantageous Effects of the Invention

Advantageous Effects

[0028] The advantageous effects of the robot and its method for constructing a map with a depth camera and an obstacle avoidance system set forth in the present invention are as follows:

[0029] Compared with the prior art, the method of constructing a map by combining the robot's depth camera, infrared ground detection module, and obstacle detection module achieves blind-spot-free mapping of the environment, so that the robot can effectively avoid obstacles when walking according to the constructed map.

Brief Description of the Drawings

Description of the Drawings

[0030] FIG. 1 is a structural block diagram of a robot according to an embodiment of the present invention.

[0031] FIG. 2 is a flow chart of a method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.

[0032] FIG. 3 is another flow chart of the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.

[0033] FIG. 4 is a third flow chart of the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.

[0034] FIG. 5 is a fourth flow chart of the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.

[0035] FIG. 6 is a vector diagram of the motion mode used to avoid obstacles in the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention.

[0036] FIG. 7 is a structural diagram of the camera of the present invention obtaining a depth image.

[0037] FIG. 8 is a schematic diagram of the effective angle intercepted within the depth camera's shooting angle and the rotation manner.
Best Mode for Carrying Out the Invention

Best Mode of the Invention

[0038] The present invention is further described below with reference to the drawings and specific embodiments.

[0039] Referring to FIG. 1, an embodiment of the present invention provides a robot, including a host computer 1, a depth camera 2, a terminal display 3, a wireless module 4, a lower computer 5, an infrared ground detection module 6, an obstacle detection module 7, an odometer 8, a gyroscope 9, and an obstacle avoidance system 10;

[0040] the host computer 1 is connected to the lower computer 5 and is used to process data between the host computer 1 and the lower computer 5;

[0041] the host computer and the lower computer of the robot use a serial port as the communication channel. [0042] The depth camera 2 is connected to the host computer 1 and is used to capture surrounding environment information;

[0043] the sensor in the depth camera 2 generates a depth-of-field image stream at 30 frames per second. We analyze the depth-of-field images with an existing image-processing library such as OpenCV, find the common feature points in the image stream, and stitch matching feature points together to build a real-time image of the surrounding environment. The feature points referred to here can be understood as right angles, straight lines, arcs, and the like.

[0044] The processor of the host computer 1 runs an operating system such as Linux; a SLAM program is installed on the operating system, and the driver of the depth camera 2 is also installed in the operating system.

[0045] The depth camera 2 transmits the depth-of-field image data to the host computer 1; after the SLAM program reads the data, it analyzes and uses them.

[0046] Specifically, the depth information Z from the robot to an external obstacle is acquired, and the x and y of the depth image of the first obstacle information are calculated from the depth information Z using the triangle formulas, where x represents the distance from the external obstacle to the robot origin and y represents the height of the external obstacle.

[0047] Referring to FIG. 7, the depth camera 2 of the robot of the present invention uses a resolution of 320*200, and the angle subtended from the depth camera 2 across the full resolution is 110 degrees, so the angle α corresponding to a pixel position p is α = (p/320)×110 degrees; the triangle formulas are then y = Z×cosα, with Z taken as the hypotenuse, and x = Z×sinα.

[0048] Referring to FIG. 8, the robot of the present invention rotates the depth camera 2 several times. Without rotating, the angle that can be captured is 110 degrees; for better accuracy, we read only the middle 90 degrees. The remaining angles of the 360 degrees are covered in four rotations of 90 degrees each; after each rotation, the triangle formulas above yield the x and y values in the depth image of the new obstacle information, until the full 360 degrees have been covered, after which all image information is added to the map being built. During this process, the required angle can be rotated precisely according to the gyroscope.
[0049] The terminal display 3 is connected to the host computer 1 and displays the data processed by the host computer 1;

[0050] the SLAM algorithm in the host computer 1 superimposes the built environment images into a complete indoor map. Taking the construction of a 2D map as an example: after the SLAM algorithm has constructed the map, we project the map vertically onto a plane; all edges are enclosed by dark-brown dots, obstacles are shown in red, and blank areas are shown in light gray, so the indoor space appears as a map described by outlines. The colors can be freely defined by the user.

[0051] The infrared ground detection module 6 is connected to the lower computer 5 and detects the suspended data information of the surrounding environment.

[0052] The infrared ground detection modules 6 are installed at the bottom of the robot; each module is a circuit composed of one infrared emitting tube and one or more infrared receiving tubes. The module sends infrared light through the emitting tube, the receiving tube acquires AD-value data, and a common algorithm such as differencing or averaging estimates the current effective value, which is compared with a fixed set value determined experimentally to decide whether the floor is suspended; if so, the suspended data information is marked.

[0053] Preferably, there are several groups of infrared ground detection modules 6 at the bottom of the machine; from the marked suspended data information and the installation positions, the machine determines in which direction the drop-off lies. When a drop-off occurs, the suspended data information is uploaded to the lower computer 5 through the serial port; the lower computer 5 passes it to the host computer 1, and the host computer 1 records the coordinate information at that moment and marks the suspended data information in the map.

[0054] The obstacle detection module 7 is connected to the lower computer 5 and detects the edge data information of the surrounding environment.

[0055] The obstacle detection module 7 is divided into two parts: an infrared wall detection module and a mechanical collision module. The mechanical collision module has two micro switches mounted under the front outer casing of the robot; as soon as the robot hits something, the two micro switches are pressed and the system detects the corresponding edge data information. The mechanical collision module is intended to detect lateral stool legs and the like, and its detection effect is obvious in actual operation. When a collision is detected, the lower computer 5 immediately sends this edge data information, and the host computer 1 immediately records the coordinate information at that moment and marks the edge data information in the map. The working principle of the infrared wall detection module is the same as that of the infrared ground detection module 6: it also uses one infrared emitting tube and several infrared receiving tubes and judges from the AD values read whether an event has occurred, so it is not described again here. The infrared wall detection module can detect obstacle information. Several infrared wall detection modules are installed around the robot; by reading which infrared wall detection module responds, the machine can identify in which direction an obstacle lies, mark this edge data information, and send it up to the host computer 1 to be marked on the map.

[0056] When the host computer 1 judges that all the marked edge data information forms a closed ring, the map construction is completed.

[0057] The odometer 8 and the gyroscope 9 are connected to the lower computer 5 and record the robot's travel information.

[0058] In building the SLAM algorithm, an odometer is indispensable; the odometer is a counter that measures the distance the machine has walked. The robot of the present invention adds a photoelectric detection circuit to the motor. The machine includes a left wheel, a right wheel, and a front wheel; by detecting the pulse data of the left and right wheels and combining the motor's pulse count with the gearbox parameters of the wheels, the distance traveled by the machine can be calculated accurately. The distance measured by the odometer is in millimeters; the left- and right-wheel mileage data are sent up through the serial port together with the dynamic timing data, so that the lower computer 5 can analyze the traveling state. An accelerometer and an azimuth meter are integrated in the gyroscope 9; the real-time data measured by the gyroscope 9 are sent to the lower computer 5 through the serial port, and the lower computer 5, after judging, extracts the relevant orientation data and sends them up to the host computer 1. The orientation data analyzed and extracted by the host computer 1 are generally angle information, that is, the angle between the machine and geomagnetic north. Once the host computer 1 receives the orientation data, the machine's coordinates can be described in real time by combining them with the mileage data.
[0059] Referring to FIG. 6, in the vector diagram built from the odometer 8 and the gyroscope 9:

[0060] one vector runs from the start position to the obstacle;

[0061] another vector runs from the motion position to the obstacle;

[0062] and their difference is the relative vector between the robot's two positions.
[0063] When the robot judges that there is an obstacle nearby, it uses the motion pattern shown in the figure to avoid the obstacle.

[0064] The lower computer 5 includes a judgment module, and the judgment module is used by the robot to determine whether a non-movable area exists in the walking path.

[0065] The judgment module judges whether a non-movable area exists in the walking path according to the suspended data information and the edge data information.

[0066] The robot further includes a wireless module 4; the wireless module 4 is connected to the host computer 1 and is used to wirelessly transmit the data processed by the host computer 1.

[0067] The wireless module 4 is the robot's channel to the Internet. A wireless driver is installed in the system of the host computer 1; the map data are sent up to a server, and control information sent by the terminal can also be received. The various types of information involved in map construction have been described above. Once constructed, the map is to be displayed on the terminal: the host computer 1 sends the constructed map to the server as an image stream, and the server forwards it to the terminal; the whole process is very fast, so the customer sees the machine's real-time map and real-time location on the terminal. Likewise, after an operation at the terminal, the corresponding command is sent toward the robot; the robot receives the command wirelessly, then parses and executes it.

[0068] The obstacle avoidance system 10 is connected to the lower computer 5 for data transmission between the obstacle avoidance system 10 and the lower computer 5.

[0069] The robot of the present invention is mainly intended for service robots, especially sweeping robots, and is suitable for indoor use, including complex indoor environments with glass, glare, stairs, stool legs, and the like; for a duplex building, a map can only be constructed separately for each floor.
[0070] Referring to FIG. 2, the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention includes the following steps:

[0071] Step S101: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space; [0072] Step S102: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;

[0073] Step S103: the robot determines whether a non-movable area exists in the walking path; if not, it returns to step S102, and if so, it proceeds to step S104;

[0074] Step S104: the robot marks the data information of the non-movable area on the preliminary map.
[0076] Referring to FIG. 3, the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention includes the following steps:

[0077] Step S201: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space; [0078] Step S202: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;

[0079] Step S203: the robot determines whether a non-movable area exists in the walking path; if not, it returns to step S202, and if so, it proceeds to step S204;

[0080] Step S204: the robot marks the data information of the non-movable area on the preliminary map;

[0081] Step S205: the constructed map information is displayed on the terminal display;

[0082] Step S206: the constructed map information is transmitted wirelessly through the wireless module.

[0083] Referring to FIG. 4, the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention includes the following steps: [0084] Step S301: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space; [0085] Step S302: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;

[0086] Step S303: the infrared ground detection module of the robot determines, according to the acquired ground detection information, whether a non-movable area exists in the walking path; if not, the process returns to step S302, and if so, it proceeds to step S304.

[0087] Step S304: the edge data information of the non-movable area is marked on the preliminary map.

[0088] Referring to FIG. 5, the method for a robot to construct a map with a depth camera and an obstacle avoidance system according to an embodiment of the present invention includes the following steps:

[0089] Step S401: the robot traverses the space to be walked and establishes a map coordinate system corresponding to the walking space; [0090] Step S402: the robot captures the surrounding environment information of the walking space through the depth camera, and establishes a preliminary map of the walking space in the map coordinate system according to the surrounding environment information;

[0091] Step S403: the obstacle detection module of the robot determines, according to the acquired collision data, whether a non-movable area exists in the walking path; if not, it returns to step S402, and if so, it proceeds to step S404.

[0092] Step S404: the edge data information of the non-movable area is marked on the preliminary map;

[0093] Step S405: it is determined whether the edge data information forms a closed ring; if not, the process returns to step S402, and if so, it proceeds to the next step;

[0094] Step S406: the map construction is completed.

[0095] The above method for a robot to construct a map with a depth camera and an obstacle avoidance system according to the embodiments of the present invention and the above robot are based on the same conception, so each component is not explained again one by one here.

[0096] The above are merely preferred embodiments of the present invention and do not limit the technical scope of the present invention in any way; therefore, any minor modifications, equivalent changes, and refinements made to the above embodiments according to the technical essence of the present invention still fall within the scope of the technical solutions of the present invention.

Claims

Claims

[Claim 1] A robot, comprising a depth camera and an obstacle avoidance system, characterized by further comprising a host computer, a terminal display, a lower computer, an infrared ground detection module, an obstacle detection module, an odometer, and a gyroscope;
the host computer is connected to the lower computer and is used to process data between the host computer and the lower computer;
the depth camera is connected to the host computer and is used to capture surrounding environment information; the terminal display is connected to the host computer and is used to display the data processed by the host computer;
the infrared ground detection module is connected to the lower computer and is used to detect suspended data information of the surrounding environment;
the obstacle detection module is connected to the lower computer and is used to detect edge data information of the surrounding environment;
the odometer and the gyroscope are connected to the lower computer and are used to record robot travel information.

[Claim 2] The robot according to claim 1, characterized in that the lower computer includes a judgment module, and the judgment module is used by the robot to determine whether a non-movable area exists in the walking path.

[Claim 3] The robot according to claim 1, characterized by further comprising a wireless module, the wireless module being connected to the host computer and used to wirelessly transmit the data processed by the host computer and to receive control signals from a terminal.
[权利要求 4] 一种机器人以深度摄像头和避障***构建地图的方法, 包括以下步骤 步骤一, 机器人遍历待行走空间, 建立与行走空间相对应的地图坐标 系;
步骤二, 机器人通过深度数据摄像头摄取行走空间周围环境信息, 依 据所述行走周围环境信息在所述地图坐标系中建立行走空间的初步地 图;
步骤三, 机器人判断行走路径中是否存在不可运动区域, 若否, 则返 回步骤二, 若是, 则进入步骤四;
步骤四, 机器人在初步地图上标记不可运动区域的数据信息。
[Claim 5] The method according to claim 4 for a robot to construct a map with a depth camera and an obstacle avoidance system, characterized in that in step two, when the robot captures environmental information around the travel space through the depth data camera, specifically, the depth information Z from the robot to an external obstacle is acquired, and x and y of the depth image of the first obstacle information are calculated from the depth information Z using trigonometric formulas, where x denotes the distance from the external obstacle to the robot origin and y denotes the height of the external obstacle.
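Claim 5's recovery of x and y from the depth Z follows from similar triangles in a pinhole camera model. A minimal sketch under assumed parameters (a level, forward-facing camera with vertical focal length `fy_px`, principal point row `cy_px`, and mounting height `cam_height_mm`; none of these appear in the claim):

```python
def obstacle_x_y(Z_mm, v_px, fy_px, cy_px, cam_height_mm):
    """Illustrative similar-triangles recovery of claim 5's x and y.

    Z_mm: measured depth to the obstacle point; v_px: pixel row of the point;
    fy_px / cy_px: assumed vertical focal length and principal-point row;
    cam_height_mm: assumed mounting height of the camera above the floor.
    """
    # Height of the point relative to the optical axis (pinhole model):
    Y_mm = (cy_px - v_px) * Z_mm / fy_px
    # With a level camera, the forward depth is the ground distance to the
    # robot origin, and the obstacle height is measured from the floor:
    x_mm = Z_mm
    y_mm = cam_height_mm + Y_mm
    return x_mm, y_mm
```

A tilted camera would add a rotation by the tilt angle before reading off x and y; the claim's "trigonometric formulas" likely cover that general case.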
[Claim 6] The method according to claim 4 for a robot to construct a map with a depth camera and an obstacle avoidance system, characterized in that in step three, determining whether an impassable region exists in the travel path comprises the robot's infrared ground-detection module determining, based on the acquired ground-detection information, whether an impassable region exists in the travel path; if so, drop-off data information of the impassable region is marked on the preliminary map.
[Claim 7] The method according to claim 4 for a robot to construct a map with a depth camera and an obstacle avoidance system, characterized in that in step three, determining whether an impassable region exists in the travel path comprises the robot's obstacle detection module determining, based on the acquired collision data, whether an impassable region exists in the travel path; if so, edge data information of the impassable region is marked on the preliminary map.
[Claim 8] The method according to claim 7 for a robot to construct a map with a depth camera and an obstacle avoidance system, characterized by further comprising a step of determining whether the edge data information forms a closed loop; if not, return to step two; if so, the map construction is completed.
[Claim 9] The method according to claim 4 for a robot to construct a map with a depth camera and an obstacle avoidance system, characterized by further comprising step five: displaying the constructed map information on the terminal display.
[Claim 10] The method according to claim 4 for a robot to construct a map with a depth camera and an obstacle avoidance system, characterized by further comprising step six: wirelessly transmitting the constructed map information via the wireless module.
PCT/CN2017/070718 2017-01-10 2017-01-10 A robot and a method for constructing a map with a depth camera and an obstacle avoidance system WO2018129648A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/070718 WO2018129648A1 (zh) 2017-01-10 2017-01-10 A robot and a method for constructing a map with a depth camera and an obstacle avoidance system


Publications (1)

Publication Number Publication Date
WO2018129648A1 true WO2018129648A1 (zh) 2018-07-19

Family

ID=62839188


Country Status (1)

Country Link
WO (1) WO2018129648A1 (zh)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510039B1 (en) * 2010-10-05 2013-08-13 The Boeing Company Methods and apparatus for three-dimensional localization and mapping
CN104865965A (zh) * 2015-05-20 2015-08-26 深圳市锐曼智能装备有限公司 Obstacle avoidance control method and system for a robot combining a depth camera with ultrasound
CN105487535A (zh) * 2014-10-09 2016-04-13 东北大学 ROS-based indoor environment exploration system and control method for a mobile robot
CN105676845A (zh) * 2016-01-19 2016-06-15 中国人民解放军国防科学技术大学 Obstacle avoidance method for an intelligent security service robot in complex environments, and security service robot
CN105798922A (zh) * 2016-05-12 2016-07-27 中国科学院深圳先进技术研究院 A domestic service robot
CN106855411A (zh) * 2017-01-10 2017-06-16 深圳市极思维智能科技有限公司 A robot and a method for constructing a map with a depth camera and an obstacle avoidance system


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210280A (zh) * 2019-03-01 2019-09-06 北京纵目安驰智能科技有限公司 Beyond-visual-range perception method, system, terminal, and storage medium
CN110210280B (zh) * 2019-03-01 2024-04-19 北京纵目安驰智能科技有限公司 Beyond-visual-range perception method, system, terminal, and storage medium
WO2021103987A1 (zh) * 2019-11-29 2021-06-03 深圳市杉川机器人有限公司 Sweeping robot control method, sweeping robot, and storage medium
CN112327878A (zh) * 2020-11-25 2021-02-05 珠海市一微半导体有限公司 TOF-camera-based obstacle classification and obstacle avoidance control method
CN112327878B (zh) * 2020-11-25 2022-06-10 珠海一微半导体股份有限公司 TOF-camera-based obstacle classification and obstacle avoidance control method
CN112605999A (zh) * 2020-12-22 2021-04-06 杭州北冥星眸科技有限公司 Robot obstacle detection method and system based on infrared depth camera technology
CN112605999B (zh) * 2020-12-22 2022-01-18 杭州北冥星眸科技有限公司 Robot obstacle detection method and system based on infrared depth camera technology
CN113093759A (zh) * 2021-04-08 2021-07-09 中国科学技术大学 Robot formation construction method and system based on multi-sensor information fusion
WO2022240256A1 (ko) * 2021-05-14 2022-11-17 (주)로보티즈 Distance-transform-map-based reactive navigation for autonomous mobile robots
KR20220154996A (ko) * 2021-05-14 2022-11-22 (주)로보티즈 Distance-transform-map-based reactive navigation for autonomous mobile robots
KR102551275B1 (ko) * 2021-05-14 2023-07-04 (주)로보티즈 Distance-transform-map-based reactive navigation for autonomous mobile robots

Similar Documents

Publication Publication Date Title
WO2018129648A1 (zh) A robot and a method for constructing a map with a depth camera and an obstacle avoidance system
CN106855411A (zh) A robot and a method for constructing a map with a depth camera and an obstacle avoidance system
US11865708B2 Domestic robotic system
WO2021254367A1 (zh) Robot system and positioning and navigation method
EP3507572B1 (en) Apparatus and method for providing vehicular positioning
Park et al. A BIM and UWB integrated mobile robot navigation system for indoor position tracking applications
WO2017020641A1 (zh) Indoor mobile robot pose measurement system and measurement method based on optoelectronic scanning
CN111149072A (zh) Magnetometer for robot navigation
KR20160146379A Mobile robot and control method thereof
WO2020151663A1 (zh) Vehicle positioning apparatus, system and method, and vehicle
WO2019037668A1 (zh) Self-moving robot, traveling method thereof, and method for displaying obstacle distribution
US11465275B2 Mobile robot and method of controlling the same and mobile robot system
CN113168180B (zh) Mobile device and object detection method thereof
WO2013049597A1 Method and system for three dimensional mapping of an environment
JP2014085940A Plane detection device and autonomous mobile device provided with the same
US20240042621A1 Autonomous working system, method and computer readable recording medium
TWI739255B Mobile robot
WO2019001237A1 (zh) Mobile electronic device and method in the mobile electronic device
KR101853127B1 Movable marking system, method of controlling movable marking apparatus, and computer-readable recording medium
CN103472434A Robot sound source localization method
CN112308033A Obstacle collision warning method based on depth data, and vision chip
Poomarin et al. Automatic docking with obstacle avoidance of a differential wheel mobile robot
Silver et al. Arc carving: obtaining accurate, low latency maps from ultrasonic range sensors
KR20200027069A Robot cleaner and control method of the robot cleaner
KR102318841B1 Movable marking system, method of controlling movable marking apparatus, and computer-readable recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 17891109; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 18.11.2019.
122 Ep: pct application non-entry in european phase
Ref document number: 17891109; Country of ref document: EP; Kind code of ref document: A1