WO2022127450A1 - Method, apparatus, device, and storage medium for judging elevator space state - Google Patents

Method, apparatus, device, and storage medium for judging elevator space state

Info

Publication number
WO2022127450A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
elevator
dimensional
map
connected domain
Prior art date
Application number
PCT/CN2021/129662
Other languages
English (en)
French (fr)
Inventor
刘勇
张涛
黄寅
吴翔
陈俊伟
Original Assignee
深圳市普渡科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市普渡科技有限公司
Publication of WO2022127450A1

Links

Images

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66B - ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B1/00 - Control systems of elevators in general
    • B66B1/02 - Control systems without regulation, i.e. without retroactive action
    • B66B1/06 - Control systems without regulation, i.e. without retroactive action, electric
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66B - ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B1/00 - Control systems of elevators in general
    • B66B1/34 - Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66B - ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B5/00 - Applications of checking, fault-correcting, or safety devices in elevators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the field of robotics, and in particular to a method, apparatus, and device for judging an elevator space state, and a computer-readable storage medium.
  • when an intelligent mobile robot rides an elevator autonomously, it needs to sense whether the area in front of the robot is the elevator entrance, whether the elevator car door is open, and whether the interior space of the opened elevator is sufficient to accommodate the robot.
  • the inventor realized that in the traditional solution, the robot perceives the elevator space state as it would an ordinary scene, that is, as ordinary obstacle recognition.
  • however, the elevator has a certain speciality; once the elevator space state is abnormal, rashly triggering the operation of the robot entering the elevator is dangerous, and the safety is low.
  • the present application provides a method and apparatus for judging an elevator space state, and a storage medium for a computer device.
  • a method for judging an elevator space state including:
  • if the robot is aligned with the elevator entrance, acquiring the image information of the area in front of the robot and the projection information of the robot itself;
  • determining the spatial state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  • a device for judging an elevator space state including:
  • a first judgment module for judging whether the robot is aligned with the elevator entrance
  • an acquisition module configured to acquire the image information of the area in front of the robot and the projection information of the robot itself if the robot is aimed at the elevator entrance;
  • the second judging module is used for judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  • a robot, comprising a memory and a processor, the memory having computer-readable instructions stored thereon that are executable on the processor; when the processor executes the computer-readable instructions, the steps of the above-mentioned elevator space state judgment method are implemented.
  • One or more computer-readable storage media storing computer-readable instructions, when the computer-readable instructions are executed by one or more processors, implement the steps of the above-mentioned elevator space state determination method.
  • FIG. 1 is a schematic flowchart of a method for judging an elevator space state in an embodiment of the present application
  • Fig. 2 is a schematic flow chart of step S30 in Fig. 1;
  • Fig. 3 is a schematic flow chart of step S35 in Fig. 2;
  • Fig. 4 is a schematic flow chart of step S10 in Fig. 1;
  • Fig. 5 is a schematic diagram of the "几"-shaped template in the embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an elevator space state judging device in an embodiment of the present application.
  • FIG. 7 is another schematic structural diagram of the elevator space state judgment device in the embodiment of the present application.
  • the embodiments of the present application provide a method for judging an elevator space state and a corresponding elevator space state judging device, which are applied to various intelligent robots with mobile functions.
  • the embodiments of the present application correspondingly provide a method for judging the space state of an elevator, which will be described in detail below.
  • FIG. 1 is a schematic flowchart of an embodiment of a method for judging an elevator space state in the application, including the following steps:
  • nowadays, intelligent robots are used in restaurants, offices, exhibition halls, and other application scenarios to complete specific tasks; in these scenarios, robots are usually required to take elevators.
  • in order to ensure accuracy and improve safety when the robot takes the elevator, the robot needs to be aligned with the elevator entrance; after it is judged that the elevator can be entered, the robot is controlled to move into the elevator according to the aligned elevator entrance to complete the ride. Therefore, when the robot needs to take the elevator, it is necessary to first determine whether the robot is aligned with the elevator entrance.
  • S30 Determine the spatial state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  • for steps S20-S30, after it is judged that the robot is aligned with the elevator entrance, a further safety judgment is required in order to improve the safety of the robot entering the elevator, so the image information of the area in front of the robot and the projection information of the robot itself are further acquired.
  • the image information of the area in front of the robot can reflect the obstacles in the area in front of the robot, and since the robot has been aligned with the elevator entrance, the image information of the current area in front of the robot can reflect the situation of the elevator entrance area.
  • the projection information of the robot itself can reflect the size of the robot itself. Therefore, according to the image information of the area in front of the robot and the projection information of the robot itself, it can be further determined whether the robot will encounter obstacles in the process of entering the elevator, and the space state of the elevator can be judged, so that whether the elevator can be entered is judged accurately, which effectively improves the safety of the robot entering the elevator.
  • the image information of the elevator entrance area includes the two-dimensional obstacle map corresponding to the obstacles in front of the robot,
  • and the projection information of the robot itself includes the two-dimensional projection of the robot.
  • the spatial state of the elevator can be determined according to the two-dimensional obstacle map corresponding to the obstacles in front of the robot and the two-dimensional projection of the robot, which will be described below.
  • the image information of the elevator entrance area includes a two-dimensional obstacle map corresponding to the obstacle in front of the robot, and the projection information of the robot itself includes the two-dimensional projection of the robot.
  • step S30, that is, determining the spatial state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself, specifically includes the following steps:
  • the obtained two-dimensional obstacle map is binarized to obtain the target binary map.
  • the two-dimensional obstacle map is a top view formed by two-dimensional projection of point cloud data of obstacles in front of the robot. It can be seen that the top view reflects the distribution of obstacles in front of the robot. At this time, the robot is aligned with the elevator entrance, so the top view reflects the elevator situation in front of the robot. It should be noted that, in some embodiments, the two-dimensional obstacle map may also be a top view formed by two-dimensional projection of other three-dimensional images of obstacles in front of the robot.
  • a point cloud data collector may be arranged at a preset position of the robot, and the point cloud data collector is used to collect point cloud data of the object in front of the robot.
  • as an example, an RGBD camera and a lidar are used as the point cloud data collector to collect the point cloud data corresponding to the object in front of the robot, which is not specifically limited.
  • the point cloud data of the obstacles in front of the robot can be obtained in real time by using the point cloud data collector arranged on the robot.
  • the point cloud data of the obstacle is a set of vectors of the obstacle in a three-dimensional coordinate system.
  • the point cloud data of the obstacle reflects the spatial coordinate position of the obstacle in front of the robot; that is to say, it can reflect the spatial information of the obstacles inside the elevator.
  • after the point cloud data collector collects the point cloud data of the object directly in front of the robot, the point cloud data can be projected onto a plane, that is, two-dimensionally projected to form a top view, so as to obtain the two-dimensional obstacle map.
  • the two-dimensional obstacle map reflects the two-dimensional relationship of obstacles in front of the robot.
  • the two-dimensional obstacle map is binarized to obtain the target binary map.
  • the binarization of the two-dimensional obstacle map converts the gray value of the points on the two-dimensional obstacle map to 0 or 255, so as to obtain a binarized image that can reflect the overall and local characteristics of the image.
  • the target binary map corresponding to the two-dimensional obstacle map may be obtained in various ways, which are not specifically limited in this application and are not described one by one. Specifically, in the embodiment of the present application, the pixel positions with obstacle information in the two-dimensional obstacle map are marked as "0", and the positions without obstacle information are marked as "1", so as to obtain the above-mentioned target binary map.
  • S32 Mark the connected domain of the target binary graph to obtain each connected domain of the target binary graph.
  • Connected Component refers to the image region (Region, Blob) composed of foreground pixels with the same pixel value and adjacent positions in the target binary image.
  • connected domain labeling refers to finding and marking each connected domain in the target binary map, so as to facilitate the division of the different regions in the target binary map; obstacle regions and non-obstacle regions can then be distinguished according to the different connected domains.
  • the circumscribed circle refers to a circle that passes through all the vertices of a polygon; the inscribed circle refers to a circle that is tangent to the sides of a polygon. It can be understood that, depending on the specific obstacles in front of the robot, each connected domain of the target binary map is usually an irregular polygon. In this embodiment of the present application, the inscribed circle of each connected domain in the target binary map needs to be determined first, so as to determine whether the position corresponding to each connected domain has enough room for the robot to stand or pass.
  • according to the radii of the inscribed circles of each connected domain, the largest inscribed circle of each connected domain is determined, and the smallest circumscribed circle of the two-dimensional projection of the robot and the radius of this smallest circumscribed circle are determined.
  • the minimum circumscribed circle of the two-dimensional projection of the robot reflects the area that the robot itself needs to occupy. Therefore, in the embodiment of the present application, by comparing the radius of the maximum inscribed circle of each connected domain with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot, the space in the elevator where the robot can stand or move can be determined.
  • S35 Determine the space state of the elevator according to the final connected domain.
  • the region where the two-dimensional projection of the robot body is located is used as the initial region, and region growing is performed with the target connected domain to obtain the final connected domain.
  • region growing refers to the process of developing groups of pixels or regions into larger regions, and it can segment connected regions with the same characteristics. The obtained final connected domain therefore reflects the obstacles and non-obstacles in front of the robot, inside and outside the elevator; through the final connected domain, it can be seen whether there are obstacles from the current position to the target connected domain. Therefore, the space state of the elevator can be judged according to the final connected domain and the two-dimensional projection of the robot, that is, whether the elevator is in an enterable state can be determined.
  • the embodiment of the present application provides a method for judging the space state of an elevator.
  • after it is judged that the robot is aligned with the elevator entrance, the two-dimensional obstacle map is first binarized to obtain a target binary map, wherein the two-dimensional obstacle map is a top view formed by two-dimensionally projecting the point cloud data of the obstacles in front of the robot;
  • connected domain labeling is performed on the target binary map to obtain each connected domain of the target binary map; the radius of the maximum inscribed circle of each connected domain is compared with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot to obtain the target connected domain where the maximum inscribed circle whose radius is greater than the preset threshold is located, so as to determine the range of positions where the robot can stand in the elevator;
  • the region where the two-dimensional projection of the robot is located is used as the initial region and region growing is performed with the target connected domain to obtain the final connected domain, and the space state of the elevator is judged according to the final connected domain and the two-dimensional projection of the robot; it can thus be further determined whether there are obstacles in the process of entering the elevator, whether the elevator can be entered is judged accurately, and the safety of the robot entering the elevator is effectively improved.
  • the spatial state of the elevator can be determined according to the image information of the area in front of the robot and the projection information of the robot itself, and there may be other implementations, which are not limited in this application, and will not be described in detail.
  • in step S35, the spatial state of the elevator is determined according to the final connected domain, which specifically includes the following steps:
  • S351 In the final connected domain, determine a target position whose distance from the center of the initial area satisfies a preset distance.
  • the embodiment of the present application provides a specific way of judging the space state of the elevator based on the final connected domain and the two-dimensional projection of the robot.
  • in this way, a target position whose distance from the center of the initial region satisfies a preset distance is first determined within the final connected domain.
  • as an example, determining the target position whose distance from the center of the initial region satisfies the preset distance means determining, in the final connected domain, the position farthest in Euclidean distance from the center of the initial region as the target position.
  • the target position refers to the position that is farthest from the center of the initial region in Euclidean distance and in the final connected domain, indicating that the actual position corresponding to the target position is a position that the robot can go to. It should be noted that the target position may also be another preset position in the final connected domain, which is not specifically limited. It should be noted that the target position is a determined position based on the final connected domain. Therefore, the target position is a position in the image, and the target position corresponds to an actual position.
  • after the target position is determined, it is judged whether the actual position corresponding to the target position is inside the elevator. If the actual position corresponding to the target position is inside the elevator, it means that the robot can move into the elevator without encountering obstacles; if the actual position corresponding to the target position is not inside the elevator, it means that the robot cannot move into the elevator because there is no space in the elevator or there are obstacles in front of it. It should be noted that, after the target position is determined, whether the actual position corresponding to the target position is inside the elevator can be judged according to the distance between the target position and the center of the initial region, and the distance between the center of the initial region and the elevator doorway, which is not described in detail here.
  • the embodiment of the present application provides a method for judging the spatial state of the elevator according to the final connected domain and the two-dimensional projection of the robot, which improves the practicability of the solution.
  • as mentioned in step S10, the embodiment of the present application needs to first determine whether the robot is aligned with the elevator entrance.
  • the embodiment of the present application provides a variety of methods for judging whether the robot is aligned with the elevator entrance, as shown in FIG. 4 .
  • judging whether the robot is aimed at the elevator entrance specifically includes the following steps:
  • the elevator entrance is generally formed by a "几"-shaped groove. Based on this feature, the application first constructs a two-dimensional "几"-shaped template map; the top view of the elevator entrance can be as shown in Figure 4.
  • S12: perform shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map.
  • after the pre-built two-dimensional "几"-shaped template map is obtained, the two-dimensional obstacle map is shape-matched against the two-dimensional "几"-shaped template map.
  • if the robot is currently aligned with the elevator entrance, the obstacle map obtained at this time is converted from the image data of the elevator entrance. Therefore, the shape of the two-dimensional obstacle map should match that of the two-dimensional "几"-shaped template map; when the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match, it can be concluded that the robot is not aligned with the elevator entrance.
  • the shape matching error degree between the shape of the elevator frame region in the two-dimensional obstacle map and the shape of the two-dimensional "几"-shaped template map is judged; when the shape matching error degree is less than a preset error threshold, it is judged that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match; when the shape matching error degree is greater than or equal to the preset error threshold, it is judged that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
  • if the shape of the elevator frame region in the two-dimensional obstacle map is a "几" shape, the shape matching error between the elevator frame region in the two-dimensional obstacle map and the two-dimensional "几"-shaped template map is low, and it is judged that the two-dimensional obstacle map matches the shape of the two-dimensional "几"-shaped template map.
  • if the shape of the elevator frame region in the two-dimensional obstacle map is approximately "几"-shaped, the shape matching error with the two-dimensional "几"-shaped template map is also low, and it is likewise judged that the two match.
  • if the shape of the elevator frame region in the two-dimensional obstacle map is a triangle, the shape matching error between the elevator frame region in the two-dimensional obstacle map and the two-dimensional "几"-shaped template map is relatively high, and it is judged that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match. It should be noted that, in the embodiment of the present application, there may be various ways of determining the shape matching error degree, which are not specifically limited and are not described one by one. After the two-dimensional obstacle map is obtained, the elevator frame region is identified by an image recognition algorithm, so that shape recognition is performed to determine the shape of the elevator frame region for the subsequent shape matching, which is not described in detail here.
  • when the shapes do not match, the forward orientation obtained from the robot's positioning is used as the initial direction of the rotation adjustment, and the robot is rotated within a limited range of rotation directions,
  • so that the robot is aligned with the elevator entrance.
  • the fine-tuning can be performed according to the above-mentioned shape matching result.
  • an elevator space state judging device is provided, and the elevator space state judging device is in one-to-one correspondence with the elevator space state judging method in the above embodiment.
  • the elevator space state judgment device includes a first judgment module 100 , an acquisition module 101 and a second judgment module 102 .
  • the detailed description of each functional module is as follows:
  • a first judgment module for judging whether the robot is aligned with the elevator entrance
  • an acquisition module configured to acquire the image information of the area in front of the robot and the projection information of the robot itself if the robot is aimed at the elevator entrance;
  • the second judging module is used for judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  • the image information of the elevator entrance area includes the two-dimensional obstacle map corresponding to the obstacle in front of the robot
  • the projection information of the robot itself includes the two-dimensional projection of the robot
  • the second judgment module is specifically used for:
  • the space state of the elevator is determined according to the final connected domain.
  • the second judgment module is also used for:
  • the second judgment module is also used for:
  • the position with the farthest Euclidean distance from the center of the initial region is determined as the target position.
  • the first judgment module is specifically used for:
  • first judgment module 100 is specifically used for:
  • when the shape matching error degree is less than a preset error threshold, it is determined that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
  • when the shape matching error degree is greater than or equal to the preset error threshold, it is determined that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
  • the second judgment module is further specifically used for:
  • when they do not match, the robot is rotationally fine-tuned within a limited range of rotation-direction variation, so that the robot is aligned with the elevator entrance.
  • the embodiment of the present application provides a device for judging the space state of an elevator.
  • after it is judged that the robot is aligned with the elevator entrance, the two-dimensional obstacle map is first binarized to obtain a target binary map, wherein the two-dimensional obstacle map is a top view formed by two-dimensionally projecting the point cloud data of the obstacles in front of the robot;
  • connected domain labeling is performed on the target binary map to obtain each connected domain of the target binary map; the radius of the maximum inscribed circle of each connected domain is compared with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot to obtain the target connected domain where the maximum inscribed circle whose radius is greater than the preset threshold is located, so as to determine the range of positions where the robot can stand in the elevator;
  • the region where the two-dimensional projection of the robot is located is used as the initial region and region growing is performed with the target connected domain to obtain the final connected domain, and the space state of the elevator is judged according to the final connected domain and the two-dimensional projection of the robot; it can thus be further determined whether there are obstacles in the process of entering the elevator, whether the elevator can be entered is judged accurately, and the safety of the robot entering the elevator is effectively improved.
  • each module in the above-mentioned elevator space state judging device can be implemented by software, hardware and combinations thereof.
  • the above modules may be embedded in or independent of the processor in the computer device in the form of hardware, or may be stored in the memory in the computer device in the form of software, so that the processor can call and execute operations corresponding to the above modules.
  • an elevator space state judging device may be a server, a controller integrated in a robot, or a robot, and its internal structure diagram may be as shown in FIG. 7 .
  • the elevator space state judging device includes a processor, a memory, a network interface and a database connected through a system bus. Wherein, the processor of the elevator space state judging device is used to provide calculation and control capabilities.
  • the memory of the elevator space state judging device includes volatile and non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions and a database.
  • the computer readable instructions implement a method for judging an elevator space state when executed by a processor.
  • a device for judging an elevator space state, comprising a memory and a processor, wherein the memory stores computer-readable instructions that are executable on the processor, and the processor implements the following steps when executing the computer-readable instructions:
  • if the robot is aligned with the elevator entrance, acquire the image information of the area in front of the robot and the projection information of the robot itself;
  • determine the spatial state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  • the processor implements the following steps when executing the computer-readable instructions:
  • the processor implements the following steps when executing the computer-readable instructions:
  • the position with the farthest Euclidean distance from the center of the initial region is determined as the target position.
  • the processor implements the following steps when executing the computer-readable instructions:
  • the processor implements the following steps when executing the computer-readable instructions:
  • when the shape matching error degree is less than a preset error threshold, it is determined that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
  • when the shape matching error degree is greater than or equal to the preset error threshold, it is determined that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
  • the two-dimensional obstacle map is a top view formed by two-dimensional projection of point cloud data of obstacles in front of the robot.
  • the processor further implements the following steps when executing the computer-readable instructions:
  • when they do not match, the robot is rotationally fine-tuned within a limited range of rotation-direction variation, so that the robot is aligned with the elevator entrance.
  • one or more computer-readable storage media having computer-readable instructions stored thereon, which, when executed by one or more processors, implement the following steps:
  • if the robot is aligned with the elevator entrance, acquire the image information of the area in front of the robot and the projection information of the robot itself;
  • determine the spatial state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  • the image information of the elevator entrance area includes a two-dimensional obstacle map corresponding to an obstacle in front of the robot
  • the projection information of the robot itself includes a two-dimensional projection of the robot
  • when the computer-readable instructions are executed by the one or more processors, the one or more processors are caused to perform the following steps:
  • the space state of the elevator is determined according to the final connected domain.
  • the computer-readable instructions when executed by one or more processors, cause the one or more processors to perform the following steps:
  • the computer-readable instructions when executed by one or more processors, cause the one or more processors to perform the following steps:
  • the position with the farthest Euclidean distance from the center of the initial region is determined as the target position.
  • the computer-readable instructions when executed by one or more processors, cause the one or more processors to perform the following steps:
  • the computer-readable instructions when executed by one or more processors, cause the one or more processors to perform the following steps:
  • when the shape matching error degree is less than a preset error threshold, it is determined that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
  • when the shape matching error degree is greater than or equal to the preset error threshold, it is determined that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
  • the two-dimensional obstacle map is a top view formed by two-dimensional projection of point cloud data of obstacles in front of the robot.
  • the computer-readable instructions when executed by one or more processors, cause the one or more processors to further perform the following steps:
  • when they do not match, the robot is rotationally fine-tuned within a limited range of rotation-direction variation, so that the robot is aligned with the elevator entrance.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method and apparatus for judging an elevator space state, and a computer-readable storage medium. The method includes: judging whether a robot is aligned with an elevator entrance (S10); if the robot is aligned with the elevator entrance, acquiring image information of the area in front of the robot and projection information of the robot itself (S20); and judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself (S30).

Description

Method, apparatus, device, and storage medium for judging elevator space state
This application claims priority to the Chinese patent application with application number 202011494737.5, entitled "Method, apparatus, device, and storage medium for judging elevator space state", filed with the Chinese Patent Office on December 17, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of robotics, and in particular to a method, apparatus, and device for judging an elevator space state, and a computer-readable storage medium.
Background
When an intelligent mobile robot rides an elevator autonomously, it needs to sense whether the area in front of it is the elevator entrance, whether the elevator car door is open, and whether the interior space of the opened elevator is sufficient to accommodate the robot. The inventor realized that in traditional solutions the robot treats the perception of the elevator space state as the perception of an ordinary scene, that is, as ordinary obstacle recognition. However, an elevator has a certain speciality: once the elevator space state is abnormal, rashly triggering the operation of the robot entering the elevator involves danger, and the safety is low.
Summary
The present application provides a method and apparatus for judging an elevator space state, and a storage medium for a computer device.
In a first aspect, a method for judging an elevator space state is provided, including:
judging whether a robot is aligned with an elevator entrance;
if the robot is aligned with the elevator entrance, acquiring image information of an area in front of the robot and projection information of the robot itself;
judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
In a second aspect, an apparatus for judging an elevator space state is provided, including:
a first judgment module, configured to judge whether a robot is aligned with an elevator entrance;
an acquisition module, configured to acquire image information of an area in front of the robot and projection information of the robot itself if the robot is aligned with the elevator entrance;
a second judgment module, configured to judge the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
In a third aspect, a robot is provided, including a memory and a processor, the memory storing computer-readable instructions executable on the processor, and the processor implementing the steps of the above method for judging an elevator space state when executing the computer-readable instructions.
One or more computer-readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the steps of the above method for judging an elevator space state are implemented.
Details of one or more embodiments of the present application are set forth in the drawings and the description below. Other features and advantages of the present application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings may be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a method for judging an elevator space state in an embodiment of the present application;
FIG. 2 is a schematic flowchart of step S30 in FIG. 1;
FIG. 3 is a schematic flowchart of step S35 in FIG. 2;
FIG. 4 is a schematic flowchart of step S10 in FIG. 1;
FIG. 5 is a schematic diagram of the "几"-shaped template in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for judging an elevator space state in an embodiment of the present application;
FIG. 7 is another schematic structural diagram of the apparatus for judging an elevator space state in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The embodiments of the present application provide a method for judging an elevator space state and a corresponding apparatus for judging an elevator space state, which are applied to various intelligent robots with a moving function. To improve the safety of such an intelligent robot when entering an elevator, the embodiments of the present application provide a corresponding method for judging an elevator space state, which is described in detail below.
Please refer to FIG. 1, which is a schematic flowchart of an embodiment of a method for judging an elevator space state in the present application, including the following steps:
S10: judge whether the robot is aligned with the elevator entrance.
Nowadays, intelligent robots are used in various application scenarios such as restaurants, offices, and exhibition halls to complete certain specific tasks; in these application scenarios, the robot usually needs to take an elevator. In order to ensure accuracy and improve safety when the robot takes the elevator, the robot needs to be aligned with the elevator entrance; after it is judged that the elevator can be entered, the robot is controlled to move into the elevator according to the aligned elevator entrance to complete the ride. Therefore, when the robot needs to take an elevator, it is first necessary to judge whether the robot is aligned with the elevator entrance.
S20: if the robot is aligned with the elevator entrance, acquire image information of the area in front of the robot and projection information of the robot itself.
S30: judge the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
For steps S20-S30, after it is judged that the robot is aligned with the elevator entrance, a further safety judgment is required in order to improve the safety of the robot entering the elevator, so the image information of the area in front of the robot and the projection information of the robot itself are further acquired. The image information of the area in front of the robot can reflect the obstacles in that area, and since the robot is already aligned with the elevator entrance, the current image information of the area in front of the robot can reflect the situation of the elevator entrance area. The projection information of the robot itself can reflect the size of the robot. Therefore, according to the image information of the area in front of the robot and the projection information of the robot itself, it can be further determined whether the robot will encounter obstacles while entering the elevator, and the space state of the elevator can be judged, so that whether the elevator can be entered is judged accurately, effectively improving the safety of the robot entering the elevator.
It should be noted that, in the embodiments of the present application, there may be multiple implementations of judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself. In one implementation, the image information of the elevator entrance area includes a two-dimensional obstacle map corresponding to the obstacles in front of the robot, and the projection information of the robot itself includes a two-dimensional projection of the robot; specifically, the space state of the elevator may be judged according to the two-dimensional obstacle map corresponding to the obstacles in front of the robot and the two-dimensional projection of the robot, as described below.
In one embodiment, as shown in FIG. 2, the image information of the elevator entrance area includes a two-dimensional obstacle map corresponding to the obstacles in front of the robot, and the projection information of the robot itself includes a two-dimensional projection of the robot. Step S30, that is, judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself, specifically includes the following steps:
S31: binarize the two-dimensional obstacle map to obtain a target binary map.
After it is judged that the robot is aligned with the elevator entrance, the acquired two-dimensional obstacle map is binarized to obtain the target binary map. As an example, the two-dimensional obstacle map is a top view formed by two-dimensionally projecting the point cloud data of the obstacles in front of the robot. This top view reflects the distribution of obstacles in front of the robot; since the robot is aligned with the elevator entrance at this time, the top view reflects the elevator situation in front of the robot. It should be noted that, in some implementations, the two-dimensional obstacle map may also be a top view formed by two-dimensionally projecting other three-dimensional images of the obstacles in front of the robot.
In this embodiment, a point cloud data collector may be arranged at a preset position of the robot, and the point cloud data collector is used to collect the point cloud data of the object directly in front of the robot. As an example, an RGBD camera and a lidar are used as the point cloud data collector to collect the point cloud data corresponding to the object in front of the robot, which is not specifically limited. Using the point cloud data collector arranged on the robot, the point cloud data of the obstacles in front of the robot can be obtained in real time. It should be noted that the point cloud data of an obstacle is a set of vectors of the obstacle in a three-dimensional coordinate system; it reflects the spatial coordinate position of the obstacle in front of the robot, that is, it can reflect the spatial information of the obstacles inside the elevator.
After the point cloud data collector collects the point cloud data of the object directly in front of the robot, the point cloud data can be projected onto a plane, that is, two-dimensionally projected to form a top view, thereby obtaining the two-dimensional obstacle map, which reflects the two-dimensional relationship of the obstacles in front of the robot.
After the two-dimensional obstacle map is obtained, it is binarized to obtain the target binary map. It can be understood that binarizing the two-dimensional obstacle map means converting the gray values of the points on the two-dimensional obstacle map to 0 or 255, thereby obtaining a binarized image that reflects the overall and local characteristics of the image. It should be noted that, in the embodiments of the present application, the target binary map corresponding to the two-dimensional obstacle map may be obtained in various ways, which are not specifically limited or described one by one. Specifically, in the embodiment of the present application, pixel positions with obstacle information in the two-dimensional obstacle map are marked as "0", and positions without obstacle information are marked as "1", thereby obtaining the above target binary map.
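For illustration only, a minimal sketch of how the top-view projection and the 0/1 marking described above could be realized is given below; the grid resolution, coordinate ranges, and height band are assumed values, not taken from the application:

```python
import numpy as np

def point_cloud_to_binary_map(points, resolution=0.05, x_range=(0.0, 4.0),
                              y_range=(-2.0, 2.0), z_range=(0.05, 1.8)):
    """Project 3D points (N x 3, robot frame, x forward) onto a top-view grid:
    0 = cell contains an obstacle, 1 = free, following the marking convention above."""
    # keep only points in the height band that can block the robot
    mask = (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1])
    pts = points[mask]
    h = int((x_range[1] - x_range[0]) / resolution)
    w = int((y_range[1] - y_range[0]) / resolution)
    binary_map = np.ones((h, w), dtype=np.uint8)            # start as free space
    ix = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / resolution).astype(int)
    valid = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    binary_map[ix[valid], iy[valid]] = 0                     # mark obstacle cells
    return binary_map
```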
S32: perform connected domain labeling on the target binary map to obtain each connected domain of the target binary map.
After the target binary map is obtained, in order to further analyze the obstacles in front of the robot, including the state in front of the elevator entrance and inside the elevator, connected domain labeling is first performed on the target binary map to obtain each connected domain of the target binary map. It can be understood that a connected domain (connected component) refers to an image region (region, blob) in the target binary map composed of foreground pixels that have the same pixel value and are adjacent in position. Connected domain labeling refers to finding and labeling each connected domain in the target binary map, so as to divide the different regions of the target binary map; obstacle regions and non-obstacle regions can then be distinguished according to the different connected domains.
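A brief sketch of connected domain labeling on the free-space cells, using OpenCV's connected component labeling (an assumed tooling choice; the application does not prescribe a particular implementation):

```python
import cv2
import numpy as np

def label_free_space(binary_map):
    """Label the connected domains of free space (cells marked 1) in the target binary map."""
    free = (binary_map == 1).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(free, connectivity=8)
    # label 0 is the background (obstacle cells); labels 1..num_labels-1 are free-space domains
    return num_labels, labels
```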
S33: compare the radius of the maximum inscribed circle of each connected domain with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot, to obtain the target connected domain in which the maximum inscribed circle whose radius is greater than a preset threshold is located.
The circumscribed circle refers to a circle that passes through all vertices of a polygon; the inscribed circle refers to a circle that is tangent to the sides of a polygon. It can be understood that, depending on the specific obstacles in front of the robot, each connected domain of the target binary map is usually an irregular polygon. In this embodiment of the present application, the inscribed circle of each connected domain in the target binary map needs to be determined first, so as to determine whether the position corresponding to each connected domain has enough room for the robot to stand or pass. Specifically, according to the radii of the inscribed circles of each connected domain, the maximum inscribed circle of each connected domain is determined, and the minimum circumscribed circle of the two-dimensional projection of the robot and its radius are determined. The minimum circumscribed circle of the two-dimensional projection of the robot reflects the area that the robot itself needs to occupy; therefore, in the embodiment of the present application, by comparing the radius of the maximum inscribed circle of each connected domain with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot, the space in the elevator where the robot can stand or move can be determined.
It should be noted that, in the embodiments of the present application, there may be multiple ways of determining the maximum inscribed circle of each connected domain and of determining the minimum circumscribed circle of the two-dimensional projection of the robot, which are not specifically limited or described one by one.
The radius of the maximum inscribed circle of each connected domain is then compared with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot, and the target connected domain in which the maximum inscribed circle whose radius is greater than the preset threshold is located is obtained.
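One common way to carry out this comparison is sketched below, under the assumption (not stated in the application) that the maximum inscribed circle radius is read off a distance transform and that the robot footprint is given in grid-cell coordinates:

```python
import cv2
import numpy as np

def find_target_domains(labels, num_labels, robot_footprint_cells, preset_margin=0.0):
    """Return labels of connected domains whose maximum inscribed circle is larger
    than the minimum circumscribed circle of the robot's 2D projection."""
    # radius of the minimum circumscribed circle of the robot footprint (M x 2 points, grid cells)
    _, robot_radius = cv2.minEnclosingCircle(robot_footprint_cells.astype(np.float32))
    target_labels = []
    for lbl in range(1, num_labels):
        domain = (labels == lbl).astype(np.uint8)
        # distance of every domain pixel to the nearest non-domain pixel; its maximum
        # equals the radius of the domain's maximum inscribed circle
        dist = cv2.distanceTransform(domain, cv2.DIST_L2, 5)
        if dist.max() > robot_radius + preset_margin:
            target_labels.append(lbl)
    return target_labels, robot_radius
```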
S34: use the region where the two-dimensional projection of the robot is located as the initial region, and perform region growing with the target connected domain to obtain the final connected domain.
S35: judge the space state of the elevator according to the final connected domain.
In this embodiment, the region where the two-dimensional projection of the robot body is located is used as the initial region, and region growing is performed with the target connected domain to obtain the final connected domain. It should be noted that region growing refers to the process of developing groups of pixels or regions into larger regions; region growing can segment connected regions that have the same characteristics. The final connected domain obtained in this way reflects the obstacles and non-obstacles in front of the robot, inside and outside the elevator; through the final connected domain, it can be seen whether there are obstacles from the current position to the position of the target connected domain. Therefore, the space state of the elevator can be judged according to the final connected domain and the two-dimensional projection of the robot, that is, whether the elevator is in an enterable state can be determined.
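A hedged sketch of the region-growing step, realized here as a breadth-first growth over free cells seeded at the robot's initial region (one possible implementation; the application does not fix the growth criterion):

```python
from collections import deque
import numpy as np

def grow_final_domain(binary_map, seed):
    """Grow the final connected domain over free cells (value 1), starting from a
    seed cell inside the initial region where the robot's 2D projection lies."""
    h, w = binary_map.shape
    final = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    final[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not final[nr, nc] and binary_map[nr, nc] == 1:
                final[nr, nc] = True
                queue.append((nr, nc))
    return final  # boolean mask of the final connected domain
```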
It can be seen that the embodiments of the present application provide a method for judging the space state of an elevator. After it is judged that the robot is aligned with the elevator entrance, the two-dimensional obstacle map is first binarized to obtain the target binary map, where the two-dimensional obstacle map is a top view formed by two-dimensionally projecting the point cloud data of the obstacles in front of the robot; connected domain labeling is performed on the target binary map to obtain each connected domain of the target binary map; the radius of the maximum inscribed circle of each connected domain is compared with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot to obtain the target connected domain in which the maximum inscribed circle whose radius is greater than the preset threshold is located, thereby determining the range of positions in the elevator where the robot can stand; subsequently, the region where the two-dimensional projection of the robot is located is used as the initial region and region growing is performed with the target connected domain to obtain the final connected domain, and the space state of the elevator is judged according to the final connected domain and the two-dimensional projection of the robot. In this way it can be further determined whether there are obstacles in the process of entering the elevator, whether the elevator can be entered is judged accurately, and the safety of the robot entering the elevator is effectively improved.
It should be noted that there may be other implementations of judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself, which are not limited in this application and are not described in detail one by one. For example, instead of binarizing the two-dimensional obstacle map, the judgment may be made directly from the two-dimensional obstacle map and the projection information of the robot itself.
In one embodiment, as shown in FIG. 3, step S35, that is, judging the space state of the elevator according to the final connected domain, specifically includes the following steps:
S351: within the final connected domain, determine a target position whose distance from the center of the initial region satisfies a preset distance.
As described above, after the final connected domain is obtained, the situation of the robot's current position and of the elevator obstacles in front of it is known. To further determine the space state of the elevator, the embodiment of the present application provides a specific way of judging the space state of the elevator according to the final connected domain and the two-dimensional projection of the robot: first, within the final connected domain, a target position whose distance from the center of the initial region satisfies a preset distance is determined. As an example, this means determining, within the final connected domain, the position farthest in Euclidean distance from the center of the initial region as the target position. The target position is the position that is farthest in Euclidean distance from the center of the initial region and lies in the final connected domain, which indicates that the actual position corresponding to the target position is a position the robot can go to. It should be noted that the target position may also be another preset position in the final connected domain, which is not specifically limited. It should also be noted that the target position is determined based on the final connected domain; it is therefore a position in the image, and it corresponds to an actual position.
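A small illustrative sketch of selecting the target position as the cell of the final connected domain farthest in Euclidean distance from the center of the initial region (the variant described above):

```python
import numpy as np

def farthest_target_position(final_domain, initial_center):
    """Return (row, col) of the final-connected-domain cell farthest (Euclidean)
    from the center of the initial region."""
    rows, cols = np.nonzero(final_domain)
    d2 = (rows - initial_center[0]) ** 2 + (cols - initial_center[1]) ** 2
    idx = int(np.argmax(d2))
    return int(rows[idx]), int(cols[idx])
```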
S352: judge whether the target position is inside the elevator.
S353: when the actual position corresponding to the target position is inside the elevator, judge that the space state of the elevator is an enterable state.
S354: when the actual position corresponding to the target position is not inside the elevator, judge that the space state of the elevator is a non-enterable state.
After the target position is determined, it is judged whether the actual position corresponding to the target position is inside the elevator. If the actual position corresponding to the target position is inside the elevator, it means that the robot can move into the elevator without encountering obstacles; if the actual position corresponding to the target position is not inside the elevator, it means that the robot cannot move into the elevator because there is no space in the elevator or there are obstacles in front of it. It should be noted that, after the target position is determined, whether the actual position corresponding to the target position is inside the elevator can be judged according to the distance between the target position and the center of the initial region, and the distance between the center of the initial region and the elevator doorway, which is not described in detail here.
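For illustration, the distance-based check mentioned above might look as follows, assuming the distance from the initial-region center to the elevator doorway is known in grid cells (a hypothetical helper, not part of the original disclosure):

```python
def elevator_enterable(target_pos, initial_center, door_distance_cells):
    """Judge the elevator enterable when the target position lies farther from the
    initial-region center than the doorway does, i.e., beyond the elevator door."""
    dr = target_pos[0] - initial_center[0]
    dc = target_pos[1] - initial_center[1]
    return (dr * dr + dc * dc) ** 0.5 > door_distance_cells
```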
In an optional embodiment, judging whether the actual position corresponding to the target position is inside the elevator specifically means judging whether the actual position corresponding to the target position lies entirely inside the elevator: if it lies entirely inside the elevator, the actual position corresponding to the target position is inside the elevator; if it lies only partially inside the elevator, or not inside it at all, the actual position corresponding to the target position is not inside the elevator.
It can be seen that the embodiment of the present application provides a specific way of judging the space state of the elevator according to the final connected domain and the two-dimensional projection of the robot, which improves the implementability of the solution.
It should be noted that, as mentioned in step S10, the embodiment of the present application first needs to judge whether the robot is aligned with the elevator entrance, and multiple ways of doing so are provided. As shown in FIG. 4, in one embodiment, step S10, that is, judging whether the robot is aligned with the elevator entrance, specifically includes the following steps:
S11: obtain a pre-built two-dimensional "几"-shaped template map.
It can be understood that, as shown in FIG. 5, the elevator entrance is generally formed by a "几"-shaped groove. Based on this feature, the present application first builds a two-dimensional "几"-shaped template map; the top view of the elevator entrance can be as shown in FIG. 4.
S12: perform shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map.
S13: when they match, judge that the robot is aligned with the elevator entrance.
S14: when they do not match, judge that the robot is not aligned with the elevator entrance.
After the pre-built two-dimensional "几"-shaped template map is obtained, shape matching is performed between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map. It can be understood that if the robot is currently aligned with the elevator entrance, the two-dimensional obstacle map obtained at this time is converted from the image data of the elevator entrance; therefore, the shape of the two-dimensional obstacle map should match that of the two-dimensional "几"-shaped template map, and when the shapes of the two do not match, it can be concluded that the robot is not aligned with the elevator entrance. In one embodiment, the shape-matching error between the shape of the elevator frame region in the two-dimensional obstacle map and the shape of the two-dimensional "几"-shaped template map is judged; when the shape-matching error is less than a preset error threshold, the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map are judged to match; when the shape-matching error is greater than or equal to the preset error threshold, the shapes are judged not to match. For example, if the shape of the elevator frame region in the two-dimensional obstacle map is a "几" shape, the shape-matching error between the elevator frame region and the two-dimensional "几"-shaped template map is low, and the two are judged to match; if the shape of the elevator frame region is approximately "几"-shaped, the shape-matching error is also low, and the two are likewise judged to match; if the shape of the elevator frame region is a triangle, the shape-matching error between the elevator frame region and the two-dimensional "几"-shaped template map is high, and the two are judged not to match. It should be noted that, in the embodiments of the present application, there may be multiple ways of determining the shape-matching error, which are not specifically limited or described one by one. After the two-dimensional obstacle map is obtained, the elevator frame region is identified by an image recognition algorithm, and shape recognition is then performed to determine the shape of the elevator frame region for the subsequent shape matching, which is not described in detail here.
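For illustration only, such a shape-matching error could be computed with Hu-moment based contour matching (an assumed choice; the application does not specify how the error degree is obtained):

```python
import cv2
import numpy as np

def shape_match_error(frame_region_mask, template_mask):
    """Shape-matching error between the elevator frame region and the
    "几"-shaped template, via Hu-moment contour comparison (lower = more similar)."""
    def largest_contour(mask):
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)
    c1 = largest_contour(frame_region_mask)
    c2 = largest_contour(template_mask)
    return cv2.matchShapes(c1, c2, cv2.CONTOURS_MATCH_I1, 0.0)

# e.g. aligned = shape_match_error(frame_mask, template_mask) < preset_error_threshold
```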
It should be noted that, in one embodiment, the forward orientation obtained from the robot's positioning is taken as the initial direction of the rotation adjustment; when the shapes do not match, the robot is rotationally fine-tuned within a limited range of rotation-direction variation so that the robot is aligned with the elevator entrance. During the fine-tuning, the adjustment can be made according to the above shape-matching result.
It should be understood that the magnitude of the serial numbers of the steps in the above embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In one embodiment, an apparatus for judging an elevator space state is provided, and the apparatus corresponds one-to-one to the method for judging an elevator space state in the above embodiments. As shown in FIG. 6, the apparatus for judging an elevator space state includes a first judgment module 100, an acquisition module 101, and a second judgment module 102. The functional modules are described in detail as follows:
the first judgment module is configured to judge whether the robot is aligned with the elevator entrance;
the acquisition module is configured to acquire the image information of the area in front of the robot and the projection information of the robot itself if the robot is aligned with the elevator entrance;
the second judgment module is configured to judge the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
Further, the image information of the elevator entrance area includes a two-dimensional obstacle map corresponding to the obstacles in front of the robot, the projection information of the robot itself includes a two-dimensional projection of the robot, and the second judgment module is specifically configured to:
binarize the two-dimensional obstacle map to obtain a target binary map;
perform connected domain labeling on the target binary map to obtain each connected domain of the target binary map;
compare the radius of the maximum inscribed circle of each connected domain with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot, to obtain the target connected domain in which the maximum inscribed circle whose radius is greater than a preset threshold is located;
use the region where the two-dimensional projection of the robot is located as the initial region, and perform region growing with the target connected domain to obtain the final connected domain;
judge the space state of the elevator according to the final connected domain.
Further, the second judgment module is further configured to:
within the final connected domain, determine a target position whose distance from the center of the initial region satisfies a preset distance;
when the target position is inside the elevator, judge that the space state of the elevator is an enterable state;
when the target position is not inside the elevator, judge that the space state of the elevator is a non-enterable state.
Further, the second judgment module is further configured to:
within the final connected domain, determine the position farthest in Euclidean distance from the center of the initial region as the target position.
Further, the first judgment module is specifically configured to:
obtain a pre-built two-dimensional "几"-shaped template map;
perform shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map;
when they match, judge that the robot is aligned with the elevator entrance;
when they do not match, judge that the robot is not aligned with the elevator entrance.
Further, the first judgment module 100 is specifically configured to:
judge the shape-matching error between the shape of the elevator frame region in the two-dimensional obstacle map and the shape of the two-dimensional "几"-shaped template map;
when the shape-matching error is less than a preset error threshold, judge that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
when the shape-matching error is greater than or equal to the preset error threshold, judge that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
In one embodiment, the second judgment module is further specifically configured to:
when they do not match, rotationally fine-tune the robot within a limited range of rotation-direction variation so that the robot is aligned with the elevator entrance.
It can be seen that the embodiments of the present application provide an apparatus for judging the space state of an elevator. After it is judged that the robot is aligned with the elevator entrance, the two-dimensional obstacle map is first binarized to obtain a target binary map, where the two-dimensional obstacle map is a top view formed by two-dimensionally projecting the point cloud data of the obstacles in front of the robot; connected domain labeling is performed on the target binary map to obtain each connected domain of the target binary map; the radius of the maximum inscribed circle of each connected domain is compared with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot to obtain the target connected domain in which the maximum inscribed circle whose radius is greater than the preset threshold is located, thereby determining the range of positions in the elevator where the robot can stand; subsequently, the region where the two-dimensional projection of the robot is located is used as the initial region and region growing is performed with the target connected domain to obtain the final connected domain, and the space state of the elevator is judged according to the final connected domain and the two-dimensional projection of the robot, so that it can be further determined whether there are obstacles in the process of entering the elevator, whether the elevator can be entered is judged accurately, and the safety of the robot entering the elevator is effectively improved.
For the specific limitations of the apparatus for judging an elevator space state, reference may be made to the above limitations of the method for judging an elevator space state, which are not repeated here. Each module in the above apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, an apparatus for judging an elevator space state is provided. The apparatus may be a server, a controller integrated inside a robot, or the robot itself, and its internal structure may be as shown in FIG. 7. The apparatus includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the apparatus is used to provide computing and control capabilities. The memory of the apparatus includes volatile and non-volatile storage media and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. When executed by the processor, the computer-readable instructions implement a method for judging an elevator space state.
In one embodiment, an apparatus for judging an elevator space state is provided, including a memory and a processor, the memory storing computer-readable instructions that are executable on the processor; the processor implements the following steps when executing the computer-readable instructions:
judging whether the robot is aligned with the elevator entrance;
if the robot is aligned with the elevator entrance, acquiring the image information of the area in front of the robot and the projection information of the robot itself;
judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
In one embodiment, the processor implements the following steps when executing the computer-readable instructions:
within the final connected domain, determining a target position whose distance from the center of the initial region satisfies a preset distance;
when the actual position corresponding to the target position is inside the elevator, judging that the space state of the elevator is an enterable state;
when the actual position corresponding to the target position is not inside the elevator, judging that the space state of the elevator is a non-enterable state.
In one embodiment, the processor implements the following steps when executing the computer-readable instructions:
within the final connected domain, determining the position farthest in Euclidean distance from the center of the initial region as the target position.
In one embodiment, the processor implements the following steps when executing the computer-readable instructions:
obtaining a pre-built two-dimensional "几"-shaped template map;
performing shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map;
when they match, judging that the robot is aligned with the elevator entrance;
when they do not match, judging that the robot is not aligned with the elevator entrance.
In one embodiment, the processor implements the following steps when executing the computer-readable instructions:
judging the shape-matching error between the shape of the elevator frame region in the two-dimensional obstacle map and the shape of the two-dimensional "几"-shaped template map;
when the shape-matching error is less than a preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
when the shape-matching error is greater than or equal to the preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
In one embodiment, the two-dimensional obstacle map is a top view formed by two-dimensionally projecting the point cloud data of the obstacles in front of the robot.
In one embodiment, the processor further implements the following step when executing the computer-readable instructions:
when they do not match, rotationally fine-tuning the robot within a limited range of rotation-direction variation so that the robot is aligned with the elevator entrance.
In one embodiment, one or more computer-readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the following steps are implemented:
judging whether the robot is aligned with the elevator entrance;
if the robot is aligned with the elevator entrance, acquiring the image information of the area in front of the robot and the projection information of the robot itself;
judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
In one embodiment, the image information of the elevator entrance area includes a two-dimensional obstacle map corresponding to the obstacles in front of the robot, the projection information of the robot itself includes a two-dimensional projection of the robot, and when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
binarizing the two-dimensional obstacle map to obtain a target binary map;
performing connected domain labeling on the target binary map to obtain each connected domain of the target binary map;
comparing the radius of the maximum inscribed circle of each connected domain with the radius of the minimum circumscribed circle of the two-dimensional projection of the robot, to obtain the target connected domain in which the maximum inscribed circle whose radius is greater than a preset threshold is located;
using the region where the two-dimensional projection of the robot is located as the initial region, and performing region growing with the target connected domain to obtain the final connected domain;
judging the space state of the elevator according to the final connected domain.
In one embodiment, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
within the final connected domain, determining a target position whose distance from the center of the initial region satisfies a preset distance;
when the actual position corresponding to the target position is inside the elevator, judging that the space state of the elevator is an enterable state;
when the actual position corresponding to the target position is not inside the elevator, judging that the space state of the elevator is a non-enterable state.
In one embodiment, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following step:
within the final connected domain, determining the position farthest in Euclidean distance from the center of the initial region as the target position.
In one embodiment, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
obtaining a pre-built two-dimensional "几"-shaped template map;
performing shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map;
when they match, judging that the robot is aligned with the elevator entrance;
when they do not match, judging that the robot is not aligned with the elevator entrance.
In one embodiment, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
judging the shape-matching error between the shape of the elevator frame region in the two-dimensional obstacle map and the shape of the two-dimensional "几"-shaped template map;
when the shape-matching error is less than a preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
when the shape-matching error is greater than or equal to the preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
In one embodiment, the two-dimensional obstacle map is a top view formed by two-dimensionally projecting the point cloud data of the obstacles in front of the robot.
In one embodiment, when the computer-readable instructions are executed by one or more processors, the one or more processors are further caused to perform the following step:
when they do not match, rotationally fine-tuning the robot within a limited range of rotation-direction variation so that the robot is aligned with the elevator entrance.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by computer-readable instructions instructing the relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is used only as an example; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of the present application and are not intended to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of the technical features therein can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (18)

  1. A method for judging an elevator space state, comprising:
    judging whether a robot is aligned with an elevator entrance;
    if the robot is aligned with the elevator entrance, acquiring image information of an area in front of the robot and projection information of the robot itself;
    judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  2. The method for judging an elevator space state according to claim 1, wherein the image information of the elevator entrance area comprises a two-dimensional obstacle map corresponding to obstacles in front of the robot, the projection information of the robot itself comprises a two-dimensional projection of the robot, and the judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself comprises:
    binarizing the two-dimensional obstacle map to obtain a target binary map;
    performing connected domain labeling on the target binary map to obtain each connected domain of the target binary map;
    comparing a radius of a maximum inscribed circle of each connected domain with a radius of a minimum circumscribed circle of the two-dimensional projection of the robot, to obtain a target connected domain in which a maximum inscribed circle whose radius is greater than a preset threshold is located;
    using a region where the two-dimensional projection of the robot is located as an initial region, and performing region growing with the target connected domain to obtain a final connected domain;
    judging the space state of the elevator according to the final connected domain.
  3. The method for judging an elevator space state according to claim 2, wherein the judging the space state of the elevator according to the final connected domain comprises:
    within the final connected domain, determining a target position whose distance from a center of the initial region satisfies a preset distance;
    when an actual position corresponding to the target position is inside the elevator, judging that the space state of the elevator is an enterable state;
    when the actual position corresponding to the target position is not inside the elevator, judging that the space state of the elevator is a non-enterable state.
  4. The method for judging an elevator space state according to claim 3, wherein the determining, within the final connected domain, a target position whose distance from the center of the initial region satisfies a preset distance comprises:
    within the final connected domain, determining a position farthest in Euclidean distance from the center of the initial region as the target position.
  5. The method for judging an elevator space state according to any one of claims 1-4, wherein the judging whether the robot is aligned with the elevator entrance comprises:
    obtaining a pre-built two-dimensional "几"-shaped template map;
    performing shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map;
    when they match, judging that the robot is aligned with the elevator entrance;
    when they do not match, judging that the robot is not aligned with the elevator entrance.
  6. The method for judging an elevator space state according to claim 5, wherein the performing shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map comprises:
    judging a shape-matching error between a shape of an elevator frame region in the two-dimensional obstacle map and a shape of the two-dimensional "几"-shaped template map;
    when the shape-matching error is less than a preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
    when the shape-matching error is greater than or equal to the preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
  7. The method for judging an elevator space state according to any one of claims 1-4, wherein the two-dimensional obstacle map is a top view formed by two-dimensionally projecting point cloud data of the obstacles in front of the robot.
  8. The method for judging an elevator space state according to claim 5, wherein when they do not match, the method further comprises:
    rotationally fine-tuning the robot within a limited range of rotation-direction variation so that the robot is aligned with the elevator entrance.
  9. An apparatus for judging an elevator space state, comprising:
    a first judgment module, configured to judge whether a robot is aligned with an elevator entrance;
    an acquisition module, configured to acquire image information of an area in front of the robot and projection information of the robot itself if the robot is aligned with the elevator entrance;
    a second judgment module, configured to judge the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  10. A robot, comprising a memory and a processor, wherein the memory stores computer-readable instructions executable on the processor, and the processor is configured to implement the following steps when executing the computer-readable instructions:
    judging whether the robot is aligned with an elevator entrance;
    if the robot is aligned with the elevator entrance, acquiring image information of an area in front of the robot and projection information of the robot itself;
    judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
  11. The robot according to claim 10, wherein the image information of the elevator entrance area comprises a two-dimensional obstacle map corresponding to obstacles in front of the robot, the projection information of the robot itself comprises a two-dimensional projection of the robot, and the processor is configured to implement the following steps when executing the computer-readable instructions:
    the judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself comprises:
    binarizing the two-dimensional obstacle map to obtain a target binary map;
    performing connected domain labeling on the target binary map to obtain each connected domain of the target binary map;
    comparing a radius of a maximum inscribed circle of each connected domain with a radius of a minimum circumscribed circle of the two-dimensional projection of the robot, to obtain a target connected domain in which a maximum inscribed circle whose radius is greater than a preset threshold is located;
    using a region where the two-dimensional projection of the robot is located as an initial region, and performing region growing with the target connected domain to obtain a final connected domain;
    judging the space state of the elevator according to the final connected domain.
  12. The robot according to claim 11, wherein the processor is configured to implement the following steps when executing the computer-readable instructions:
    the judging the space state of the elevator according to the final connected domain comprises:
    within the final connected domain, determining a target position whose distance from a center of the initial region satisfies a preset distance;
    when an actual position corresponding to the target position is inside the elevator, judging that the space state of the elevator is an enterable state;
    when the actual position corresponding to the target position is not inside the elevator, judging that the space state of the elevator is a non-enterable state.
  13. The robot according to claim 12, wherein the processor is configured to implement the following steps when executing the computer-readable instructions:
    the determining, within the final connected domain, a target position whose distance from the center of the initial region satisfies a preset distance comprises:
    within the final connected domain, determining a position farthest in Euclidean distance from the center of the initial region as the target position.
  14. The robot according to any one of claims 10-13, wherein the processor is configured to implement the following steps when executing the computer-readable instructions:
    the judging whether the robot is aligned with the elevator entrance comprises:
    obtaining a pre-built two-dimensional "几"-shaped template map;
    performing shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map;
    when they match, judging that the robot is aligned with the elevator entrance;
    when they do not match, judging that the robot is not aligned with the elevator entrance.
  15. The robot according to claim 14, wherein the processor is configured to implement the following steps when executing the computer-readable instructions:
    the performing shape matching between the two-dimensional obstacle map and the two-dimensional "几"-shaped template map comprises:
    judging a shape-matching error between a shape of an elevator frame region in the two-dimensional obstacle map and a shape of the two-dimensional "几"-shaped template map;
    when the shape-matching error is less than a preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map match;
    when the shape-matching error is greater than or equal to the preset error threshold, judging that the shapes of the two-dimensional obstacle map and the two-dimensional "几"-shaped template map do not match.
  16. The robot according to any one of claims 10-13, wherein the two-dimensional obstacle map is a top view formed by two-dimensionally projecting point cloud data of the obstacles in front of the robot.
  17. The apparatus for judging an elevator space state according to claim 14, wherein the processor further implements the following step when executing the computer-readable instructions:
    when they do not match, rotationally fine-tuning the robot within a limited range of rotation-direction variation so that the robot is aligned with the elevator entrance.
  18. One or more computer-readable storage media storing computer-readable instructions, wherein when the computer-readable instructions are executed by one or more processors, the following steps are implemented:
    judging whether a robot is aligned with an elevator entrance;
    if the robot is aligned with the elevator entrance, acquiring image information of an area in front of the robot and projection information of the robot itself;
    judging the space state of the elevator according to the image information of the area in front of the robot and the projection information of the robot itself.
PCT/CN2021/129662 2020-12-17 2021-11-10 Method, apparatus, device, and storage medium for judging elevator space state WO2022127450A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011494737.5 2020-12-17
CN202011494737.5A CN114648689A (zh) 2020-12-17 2020-12-17 Method, apparatus, device, and storage medium for judging elevator space state

Publications (1)

Publication Number Publication Date
WO2022127450A1 true WO2022127450A1 (zh) 2022-06-23

Family

ID=81990577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129662 WO2022127450A1 (zh) 2020-12-17 2021-11-10 Method, apparatus, device, and storage medium for judging elevator space state

Country Status (2)

Country Link
CN (1) CN114648689A (zh)
WO (1) WO2022127450A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117361259A (zh) * 2023-12-07 2024-01-09 成都越凡创新科技有限公司 Method for detecting abnormal movement of a robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190345000A1 (en) * 2018-05-08 2019-11-14 Thyssenkrupp Elevator Corporation Robotic destination dispatch system for elevators and methods for making and using same
CN110900603A (zh) * 2019-11-29 2020-03-24 上海有个机器人有限公司 Method, medium, terminal, and apparatus for recognizing an elevator through geometric features
CN111136648A (zh) * 2019-12-27 2020-05-12 深圳市优必选科技股份有限公司 Positioning method and positioning apparatus for a mobile robot, and mobile robot
CN111153300A (zh) * 2019-12-31 2020-05-15 深圳优地科技有限公司 Robot elevator-riding method, ***, robot, and storage medium
CN111847152A (zh) * 2020-06-30 2020-10-30 深圳优地科技有限公司 Method and apparatus for determining robot elevator riding, electronic device, and medium
CN111984008A (zh) * 2020-07-30 2020-11-24 深圳优地科技有限公司 Robot control method and apparatus, terminal, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190345000A1 (en) * 2018-05-08 2019-11-14 Thyssenkrupp Elevator Corporation Robotic destination dispatch system for elevators and methods for making and using same
CN110900603A (zh) * 2019-11-29 2020-03-24 上海有个机器人有限公司 Method, medium, terminal, and apparatus for recognizing an elevator through geometric features
CN111136648A (zh) * 2019-12-27 2020-05-12 深圳市优必选科技股份有限公司 Positioning method and positioning apparatus for a mobile robot, and mobile robot
CN111153300A (zh) * 2019-12-31 2020-05-15 深圳优地科技有限公司 Robot elevator-riding method, ***, robot, and storage medium
CN111847152A (zh) * 2020-06-30 2020-10-30 深圳优地科技有限公司 Method and apparatus for determining robot elevator riding, electronic device, and medium
CN111984008A (zh) * 2020-07-30 2020-11-24 深圳优地科技有限公司 Robot control method and apparatus, terminal, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117361259A (zh) * 2023-12-07 2024-01-09 成都越凡创新科技有限公司 Method for detecting abnormal movement of a robot
CN117361259B (zh) * 2023-12-07 2024-04-26 成都越凡创新科技有限公司 Method for detecting abnormal movement of a robot

Also Published As

Publication number Publication date
CN114648689A (zh) 2022-06-21

Similar Documents

Publication Publication Date Title
CN111160302B (zh) Obstacle information recognition method and apparatus based on an autonomous driving environment
US10997438B2 (en) Obstacle detection method and apparatus
WO2018219054A1 (zh) License plate recognition method, apparatus, and ***
US9690977B2 (en) Object identification using 3-D curve matching
CN111191600A (zh) Obstacle detection method and apparatus, computer device, and storage medium
WO2021134325A1 (zh) Obstacle detection method and apparatus based on driverless technology, and computer device
US10380767B2 (en) System and method for automatic selection of 3D alignment algorithms in a vision system
US20170083760A1 (en) Human detection system for construction machine
US8599257B2 (en) Vehicle detection device, vehicle detection method, and vehicle detection program
CN104636724B (zh) Fast pedestrian and vehicle detection method for a vehicle-mounted camera based on target consistency
Youjin et al. A robust lane detection method based on vanishing point estimation
CN114529837A (zh) Building contour extraction method, ***, computer device, and storage medium
WO2022127450A1 (zh) Method, apparatus, device, and storage medium for judging elevator space state
CN112597846B (zh) Lane line detection method and apparatus, computer device, and storage medium
CN117611590B (zh) Composite defect contour detection method, apparatus, device, and storage medium
WO2020087322A1 (zh) Lane line recognition method and apparatus, and vehicle
US20210042536A1 (en) Image processing device and image processing method
JP2020177648A (ja) Method and apparatus for recognizing false detection of abandoned objects, and image processing device
CN110322508B (zh) Auxiliary positioning method based on computer vision
CN117593727A (zh) Obstacle detection method and apparatus based on three-dimensional point clouds, medium, and device
US20150063637A1 (en) Image recognition method and robot
CN114943954B (zh) Parking space detection method, apparatus, and ***
CN112529953B (zh) Method and apparatus for judging the space state of an elevator, and storage medium
JP7201706B2 (ja) Image processing device
Guo et al. Drivable road region detection based on homography estimation with road appearance and driving state models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905379

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21905379

Country of ref document: EP

Kind code of ref document: A1