WO2020244414A1 - Obstacle detection method, device, storage medium, and mobile robot - Google Patents

Obstacle detection method, device, storage medium, and mobile robot

Info

Publication number
WO2020244414A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
detection
shadow
obstacle
marking
Prior art date
Application number
PCT/CN2020/092276
Other languages
French (fr)
Chinese (zh)
Inventor
李芃桦
何小嵩
Original Assignee
杭州海康机器人技术有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康机器人技术有限公司 filed Critical 杭州海康机器人技术有限公司
Publication of WO2020244414A1 publication Critical patent/WO2020244414A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • This application relates to the field of robotics, and in particular to a method, device, storage medium, and mobile robot for detecting obstacles.
  • Obstacle avoidance generally means that the mobile robot, based on the collected obstacle status information, avoids obstacles effectively according to a certain method and finally reaches the target point.
  • the obstacle status information can be sensed by sensors while the AGV is travelling, and describes the static and/or dynamic objects that hinder its passage.
  • Mobile robots usually use sensors to obtain information about the surrounding environment, including the size, shape, and location of obstacles.
  • the mainstream perception and obstacle avoidance technical solutions of mobile robots are mainly divided into three categories according to the sensors used.
  • the first type is the use of active sensors, such as laser, radar or ultrasonic, etc.
  • the second type is the use of passive sensors, such as binocular cameras or depth cameras, etc.
  • the third type combines active and passive sensors, for example obstacle avoidance solutions that use a camera together with radar.
  • mobile robots can also achieve obstacle avoidance through artificial-intelligence methods such as genetic algorithms, neural network algorithms and fuzzy algorithms.
  • technically, these approaches are mainly classified into solutions using stereo vision and solutions using deep learning.
  • the embodiments of the present application provide a method, a device, a storage medium, and a mobile robot for detecting obstacles to realize obstacle detection.
  • the specific technical solutions are as follows:
  • an embodiment of the present application provides a method for detecting an obstacle, the method including:
  • An object area and a shadow area are detected in the detection area, where the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area.
  • the object area is marked as a suspected obstacle.
  • the method further includes:
  • mark the object area through the following steps:
  • mark the shaded area through the following steps:
  • the step of merging the first partial areas with overlapping pixels and marking the merged area as an object area, and/or separately marking the first partial areas without pixel overlap as an object area, includes:
  • the step of merging the second partial areas with overlapping pixels and marking the merged area as a shaded area, and/or separately marking the second partial areas without pixel overlap as a shaded area, includes:
  • the step of performing size condition filtering on the object area marked as a suspected obstacle, and marking the object area retained after filtering as an obstacle includes:
  • the object area is marked as the obstacle.
  • the method further includes:
  • the method further includes:
  • an embodiment of the present application provides an obstacle detection device, wherein the device includes:
  • the acquisition module is used to acquire the traveling area image captured by the downward tilt camera device installed in the AGV;
  • a first determining module configured to determine the lower part of the travel area image as a reference area
  • a second determining module configured to determine the remaining part of the travel area image located above the reference area as a detection area
  • the first detection module is configured to detect an object area and a shadow area in the detection area, wherein the object area is an area whose chromaticity difference with the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area;
  • the second detection module is configured to detect whether the positional relationship between the object area and the shadow area matches the light projection direction when it is detected that the object area and the shadow area are simultaneously present in the detection area;
  • the first marking module is configured to mark the object area as a suspected obstacle when the positional relationship between the object area and the shadow area matches the light projection direction.
  • the device further includes:
  • the second marking module is used to perform size condition filtering on the object area marked as the suspected obstacle, and mark the object area remaining after filtering as an obstacle.
  • the first detection module includes:
  • the first dividing unit is used to divide the detection area into several sub-detection areas in columns, wherein some pixels of adjacent sub-detection areas overlap;
  • the second dividing unit is used to divide the reference area into a number of sub-reference areas in columns, wherein some pixels of adjacent sub-reference areas overlap;
  • the first comparison unit is configured to compare the chromaticity difference between each of the sub-detection areas and the corresponding sub-reference area in the same column, and to screen out the first partial areas in the sub-detection areas whose chromaticity difference is greater than the first threshold;
  • the first marking unit is used to merge the first partial areas with pixel overlap and mark the merged area as an object area; and/or separately mark the first partial areas without pixel overlap as an object area.
  • the first detection module further includes:
  • the screening unit is configured to screen out the second partial area in the sub-detection area where the chromaticity difference is less than the first threshold and the brightness is less than the second threshold;
  • the second marking unit is used to merge the second partial areas with overlapping pixels, and mark the merged area as a shaded area; and/or separately mark the second partial areas without pixel overlap as a shaded area.
  • the first marking unit includes:
  • the first marking subunit is used to merge the first partial areas where the pixels overlap, and mark the merged area larger than the third threshold as an object area;
  • the second marking subunit is used for marking the first partial area with no overlapping pixels and an area larger than the third threshold as an object area.
  • the second marking unit includes:
  • the third marking subunit is used for merging the second partial area where the pixels overlap, and marking the area whose merged area is greater than the fourth threshold as a shaded area;
  • the fourth marking subunit is used to individually mark the second partial area where there is no pixel overlap and the area is greater than the fourth threshold as a shaded area.
  • the first marking module includes:
  • a first calculation unit for calculating the ratio of the area of the shadow area to the area of the object area of the suspected obstacle
  • the third marking unit is used for marking the object area as the obstacle when the ratio is greater than a fifth threshold.
  • the device further includes:
  • a third determining module configured to determine the coordinates of the projection point on the left boundary and the coordinates of the projection point on the right boundary of the object area marked as the obstacle;
  • the third detection module is configured to detect whether the AGV's trajectory passes through the position range between the coordinates of the left boundary projection point and the coordinates of the right boundary projection point;
  • the planning module is used to generate, when it is detected that the AGV's travel trajectory passes through the position range between the left boundary projection point coordinates and the right boundary projection point coordinates, a planned obstacle avoidance strategy that makes the AGV's travel trajectory avoid the position range between the left boundary projection point coordinates and the right boundary projection point coordinates.
  • the device further includes:
  • the fourth determining module is used to determine that there is no obstacle in the detection area when it is detected that the shadow area and the object area are not both present in the detection area, or when only one of the shadow area and the object area is found.
  • the embodiments of the present application provide a non-transitory computer-readable storage medium, the non-transitory computer-readable storage medium stores instructions which, when executed by a processor, cause the processor to execute any of the obstacle detection methods described above.
  • an embodiment of the present application provides a mobile robot, including a processor and a downward tilt camera device, where the downward tilt camera device is tilted down at a preset angle at the front end of the mobile robot, and the processor is used to perform any of the obstacle detection methods described above.
  • the preset angle of the downward tilt of the downward tilt camera device is 8° to 12°.
  • the embodiments of the present application also provide a computer program, which when executed by a processor causes the processor to execute any of the obstacle detection methods described above.
  • the image of the traveling area captured by the down-tilt camera device installed on the AGV is acquired, the lower part of the traveling area image is determined as the reference area, and the remaining part of the traveling area image located above the reference area is determined as the detection area.
  • the object area and the shadow area are detected in the detection area.
  • the object area is the area whose chromaticity difference from the reference area exceeds the first threshold, and the shadow area is the area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area.
  • the object area and the shadow area are identified in the image of the traveling area, and the position information of a suspected obstacle is determined efficiently from the object area according to the relationship between the object area, the shadow area and the light projection direction. Obstacle detection is thus realized with a simpler judgment method and higher computational efficiency.
  • FIG. 1 shows a flowchart of a method for detecting obstacles provided by an embodiment of the present application
  • FIG. 2 shows a schematic diagram of determining a reference area of a detection area provided by an embodiment of the present application
  • FIG. 3 shows a schematic diagram of a specific process of a method for detecting obstacles provided by an embodiment of the present application
  • FIG. 4 shows a schematic diagram of the installation and field of view of the monocular camera provided by the embodiment of the present application
  • FIG. 5 shows a specific schematic diagram of a method for dividing a sub-detection area and a sub-reference area provided by an embodiment of the present application
  • Fig. 6a shows a specific schematic diagram of photographing a traveling area image of a suspended object provided by an embodiment of the present application
  • FIG. 6b shows a specific schematic diagram of the position of the AGV and the suspended object corresponding to FIG. 6a provided by the embodiment of the present application;
  • FIG. 6c shows a schematic diagram of merging the first partial area in the traveling area image provided by an embodiment of the present application
  • Fig. 7a shows another specific schematic diagram of photographing a traveling area image of a suspended object provided by an embodiment of the present application
  • FIG. 7b shows a specific schematic diagram of the position of the AGV and the suspended object corresponding to FIG. 7a provided by the embodiment of the present application;
  • Fig. 8 shows a schematic diagram of an obstacle detection device provided by an embodiment of the present application.
  • the embodiment of the present application provides a method for detecting obstacles.
  • obstacles with a certain height will have a certain area of shadow.
  • the light and shadow phenomenon is captured by the downward tilt camera device of the mobile robot in the image of its travel area, and the reference area and the detection area are selected from that image.
  • Determine the shadow area and the object area in the detection area through image analysis, and determine whether the object area is a suspected obstacle according to the positional relationship between the shadow area and the object area.
  • the size conditions of the determined suspected obstacles are screened, and the suspected obstacles that pass the screening are determined as obstacles.
  • the application field of this application is mainly the field of robotics, and the applicable environment is indoor, factory environment or storage environment with uniform illumination.
  • the detection of obstacles is realized.
  • the calculation method is simple and the calculation amount is small. See Figure 1, the detailed steps are as follows:
  • the downward tilt camera device is installed on the AGV (mobile robot), and its camera can tilt downward toward the ground.
  • the downward-tilting camera device takes an image of the mobile robot in the travel area.
  • the travel area generally refers to the area where the mobile robot advances by itself, that is, the possible area where the mobile robot is about to avoid obstacles.
  • the shooting range of the down-tilt camera device is mainly based on the road surface in the forward direction of the mobile robot.
  • When the mobile robot is advancing in the travel area, it collects images of the travel area through the carried down-tilt camera device, obtains the travel area image, and stores the collected travel area image.
  • the downward tilting camera device may be a monocular camera, etc.
  • the traveling area image may be a single photo.
  • the travel area image captured by default includes the ground, and the ground is generally located in the lower part of the travel area image. Therefore, in an alternative implementation, as shown in FIG. 2, an area of a preset size at the bottom of the travel area image is used as the reference area.
  • the detection area is the remaining part of the traveling image above the reference area.
  • the object area is an area whose chromaticity difference from the reference area exceeds a first threshold
  • the shaded area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than the reference area.
  • it is detected in the detection area whether there is a part with a large chromaticity difference from the reference area, wherein when the chromaticity difference is greater than a preset chromaticity difference threshold, it is determined that the chromaticity difference is relatively large. To do this, the color of the detection area is compared with that of the reference area.
  • the color difference between the detection area and the reference area is calculated.
  • the chromaticity difference is compared with the first threshold.
  • the specific value of the first threshold can be preset to distinguish the colors of the detection area from the reference area.
  • the detection area with the chromaticity difference higher than the first threshold is marked as the object area, that is, the part of the detection area with the larger chromaticity difference from the reference area is marked as the object area.
  • the brightness of the area in the detection area whose chromaticity difference is lower than the first threshold is compared with the magnitude of the second threshold.
  • the specific value of the second threshold can be preset to distinguish the brightness of the detection area.
  • the area whose brightness is lower than the second threshold is selected as a shadow area, that is, the part in the detection area that is similar in color to the reference area and whose brightness is lower than the second threshold is marked as a shadow area.
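  • As a concrete illustration of the two threshold tests described above, the sketch below computes object and shadow masks for the detection area in the YUV color space; the threshold values, the derivation of the second threshold from the reference brightness and the use of OpenCV are assumptions for illustration, not values from the application.

```python
import cv2
import numpy as np

def object_and_shadow_masks(detection_bgr, reference_bgr,
                            first_threshold=25.0, second_threshold=None):
    """Mark detection-area pixels whose chromaticity differs strongly from the
    reference area as object candidates, and pixels with similar chromaticity
    but clearly lower brightness as shadow candidates."""
    det = cv2.cvtColor(detection_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)

    ref_uv = ref[..., 1:].reshape(-1, 2).mean(axis=0)  # mean chromaticity of the ground
    if second_threshold is None:
        second_threshold = 0.7 * ref[..., 0].mean()    # e.g. a fraction of the reference brightness

    chroma_diff = np.linalg.norm(det[..., 1:] - ref_uv, axis=-1)
    object_mask = chroma_diff > first_threshold
    shadow_mask = (chroma_diff <= first_threshold) & (det[..., 0] < second_threshold)
    return object_mask, shadow_mask
```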
  • the traveling area image collected while the AGV is moving may correspond to the following situations: only one of the shadow area or the object area is captured, neither the shadow area nor the object area is captured, or the shadow area and the object area are captured at the same time.
  • the AGV can drive normally without obstacle avoidance.
  • the corresponding situation may be that the object casting the shadow area is suspended, in which case the height of the suspended object is generally greater than the height of the AGV, or that the object corresponding to the shadow area is still some distance away from the AGV and has not yet entered the shooting field of view of the down-tilt camera device.
  • in this case, the AGV can drive normally; or the AGV can temporarily reduce its driving speed, and if an obstacle is then detected within a set number of image frames, the obstacle is avoided in time, as sketched below; if no obstacle is detected within the set number of image frames, the normal driving speed is restored.
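  • A minimal sketch of this slow-down-and-confirm behaviour, assuming a per-frame detector callback; the function name, the return labels and the frame count are illustrative assumptions, not part of the application.

```python
def shadow_only_policy(detect_obstacle_in_frame, frames_to_confirm=5):
    """While only a shadow area is visible, drive at reduced speed and watch
    the next few frames; avoid the obstacle if one is confirmed, otherwise
    restore the normal driving speed."""
    for _ in range(frames_to_confirm):
        if detect_obstacle_in_frame():   # per-frame object + shadow detection
            return "avoid_obstacle"      # obstacle confirmed within the window
    return "resume_normal_speed"         # nothing confirmed, speed back up

# Example (hypothetical detector that never confirms an obstacle):
# shadow_only_policy(lambda: False)  ->  "resume_normal_speed"
```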
  • the corresponding situation may be that the object in the object area is too low to cast a shadow area that meets the obstacle conditions.
  • for example, flat objects such as stickers and slogans are not high enough to obstruct the AGV's travel. In this case, it is determined that there are no obstacles in the image of the traveling area, and the AGV can drive normally without obstacle avoidance.
  • When there are object areas and shadow areas in the traveling area image captured by the AGV's downward tilt camera device, it can be considered that there may be obstacles in the AGV traveling area. In this case, step S16 can be performed to further judge the positional relationship between the object area and the shadow area and confirm the possibility of obstacles in the AGV travel area.
  • the light projection direction generally refers to the positional relationship between the object and the corresponding shadow in the light and shadow phenomenon that meets the laws of nature.
  • it mainly detects whether the object area and the shadow area meet the corresponding positional relationship, that is, whether a matching shadow area can be detected, along the light projection direction, within a preset range around the object area, so as to determine whether the shadow area corresponds to the object area.
  • the preset range refers to the range of light and shadow phenomena that meet the natural laws around the object area, for example, the corresponding shadow area can be detected in the preset range below the suspended object area.
  • in addition, if the object area is on the current driving route of the AGV, it is determined that the positional relationship between the object area and the shadow area matches the light projection direction; otherwise, it is determined that the positional relationship between the object area and the shadow area does not match the light projection direction.
  • when the positional relationship between the object area and the shadow area detected in the traveling area image matches the light projection direction, it can be determined that the object area corresponds to the shadow area.
  • the object area is then marked as a suspected obstacle. In the embodiments of the present application, marking means logically determining the object area to be a suspected obstacle or an obstacle.
  • the traveling area image captured by the downward tilt camera device installed on the AGV in the traveling area is acquired, and the reference area and the detection area are detected in the traveling area image.
  • the object area and the shadow area are marked by the difference of color, and the suspected obstacle is finally determined.
  • the travel area images are analyzed, and the three-dimensional information of the object is judged efficiently. Obstacle detection is realized, the calculation amount is reduced, the calculation efficiency is high, and the cost is relatively low.
  • the amount of calculation is small, and it can be carried on a platform with weak computing power for real-time processing.
  • the down-tilt camera device can be a common camera that collects visible light images.
  • the AGV does not need to be equipped with expensive devices such as laser sensors or binocular cameras, which can reduce hardware costs.
  • the method for detecting obstacles in the embodiments of the present application mainly uses the light and shadow phenomenon of obstacles under illumination conditions to analyze the collected target pictures.
  • FIG. 3 is a schematic diagram of a specific process of a method for detecting an obstacle provided by an embodiment of this application; the detailed process is as follows:
  • an image of the travel area is captured in the travel area of the AGV by using a downward-inclining camera device provided on the AGV.
  • the down-tilting camera installed in front of the mobile robot may be a monocular camera.
  • the camera of the monocular camera is tilted downwards toward the ground, so that the image of the traveling area captured is mainly the traveling road in the front.
  • the down-tilting camera device may specifically be a monocular camera, and the specific values of the angle of the monocular camera and the angle of the field of view can be customized according to actual conditions.
  • FIG. 4 is a schematic diagram of the installation and field of view of the downward tilting camera device provided in this embodiment of the application.
  • the downward tilt angle of the downward tilt camera device can be the angle between the central axis of the downward tilt camera lens and the horizontal direction, where the horizontal direction is the horizontal direction in the world coordinate system; the downward tilt angle of the downward tilt camera device can also be Is the angle formed by the vertical line of the central axis in the vertical plane and the direction of the ground normal vector.
  • the vertical field of view of the down-tilt camera device can be 30°, forming a relatively good camera field of view.
  • the angle and field of view of the down-tilt camera device can also be adjusted to other specific values, which will not be repeated here.
  • S302 Determine a reference area and a detection area in the captured travel area image, and divide the detection area and the reference area into several sub-detection areas and several sub-reference areas, respectively.
  • the lower part of the travel area image may be determined as the reference area, and the remaining part of the travel area image located above the reference area may be determined as the detection area.
  • the detection area is divided into several sub-detection areas according to columns, where some pixels of adjacent sub-detection areas overlap; the reference area is divided into several sub-reference areas according to columns, among which, Some pixels of adjacent sub-reference areas overlap.
  • the statement that some pixels of a divided sub-detection area overlap with other sub-detection areas means that each divided sub-detection area partially overlaps its adjacent sub-detection areas in the image.
  • the detection area and the reference area can be divided by dividing different pixel columns. Multiple pixel columns are regarded as a region column, and there is overlap of pixel columns between adjacent region columns.
  • the number of overlapping pixel columns can be customized according to actual conditions, for example, it can be set to 1 pixel column, 2 pixel columns, or 4 pixel columns.
  • All pixels in the detection area belonging to the same area column are regarded as one sub-detection area, and there is overlap of pixels between adjacent sub-detection areas.
  • the division is carried out according to the above division rules until the entire detection area is divided, and several sub detection areas are obtained, and there is overlap between any two adjacent sub detection areas.
  • All pixels in the reference area belonging to the same area column are taken as one sub-reference area, and there is overlap of pixels between adjacent sub-reference areas.
  • the specific method for dividing the sub-reference area can refer to the above-mentioned method for dividing the sub-detection area, which will not be repeated here.
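  • As a concrete illustration of this column-wise division, the sketch below cuts an area into vertical strips whose neighbours share a configurable number of pixel columns; the strip width, the overlap of 4 pixel columns and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def split_into_column_strips(area: np.ndarray, strip_width: int = 32, overlap: int = 4):
    """Divide an image area into vertical sub-areas (strips) so that each pair of
    adjacent strips shares `overlap` pixel columns. Returns (start_column, strip)
    pairs; applying the same call to the detection area and the reference area
    keeps sub-detection area i aligned with sub-reference area i of the same column."""
    strips, start = [], 0
    width = area.shape[1]
    step = max(1, strip_width - overlap)
    while start < width:
        end = min(start + strip_width, width)
        strips.append((start, area[:, start:end]))
        if end == width:
            break
        start += step
    return strips
```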
  • the area in the sub-detection area that meets the condition that the chromaticity difference is greater than the first threshold is called the first partial area; and the area in the sub-detection area that meets the condition that the chromaticity difference is less than the first threshold and the brightness is less than the second threshold is called the second partial area.
  • the reference area is divided as the detection area to obtain several sub-reference areas.
  • the color of the sub-detection area is compared with the corresponding sub-reference area of the same column.
  • the column here specifically refers to the area column, that is, the sub-reference area corresponding to a sub-detection area is the sub-reference area belonging to the same area column as that sub-detection area.
  • the color in the sub-detection area and the color in the corresponding sub-reference area can have multiple representation formats, such as the HSV (hue, saturation, value) color space or the YUV color space, where Y represents brightness and U and V represent chromaticity. In an optional embodiment, the chromaticity difference between the sub-detection area and the sub-reference area in the same column can be obtained by calculating the Euclidean distance.
  • the chromaticity difference of each pixel in the sub-detection area can be compared with the first threshold, and when the chromaticity difference of a pixel is less than the first threshold, the brightness of that pixel is compared with the second threshold. Based on the above comparison results, if there are pixels in the sub-detection area with a chromaticity difference higher than the first threshold, then the pixels in the sub-detection area whose chromaticity difference is higher than the first threshold and whose positions are adjacent are connected to obtain the first partial area.
  • the value of the second threshold is determined according to the brightness of the corresponding sub-reference area. For example, the brightness of the sub-reference area may be subtracted by a preset value or multiplied by a scale factor less than 1 as the second threshold.
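  • The following fragment is a minimal sketch of the two computations just described: the Euclidean chromaticity distance to the corresponding sub-reference area, and the derivation of the second (brightness) threshold from the sub-reference brightness; the offset and factor values are illustrative assumptions.

```python
import numpy as np

def chroma_distance(pixel_uv, reference_uv):
    """Euclidean distance between a pixel's chromaticity (U, V) and the mean
    chromaticity of the sub-reference area in the same column."""
    return float(np.linalg.norm(np.asarray(pixel_uv, dtype=np.float32)
                                - np.asarray(reference_uv, dtype=np.float32)))

def second_threshold_from_reference(reference_brightness, offset=None, factor=0.7):
    """Two of the suggested derivations: subtract a preset value from the
    sub-reference brightness, or multiply it by a scale factor smaller than 1."""
    if offset is not None:
        return reference_brightness - offset
    return factor * reference_brightness
```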
  • Integrating the screening results of each sub-detection area, there may be the following results: 1) no sub-detection area yields a first or second partial area; 2) no sub-detection area yields a first partial area, but at least one sub-detection area yields a second partial area; 3) at least one sub-detection area yields a first partial area, but no sub-detection area yields a second partial area; 4) at least one sub-detection area yields a first partial area and at least one sub-detection area yields a second partial area.
  • If no sub-detection area included in the traveling area image yields a first partial area or a second partial area, there are no object areas or shadow areas in the traveling area image, and the corresponding actual situation may be that there are no obstacles within a certain range in front of the AGV. At this time, corresponding to the situations listed in step S15, it can be considered that there are no obstacles ahead and the AGV can pass smoothly.
  • If no sub-detection area included in the image of the traveling area yields a first partial area, but at least one sub-detection area yields a second partial area, there may be a shadow area but no object area in the traveling area image.
  • the corresponding actual situation may be that the object corresponding to the shadow area is in a suspended state, or it may be that the object corresponding to the shadow area is still a certain distance from the AGV and has not entered the shooting field of view of the down-tilt camera device.
  • at this time, corresponding to the situations listed in step S15, it can be considered that there are no obstacles ahead and the AGV can pass smoothly.
  • FIG. 6a is a specific schematic diagram in which only the shadow area is detected in the traveling area image provided by this embodiment of the application. At this time, the actual situation of the AGV and the object may be as shown in FIG. 6b.
  • as the AGV advances, the area of the shadow region in the field of view of the down-tilt camera device becomes larger and larger. If the obstacle is suspended, the down-tilt camera device cannot see the obstacle corresponding to the shadow until the AGV passes under it. The object corresponding to the shadow area can be determined as an invalid obstacle, and the AGV can continue forward. Therefore, when only the shadow area is detected in the detection area, it can be determined that there is no obstacle, and the AGV keeps moving forward.
  • If no sub-detection area included in the travel area image yields a second partial area, but at least one sub-detection area yields a first partial area, there may be object areas but no shadow areas in the travel area image.
  • the corresponding actual situation may be that the object in the object area is too low to cast a shadow area that meets the obstacle conditions, for example flat objects such as stickers and slogans, whose height will not obstruct the driving of the AGV. At this time, it can be determined that there are no obstacles in the traveling area image, and this process ends.
  • the object area can be marked by the following steps: if only one sub-detection area yields first partial areas, each first partial area screened out of that sub-detection area can be separately marked as an object area. If several sub-detection areas yield first partial areas, the first partial areas with overlapping pixels can be merged and the merged area marked as an object area, while a first partial area that does not share pixels with first partial areas from other sub-detection areas is separately marked as an object area.
  • the first partial area 1 and the first partial area 2 are screened out in the sub-detection area 1, the first partial area 3 is screened out in the sub-detection area 2, and the first partial area 4 is screened out in the sub-detection area 3;
  • the area of each object area can be calculated before or after the object area is marked, and the object area with a smaller area can be eliminated.
  • the corresponding situation may be that the actual object corresponding to the object area has a small area and low height, which will not cause obstacles to the driving of the AGV, or it may correspond to the object area The actual object is still far away from the AGV at this time, which will not affect the driving of the AGV for the time being; here, the object area with a small area is eliminated, which can reduce the subsequent processing amount.
  • An implementation method is as follows: merge the first partial areas with overlapping pixels and mark a merged region whose area is greater than the third threshold as an object area; and/or, individually mark a first partial area that has no overlapping pixels and whose area is greater than the third threshold as an object area.
  • the object area is screened by setting the third threshold to determine the object area that may cause obstacles to the driving of the AGV.
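  • A minimal sketch of this merge-and-filter step is shown below; each partial area is represented as a set of pixel coordinates in full-image coordinates, and the threshold value is an illustrative assumption. Shadow areas can be processed the same way with the fourth threshold.

```python
def merge_and_filter_areas(partial_areas, area_threshold=200):
    """`partial_areas` is a list of sets of (row, col) pixel coordinates, one set
    per partial area screened out of a sub-detection strip, expressed in
    full-image coordinates. Partial areas that share pixels (because adjacent
    strips overlap) are merged; merged or stand-alone areas whose pixel count
    exceeds `area_threshold` are kept, the rest are discarded."""
    merged = []
    for area in partial_areas:
        area = set(area)
        keep = []
        for existing in merged:
            if existing & area:       # overlapping pixels: absorb into current area
                area |= existing
            else:
                keep.append(existing)
        keep.append(area)
        merged = keep
    return [a for a in merged if len(a) > area_threshold]
```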
  • the shaded area can be marked by the following steps: after the second partial areas in the sub-detection areas whose chromaticity difference is less than the first threshold and whose brightness is less than the second threshold are screened out, the second partial areas with overlapping pixels can be merged and the merged area marked as a shaded area, and/or the second partial areas without pixel overlap can be separately marked as shaded areas.
  • similarly, the area can be calculated to eliminate shadow areas that do not meet the conditions. For example, merge the second partial areas with overlapping pixels, calculate the area of the merged region, and mark regions whose merged area is greater than the fourth threshold as shaded areas; and/or, individually mark a second partial area that has no overlapping pixels and whose area is greater than the fourth threshold as a shaded area. For example, after low objects such as stickers, landmarks or cardboard are detected as object areas through the aforementioned color detection, their height is very small, so the shadow area of cardboard, stickers or landmarks is very small or no shadow can be detected at all. If the fourth threshold is not met, the shadow area is filtered out, which removes objects such as stickers that are lower than the chassis of the robot and do not affect its travel.
  • for each object area, it is determined whether there is a corresponding shadow area.
  • specifically, a corresponding object area should exist within a preset range around the shadow area, that is, the positional relationship between the object area and the shadow area should conform to the light and shadow phenomenon observed in nature. A real obstacle and its corresponding shadow are either connected in the traveling area image or, if not connected, should exist at the same time within the space of the preset range.
  • the specific size of the preset range may be determined by factors such as the shooting range of the down-tilt camera set on the robot and the height of the robot.
  • for example, the preset range may be the range of the travel area image captured by the down-tilt camera device over the traveling surface of the AGV; or the preset range may be the area of the AGV's travel area mapped into the travel area image. For example, the AGV's current travel route together with the width and height of the AGV can be mapped into the image, and the range of the resulting mapping area is the preset range.
  • in addition, if the object area is on the current driving route of the AGV, it is determined that the positional relationship between the object area and the shadow area matches the light projection direction; otherwise, it is determined that the positional relationship between the object area and the shadow area does not match the light projection direction.
  • the object area corresponding to the shadow area is marked as a suspected obstacle.
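  • A highly simplified sketch of such a positional check between an object area and a shadow area is given below, using bounding boxes in image coordinates; the assumed geometry (roughly overhead indoor lighting, with the shadow expected close below and horizontally overlapping the object area) and the pixel gap are illustrative assumptions, not the application's exact rule.

```python
def matches_light_projection(object_box, shadow_box, max_gap=20):
    """Check whether a shadow area lies within a preset range around an object
    area in a way consistent with roughly overhead, uniform indoor lighting.
    Boxes are (x_min, y_min, x_max, y_max) in image coordinates, y growing
    downwards; the shadow is expected to overlap the object horizontally and
    to start at, or at most `max_gap` pixels below, the bottom of the object."""
    ox0, oy0, ox1, oy1 = object_box
    sx0, sy0, sx1, sy1 = shadow_box
    horizontal_overlap = min(ox1, sx1) > max(ox0, sx0)
    vertically_adjacent = (sy0 <= oy1 <= sy1) or (0 <= sy0 - oy1 <= max_gap)
    return horizontal_overlap and vertically_adjacent
```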
  • when a suspended object may affect the normal driving of the AGV, the shadow area and the object area can generally both be detected in the captured travel area image.
  • the downward tilt camera device can detect the shadow area and the corresponding object area.
  • the actual situation of the AGV and the obstacle may be as shown in Figure 7b.
  • the object area shown in FIG. 7a can be marked as a suspected obstacle.
  • S310 Perform size condition filtering on the object area marked as a suspected obstacle, and mark the object area retained after filtering as an obstacle.
  • the size condition filtering of the suspected obstacle mainly calculates the area ratio between the suspected obstacle and the corresponding shadow area to determine whether the suspected obstacle meets the size condition under which it would hinder the AGV.
  • the ratio of the area of the object area corresponding to the suspected obstacle to the area of the shadow area is calculated, and the obtained ratio is compared with a preset fifth threshold.
  • the fifth threshold can be customized according to actual conditions.
  • After marking the object area, in order to avoid falsely detecting flat objects such as stickers and landmarks as obstacles, height information of the object area is needed to filter out objects below the robot chassis that will not affect the travel of the robot. To avoid the complicated calculation of three-dimensional obstacle information, and based on the phenomenon that objects with a certain height cast shadows of a certain size, the calculation of three-dimensional information is replaced by detecting the shadows of obstacles. In an embodiment of the present application, it is therefore required that a corresponding shadow area can be detected before an object area is marked as a suspected obstacle.
  • the downward tilt camera device can see the object area corresponding to the shadow area ahead while the AGV advances. At that point the ratio of the area of the suspected obstacle to the area of the shadow area is still less than the set fifth threshold, and the robot can continue for a certain distance. For an obstacle that affects the robot's passage, within a certain distance the ratio of the obstacle area to the shadow area in the camera's field of view will exceed the set fifth threshold; at this time, it is considered that an obstacle affecting the robot's passage has been detected ahead.
  • the corresponding object area is an obstacle, and the robot activates the corresponding obstacle avoidance strategy.
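  • A minimal sketch of the size-condition filter is given below, following the formulation used in the claims (the ratio of the shadow area to the object area compared with the fifth threshold); the threshold value is an illustrative assumption.

```python
def passes_size_condition(object_area_px: int, shadow_area_px: int,
                          fifth_threshold: float = 0.3) -> bool:
    """A suspected obstacle is kept as an obstacle when the ratio of its shadow
    area to its object area (both in pixels) exceeds the fifth threshold; a very
    small shadow relative to the object (stickers, floor signs) fails the test."""
    if object_area_px <= 0:
        return False
    return (shadow_area_px / object_area_px) > fifth_threshold
```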
  • S311 Detect whether the obstacle affects the movement of the AGV, and implement obstacle avoidance for the AGV.
  • specifically, the coordinates of the left boundary projection point and the right boundary projection point of the object area marked as the obstacle are determined; it is detected whether the AGV's trajectory passes through the position range between the left boundary projection point coordinates and the right boundary projection point coordinates; and when it is detected that the AGV's trajectory passes through that position range, an obstacle avoidance strategy is generated that plans a travel trajectory for the AGV avoiding the position range between the left boundary projection point coordinates and the right boundary projection point coordinates.
  • in the image, up and down correspond to the traveling direction of the robot, and left and right correspond to the obstacle avoidance direction of the robot, so the left boundary projection point coordinates and the right boundary projection point coordinates are calculated here.
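  • The check described above can be sketched as follows, assuming the boundary projection points and the planned trajectory are expressed along the lateral axis of the travel surface; the function and parameter names are illustrative.

```python
def avoidance_needed(left_projection_x: float, right_projection_x: float,
                     planned_trajectory_xs) -> bool:
    """Return True when the AGV's planned trajectory passes through the interval
    between the left and right boundary projection points of an obstacle, in
    which case a new trajectory avoiding that interval should be planned."""
    lo, hi = sorted((left_projection_x, right_projection_x))
    return any(lo <= x <= hi for x in planned_trajectory_xs)
```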
  • the embodiments of the present application utilize the light and shadow phenomenon of the real world to efficiently judge the three-dimensional information of an object from a single picture, without explicitly computing that information, so as to achieve the obstacle avoidance effect for the mobile robot.
  • an embodiment of the present application also provides an obstacle detection device, where, as shown in FIG. 8, the device includes:
  • the acquiring module 801 is used to acquire the traveling area image captured by the downward tilt camera device installed in the AGV;
  • the first determining module 802 is configured to determine the lower part of the travel area image as a reference area
  • the second determining module 803 is configured to determine the remaining part of the travel area image located above the reference area as the detection area;
  • the first detection module 804 is configured to detect the object area and the shadow area in the detection area, where the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area;
  • the second detection module 805 is configured to detect whether the positional relationship between the object area and the shadow area matches the light projection direction when it is detected that the object area and the shadow area exist in the detection area at the same time;
  • the first marking module 806 is used to mark the object area as a suspected obstacle when the positional relationship between the object area and the shadow area matches the light projection direction.
  • for the specific functions and interaction modes of the acquiring module 801, the first determining module 802, the second determining module 803, the first detecting module 804, the second detecting module 805, and the first marking module 806, refer to the description of the embodiment corresponding to FIG. 1, which will not be repeated here.
  • the device further includes:
  • the second marking module 807 is configured to perform size condition filtering on the object area marked as the suspected obstacle, and mark the object area remaining after filtering as an obstacle.
  • the first detection module 804 includes:
  • the first dividing unit is used to divide the detection area into several sub-detection areas in columns, wherein some pixels of adjacent sub-detection areas overlap;
  • the second dividing unit is used to divide the reference area into a number of sub-reference areas in columns, wherein some pixels of adjacent sub-reference areas overlap;
  • a first comparison unit configured to compare the chromaticity difference of each sub-detection area with the corresponding sub-reference area in the same column, and screen out the first partial area in the sub-detection area whose chromaticity difference is greater than the first threshold;
  • the first marking unit is used to merge the first partial areas with pixel overlap and mark the merged area as an object area; and/or separately mark the first partial areas without pixel overlap as an object area.
  • the first detection module 804 further includes:
  • the screening unit is used to screen out the second partial area where the chromaticity difference in the sub-detection area is less than the first threshold and the brightness is less than the second threshold;
  • the second marking unit is used to merge the second partial areas with overlapping pixels, and mark the merged area as a shaded area; and/or separately mark the second partial areas without pixel overlap as a shaded area.
  • the first marking unit includes:
  • the first marking subunit is used to merge the first partial areas where the pixels overlap, and mark the merged area larger than the third threshold as an object area;
  • the second marking subunit is used for marking the first partial area with no overlapping pixels and an area larger than the third threshold as an object area.
  • the second marking unit includes:
  • the third marking subunit is used for merging the second partial area where the pixels overlap, and marking the area whose merged area is greater than the fourth threshold as a shaded area;
  • the fourth marking subunit is used to individually mark the second partial area where there is no pixel overlap and the area is greater than the fourth threshold as a shaded area.
  • the first marking module 806 includes:
  • the first calculation unit is configured to calculate the ratio of the area of the shadow area to the area of the object area of the suspected obstacle;
  • the third marking unit is used to mark the object area as the obstacle when the ratio is greater than the fifth threshold.
  • the device further includes:
  • the third determining module 808 is configured to determine the coordinates of the projection point on the left boundary and the coordinates of the projection point on the right boundary of the object area marked as the obstacle in the reference area;
  • the third detection module 809 is configured to detect whether the travel trajectory of the AGV passes through the position range between the coordinates of the projection point on the left boundary and the coordinates of the projection point on the right boundary;
  • the planning module 810 is configured to generate, when it is detected that the AGV's travel trajectory passes through the position range between the left boundary projection point coordinates and the right boundary projection point coordinates, a planned obstacle avoidance strategy that makes the AGV's travel trajectory avoid the position range between the left boundary projection point coordinates and the right boundary projection point coordinates.
  • the device further includes:
  • the fourth determining module 811 is configured to determine that there is no obstacle in the detection area when it is detected that the shadow area and the object area are not both present in the detection area, or when only one of the shadow area and the object area is found.
  • the embodiment of the present application also provides a non-transitory computer-readable storage medium, which stores instructions which, when executed by a processor, cause the processor to execute any of the aforementioned obstacle detection methods.
  • the storage medium can be a general storage medium, such as a removable disk, a hard disk, and FLASH.
  • the computer program on the storage medium can execute the above-mentioned obstacle detection method, so as to analyze the captured travel area image, efficiently judge the three-dimensional information of obstacles, and achieve the effect of reducing the amount of calculation.
  • Another embodiment of the present application further provides a mobile robot, including a processor and a downward tilt camera device, the downward tilt camera device tilts down at a preset angle at the front end of the mobile robot, and the processor is used to perform any of the foregoing obstacle detection Methods.
  • the preset angle of the downward tilt of the downward tilt camera device is 8° to 12°.
  • the preset angle of the downward tilt of the downward tilt camera device is the angle between the central axis of the lens of the downward tilt camera device and the horizontal direction. It can also be expressed by the angle between the vertical line of the central axis in the vertical plane and the vertical direction.
  • the horizontal direction and the vertical direction here are respectively the horizontal direction and the vertical direction in the world coordinate system.
  • the preset angle of the downward tilt of the downward tilt camera device may be 10 degrees.
  • An embodiment of the present application also provides a computer program, which when executed by a processor causes the processor to execute any of the foregoing obstacle detection methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses an obstacle detection method, a device, a storage medium, and a mobile robot. The method specifically comprises: acquiring a traveling region image captured by a downward-tilted image capturing device mounted on an AGV; determining a lower portion of the traveling region image as a reference region; determining a remaining portion above the lower portion in the traveling region image as a detection region; performing detection on the detection region for an object region and a shadow region; if the object region and the shadow region are both detected as present in the detection region, detecting whether a positional relationship between the object region and the shadow region matches a light projection direction; and if so, marking the object region as a suspected obstacle. Embodiments of the present application achieve obstacle detection by analyzing an acquired traveling region image, and determine position information of a three-dimensional object efficiently, thereby reducing computation or training based on a large amount of data, and accordingly achieving high computational efficiency.

Description

Method, device, storage medium and mobile robot for detecting obstacles
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on June 3, 2019, with application number 201910476765.5 and the invention title "Method, device, storage medium and mobile robot for detecting obstacles", the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of robotics, and in particular to a method, device, storage medium, and mobile robot for detecting obstacles.
Background art
An important sign of the intelligentization of mobile robots (Automated Guided Vehicle, AGV) is autonomous navigation, and obstacle avoidance is a main research topic in autonomous navigation. Obstacle avoidance generally means that the mobile robot, based on the collected obstacle status information, avoids obstacles effectively according to a certain method and finally reaches the target point, where the obstacle status information is information about static and/or dynamic objects that hinder the AGV's passage, sensed by sensors while it is travelling. Mobile robots usually use sensors to obtain information about the surrounding environment, including the size, shape, and location of obstacles. At present, the mainstream perception and obstacle avoidance technical solutions of mobile robots are mainly divided into three categories according to the sensors used. The first type uses active sensors, such as laser, radar or ultrasonic sensors; the second type uses passive sensors, such as binocular cameras or depth cameras; the third type combines active and passive sensors, for example obstacle avoidance solutions that use a camera together with radar. In addition, mobile robots can also achieve obstacle avoidance through artificial-intelligence methods such as genetic algorithms, neural network algorithms and fuzzy algorithms; technically, these approaches are mainly classified into solutions using stereo vision and solutions using deep learning.
Summary of the invention
The embodiments of the present application provide a method, a device, a storage medium, and a mobile robot for detecting obstacles to realize obstacle detection. The specific technical solutions are as follows:
In the first aspect, an embodiment of the present application provides a method for detecting an obstacle, the method including:
acquiring a travel area image captured by a downward tilt camera device installed on the AGV;
determining the lower part of the ground area in the travel area image as a reference area;
determining the remaining part of the travel area image located above the reference area as a detection area;
detecting an object area and a shadow area in the detection area, where the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area;
when it is detected that the object area and the shadow area are simultaneously present in the detection area, detecting whether the positional relationship between the object area and the shadow area matches the light projection direction;
when the positional relationship between the object area and the shadow area matches the light projection direction, marking the object area as a suspected obstacle.
Optionally, after the step of marking the object area as a suspected obstacle, the method further includes:
performing size condition filtering on the object area marked as the suspected obstacle, and marking the object area remaining after filtering as an obstacle.
Optionally, the object area is marked through the following steps:
dividing the detection area by columns into several sub-detection areas, wherein some pixels of adjacent sub-detection areas overlap;
dividing the reference area by columns into several sub-reference areas, wherein some pixels of adjacent sub-reference areas overlap;
comparing the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, and screening out first partial areas in the sub-detection areas whose chromaticity difference is greater than the first threshold; and
merging first partial areas that have overlapping pixels and marking the merged area as one object area; and/or marking a first partial area that has no overlapping pixels individually as one object area.
Optionally, the shadow area is marked through the following steps:
screening out second partial areas in the sub-detection areas whose chromaticity difference is less than the first threshold and whose brightness is less than a second threshold; and
merging second partial areas that have overlapping pixels and marking the merged area as one shadow area; and/or marking a second partial area that has no overlapping pixels individually as one shadow area.
Optionally, the step of merging first partial areas that have overlapping pixels and marking the merged area as one object area, and/or marking a first partial area that has no overlapping pixels individually as one object area, includes:
merging first partial areas that have overlapping pixels, and marking a merged area whose area is greater than a third threshold as one object area;
and/or marking a first partial area that has no overlapping pixels and whose area is greater than the third threshold individually as one object area.
Optionally, the step of merging second partial areas that have overlapping pixels and marking the merged area as one shadow area, and/or marking a second partial area that has no overlapping pixels individually as one shadow area, includes:
merging second partial areas that have overlapping pixels, and marking a merged area whose area is greater than a fourth threshold as one shadow area;
and/or marking a second partial area that has no overlapping pixels and whose area is greater than the fourth threshold individually as one shadow area.
Optionally, the step of performing size condition filtering on the object area marked as a suspected obstacle and marking the object area retained after filtering as an obstacle includes:
calculating a ratio of the area of the shadow area to the area of the object area of the suspected obstacle; and
when the ratio is greater than a fifth threshold, marking the object area as the obstacle.
Optionally, after the step of marking the object area retained after filtering as an obstacle, the method further includes:
determining left-boundary projection point coordinates and right-boundary projection point coordinates, in the reference area, of the object area marked as the obstacle;
detecting whether the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates; and
when it is detected that the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates, generating for the AGV a planned obstacle avoidance strategy that makes the travel trajectory avoid the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates.
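For illustration only, this trajectory check can be sketched as follows. The sketch assumes that the obstacle's left and right boundary projection points and the sampled trajectory have already been expressed as x coordinates in a common coordinate frame; the function names, the safety margin, and the sampling of the trajectory are hypothetical and not taken from the application.

```python
from typing import Iterable, List, Tuple

def trajectory_crosses_obstacle(trajectory_x: Iterable[float],
                                left_proj_x: float,
                                right_proj_x: float,
                                margin: float = 0.0) -> bool:
    """Return True if any sampled point of the planned trajectory falls inside
    the range spanned by the obstacle's left/right boundary projection points.
    `margin` is an assumed safety allowance, not a value from the application."""
    lo, hi = sorted((left_proj_x, right_proj_x))
    return any(lo - margin <= x <= hi + margin for x in trajectory_x)

def needs_avoidance(trajectory_x: Iterable[float],
                    obstacles: Iterable[Tuple[float, float]]) -> bool:
    """Check the trajectory against every marked obstacle; if it crosses any
    projected range, an avoidance strategy should be generated for the AGV."""
    xs: List[float] = list(trajectory_x)
    return any(trajectory_crosses_obstacle(xs, left, right)
               for left, right in obstacles)
```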
Optionally, after the step of detecting the object area and the shadow area in the detection area, the method further includes:
when it is detected that neither the shadow area nor the object area is present in the detection area, or when only one of the shadow area and the object area is found, determining that there is no obstacle in the detection area.
In a second aspect, an embodiment of the present application provides a device for detecting obstacles, wherein the device includes:
an acquisition module, configured to acquire a travel area image captured by a downward-tilted camera device mounted on an AGV;
a first determining module, configured to determine a lower part of the travel area image as a reference area;
a second determining module, configured to determine the remaining part of the travel area image located above the reference area as a detection area;
a first detection module, configured to detect an object area and a shadow area in the detection area, wherein the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area;
a second detection module, configured to detect, when it is detected that the object area and the shadow area are both present in the detection area, whether the positional relationship between the object area and the shadow area matches a light projection direction; and
a first marking module, configured to mark the object area as a suspected obstacle when the positional relationship between the object area and the shadow area matches the light projection direction.
Optionally, the device further includes:
a second marking module, configured to perform size condition filtering on the object area marked as the suspected obstacle, and mark the object area remaining after filtering as an obstacle.
Optionally, the first detection module includes:
a first dividing unit, configured to divide the detection area by columns into several sub-detection areas, wherein some pixels of adjacent sub-detection areas overlap;
a second dividing unit, configured to divide the reference area by columns into several sub-reference areas, wherein some pixels of adjacent sub-reference areas overlap;
a first comparison unit, configured to compare the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, and screen out first partial areas in the sub-detection areas whose chromaticity difference is greater than the first threshold; and
a first marking unit, configured to merge first partial areas that have overlapping pixels and mark the merged area as one object area; and/or mark a first partial area that has no overlapping pixels individually as one object area.
Optionally, the first detection module further includes:
a screening unit, configured to screen out second partial areas in the sub-detection areas whose chromaticity difference is less than the first threshold and whose brightness is less than a second threshold; and
a second marking unit, configured to merge second partial areas that have overlapping pixels and mark the merged area as one shadow area; and/or mark a second partial area that has no overlapping pixels individually as one shadow area.
Optionally, the first marking unit includes:
a first marking subunit, configured to merge first partial areas that have overlapping pixels, and mark a merged area whose area is greater than a third threshold as one object area; and
a second marking subunit, configured to mark a first partial area that has no overlapping pixels and whose area is greater than the third threshold individually as one object area.
Optionally, the second marking unit includes:
a third marking subunit, configured to merge second partial areas that have overlapping pixels, and mark a merged area whose area is greater than a fourth threshold as one shadow area; and
a fourth marking subunit, configured to mark a second partial area that has no overlapping pixels and whose area is greater than the fourth threshold individually as one shadow area.
Optionally, the first marking module includes:
a first calculation unit, configured to calculate a ratio of the area of the shadow area to the area of the object area of the suspected obstacle; and
a third marking unit, configured to mark the object area as the obstacle when the ratio is greater than a fifth threshold.
Optionally, the device further includes:
a third determining module, configured to determine left-boundary projection point coordinates and right-boundary projection point coordinates, in the reference area, of the object area marked as the obstacle;
a third detection module, configured to detect whether the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates; and
a planning module, configured to generate, when it is detected that the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates, a planned obstacle avoidance strategy for the AGV that makes the travel trajectory avoid the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates.
Optionally, the device further includes:
a fourth determining module, configured to determine that there is no obstacle in the detection area when it is detected that the shadow area and the object area are not both present in the detection area, or when only one of the shadow area and the object area is found.
In a third aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform any one of the obstacle detection methods described above.
In a fourth aspect, an embodiment of the present application provides a mobile robot, including a processor and a downward-tilted camera device, wherein the downward-tilted camera device is tilted downward at a preset angle at the front end of the mobile robot, and the processor is configured to perform any one of the obstacle detection methods described above.
Optionally, the preset downward tilt angle of the downward-tilted camera device is 8° to 12°.
In a fifth aspect, an embodiment of the present application further provides a computer program that, when executed by a processor, causes the processor to perform any one of the obstacle detection methods described above.
As can be seen from the above, based on the above embodiments, a travel area image captured by the downward-tilted camera device mounted on the AGV is first acquired, the lower part of the travel area image is determined as the reference area, and the remaining part of the travel area image located above the reference area is determined as the detection area. Next, an object area and a shadow area are detected in the detection area, where the object area is an area whose chromaticity difference from the reference area exceeds the first threshold, and the shadow area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area. Then, when it is detected that the object area and the shadow area are both present in the detection area, it is detected whether the positional relationship between the object area and the shadow area matches the light projection direction. Finally, when the positional relationship between the object area and the shadow area matches the light projection direction, the object area is marked as a suspected obstacle. By identifying the object area and the shadow area in the travel area image and, according to the relationship between them together with the light projection direction, efficiently determining the position information of a suspected obstacle within the object area, the embodiments of the present application realize obstacle detection with a relatively simple decision procedure and high computational efficiency.
Description of the drawings
In order to explain the embodiments of the present application and the technical solutions of the prior art more clearly, the drawings required for the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 shows a flowchart of a method for detecting obstacles provided by an embodiment of the present application;
Fig. 2 shows a schematic diagram of determining the detection area and the reference area provided by an embodiment of the present application;
Fig. 3 shows a schematic diagram of a specific flow of a method for detecting obstacles provided by an embodiment of the present application;
Fig. 4 shows a schematic diagram of the installation and field of view of a monocular camera provided by an embodiment of the present application;
Fig. 5 shows a specific schematic diagram of a method for dividing sub-detection areas and sub-reference areas provided by an embodiment of the present application;
Fig. 6a shows a specific schematic diagram of a captured travel area image of a suspended object provided by an embodiment of the present application;
Fig. 6b shows a specific schematic diagram of the positions of the AGV and the suspended object corresponding to Fig. 6a provided by an embodiment of the present application;
Fig. 6c shows a schematic diagram of merging first partial areas in a travel area image provided by an embodiment of the present application;
Fig. 7a shows another specific schematic diagram of a captured travel area image of a suspended object provided by an embodiment of the present application;
Fig. 7b shows a specific schematic diagram of the positions of the AGV and the suspended object corresponding to Fig. 7a provided by an embodiment of the present application;
Fig. 8 shows a schematic diagram of a device for detecting obstacles provided by an embodiment of the present application.
Detailed description of the embodiments
In order to make the purpose, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
An embodiment of the present application provides a method for detecting obstacles that exploits the light-and-shadow phenomenon whereby, in an indoor environment with uniform illumination, an obstacle of a certain height casts a shadow of a certain area. A downward-tilted camera device carried by the mobile robot captures a travel area image of the robot's travel area, and a reference area and a detection area are selected from the travel area image. Through image analysis, a shadow area and an object area are determined in the detection area, and whether the object area is a suspected obstacle is judged according to the positional relationship between the shadow area and the object area. Finally, the determined suspected obstacles are screened against size conditions, and the suspected obstacles that pass the screening are determined as obstacles. Through the above method, obstacles are detected at low cost and with high efficiency.
The present application is mainly applied in the field of robotics, and applicable environments include uniformly illuminated indoor, factory, or warehouse environments. By collecting travel area images within the travel area of the AGV and analyzing the positional relationship between shadow areas and object areas in these images, obstacle detection is realized with a simple calculation method and a small amount of computation. Referring to Fig. 1, the detailed steps are as follows.
S11: acquiring a travel area image captured by a downward-tilted camera device mounted on the AGV.
In this step, a downward-tilted camera device is provided on the AGV (mobile robot), and the camera of the downward-tilted camera device can be tilted downward toward the ground. While the mobile robot advances, the downward-tilted camera device captures images of the robot's travel area. The travel area generally refers to the area through which the mobile robot advances on its own, that is, the possible area in which the mobile robot may need to perform obstacle avoidance. The shooting range of the downward-tilted camera device mainly covers the road surface in the forward direction of the mobile robot.
When the mobile robot advances in the travel area, it collects images of the travel area through the carried downward-tilted camera device, obtains travel area images, and stores the collected travel area images. The downward-tilted camera device may be a monocular camera or the like, and the travel area image may be a single photograph.
S12: determining the lower part of the travel area image as a reference area.
In this step, after the travel area image is collected, a part of the travel area image is selected as a reference for the current ground; in this application, this selected part is referred to as the reference area. In an embodiment of the present application, it is assumed by default that the captured travel area image contains the ground, and the ground is generally located in the lower part of the travel area image. Therefore, in an optional implementation, as shown in Fig. 2, an area of a preset size at the bottom of the travel area image may be selected as the reference area.
S13: determining the remaining part of the travel area image located above the reference area as a detection area.
In this step, after the reference area in the travel area image is determined in step S12, the part of the travel area image other than the reference area may be used as the detection area. As shown in Fig. 2, the detection area is the remaining part of the travel area image located above the reference area.
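As an illustration only, the selection described in steps S12 and S13 can be sketched as a simple crop, assuming the travel area image is given as a NumPy array in row-major order (row 0 at the top); the strip height used here is a placeholder, not a value from the application.

```python
import numpy as np

def split_reference_and_detection(image: np.ndarray, ref_height: int = 40):
    """Take a strip of preset height at the bottom of the travel area image as
    the reference area, and the remainder above it as the detection area."""
    if ref_height <= 0 or ref_height >= image.shape[0]:
        raise ValueError("reference strip height must lie inside the image")
    detection = image[:-ref_height]   # everything above the bottom strip
    reference = image[-ref_height:]   # bottom strip used as the ground reference
    return reference, detection
```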
S14: detecting an object area and a shadow area in the detection area.
In this step, it is detected whether there are object areas and shadow areas meeting the conditions in the detection area. The object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is less than the first threshold and whose brightness is lower than that of the reference area. Optionally, it is detected whether there is a part of the detection area whose chromaticity difference from the reference area is relatively large, where a chromaticity difference greater than a preset chromaticity difference threshold is regarded as relatively large. The color of the detection area is compared with that of the reference area, and the chromaticity difference between them is calculated from the magnitude of the color difference. After the chromaticity difference is calculated, it is compared with the first threshold. The specific value of the first threshold can be preset and is used to distinguish the color of the detection area from that of the reference area. After the comparison, the parts of the detection area whose chromaticity difference is higher than the first threshold are marked as object areas, that is, the parts of the detection area whose chromaticity difference from the reference area is relatively large are marked as object areas.
In addition, for the parts of the detection area whose chromaticity difference is lower than the first threshold, the brightness is compared with a second threshold. The specific value of the second threshold can be preset and is used to distinguish the brightness of the detection area. Among the parts of the detection area whose chromaticity difference is lower than the first threshold, the parts whose brightness is lower than the second threshold are marked as shadow areas, that is, the parts of the detection area whose color is relatively similar to that of the reference area and whose brightness is lower than the second threshold are marked as shadow areas.
S15: when it is detected that both the object area and the shadow area are present in the detection area, detecting whether the positional relationship between the object area and the shadow area matches the light projection direction.
In the embodiments of the present application, based on the ground-facing downward-tilted camera device mounted on the AGV, the travel area images collected while the AGV advances may correspond to the following situations: only one of the shadow area and the object area is captured, neither the shadow area nor the object area is captured, or both the shadow area and the object area are captured.
When neither the shadow area nor the object area is captured in the travel area image, it can generally be determined that there is no obstacle within a certain range in front of the AGV. In this case, it is determined that there is no obstacle in the travel area image, and the AGV can travel normally without obstacle avoidance.
When only a shadow area is captured in the travel area image, the corresponding situation may be that the object corresponding to the shadow area is suspended in the air, in which case the height of the suspended object is generally higher than the height of the AGV, or that the object corresponding to the shadow area is still some distance away from the AGV and has not yet entered the field of view of the downward-tilted camera device. In this case, it is determined that there is no obstacle in the travel area image and the AGV can travel normally; alternatively, the AGV may temporarily reduce its travel speed, and if an obstacle is subsequently detected within a set number of image frames, obstacle avoidance is performed in time, whereas if no obstacle is detected within the set number of image frames, the normal travel speed is restored.
When only an object area is captured in the travel area image, the corresponding situation may be that the object's height cannot form a shadow area meeting the obstacle conditions, for example planar objects such as stickers and printed slogans, whose height does not obstruct the travel of the AGV. In this case, it is determined that there is no obstacle in the travel area image, and the AGV can travel normally without obstacle avoidance.
When both an object area and a shadow area are present in the travel area image captured by the downward-tilted camera device of the AGV, it can be considered that an obstacle may exist in the travel area of the AGV. In this case, step S16 can be performed to make a further judgment on the positional relationship between the captured object area and shadow area, so as to confirm the possibility that an obstacle exists in the travel area of the AGV.
S16: when the positional relationship between the object area and the shadow area matches the light projection direction, marking the object area as a suspected obstacle.
The light projection direction generally refers to the positional relationship between an object and its corresponding shadow in light-and-shadow phenomena that conform to natural laws. In an embodiment of the present application, according to the light-and-shadow phenomenon in a uniformly illuminated indoor environment, it is mainly detected whether the object area and the shadow area satisfy the corresponding positional relationship, that is, whether a shadow area matching the light projection direction can be detected within a preset range around the object area, so as to judge whether the shadow area corresponds to the object area. The preset range refers to the range around the object area within which a light-and-shadow phenomenon conforming to natural laws is expected; for example, a corresponding shadow area can be detected within a preset range below a suspended object area. In an embodiment of the present application, if the object area is on the current travel route of the AGV, it is determined that the positional relationship between the object area and the shadow area matches the light projection direction; otherwise, it is determined that the positional relationship between the object area and the shadow area does not match the light projection direction.
In this step, when the positional relationship between the object area and the shadow area detected in the travel area image matches the light projection direction, it can be judged that the object area corresponds to the shadow area. At this point, the object area is marked as a suspected obstacle. Marking in the embodiments of the present application may mean logically determining the object area as a suspected obstacle or an obstacle.
In the embodiments of the present application, the travel area image captured in the travel area by the downward-tilted camera device mounted on the AGV is acquired, and the reference area and the detection area are determined in the travel area image. In the detection area, object areas and shadow areas are marked according to color differences, and suspected obstacles are finally determined. By using real-world light-and-shadow phenomena and the collected travel area images, the travel area images are analyzed and the three-dimensional information of objects is judged efficiently, realizing obstacle detection while reducing the amount of computation, with high computational efficiency and low cost. At the same time, because the method mainly analyzes captured travel area images and the amount of computation is small, it can be deployed on platforms with limited computing power for real-time processing. The downward-tilted camera device can be a common camera that collects visible-light images, so the AGV does not need to carry expensive devices such as laser sensors or binocular cameras, which reduces hardware cost. Moreover, the embodiments of the present application do not need to obtain real-world three-dimensional information based on point cloud computation, nor analyze images through complex deep learning algorithms, so the amounts of data and computation are small.
The method for detecting obstacles in the embodiments of the present application mainly uses the light-and-shadow phenomenon of obstacles under illumination to analyze the collected target images. Fig. 3 is a schematic diagram of a specific flow of a method for detecting obstacles provided by an embodiment of the present application. The detailed process of this specific flow is as follows.
S301: capturing a travel area image in the travel area of the AGV through the downward-tilted camera device provided on the AGV.
In an embodiment of the present application, the downward-tilted camera device installed at the front of the mobile robot may be a monocular camera. The camera of the monocular camera is tilted downward toward the ground so that the captured travel area image mainly shows the road surface ahead. The specific values of the camera angle and the field of view can be set according to actual conditions. Optionally, Fig. 4 is a schematic diagram of the installation and field of view of the downward-tilted camera device provided by an embodiment of the present application. The downward tilt angle of the camera device may be the angle between the central axis of the camera lens and the horizontal direction, where the horizontal direction is the horizontal direction in the world coordinate system; the downward tilt angle may also be the angle formed between the perpendicular of the central axis in the vertical plane and the direction of the ground normal vector. In Fig. 4, 10° is taken as an example, and the vertical field of view of the downward-tilted camera device may be 30°, which forms a relatively good camera field of view. Of course, the angle and field of view of the downward-tilted camera device can also be adjusted to other specific values, which will not be repeated here.
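As a rough illustration of how the tilt angle and vertical field of view in Fig. 4 determine the stretch of floor that is visible, the following sketch intersects the upper and lower boundary rays of the field of view with a flat floor. The mount height is an assumed placeholder value, and the computation is a geometric aid only, not a step of the described method.

```python
import math

def visible_ground_range(mount_height_m: float = 0.3,
                         tilt_deg: float = 10.0,
                         vertical_fov_deg: float = 30.0):
    """Return (near, far) ground distances in metres covered by the camera.
    If the upper boundary ray points at or above the horizon, the far distance
    is unbounded and float('inf') is returned. All values are placeholders."""
    lower_ray = math.radians(tilt_deg + vertical_fov_deg / 2)  # steepest ray
    upper_ray = math.radians(tilt_deg - vertical_fov_deg / 2)  # shallowest ray
    near = mount_height_m / math.tan(lower_ray)
    far = mount_height_m / math.tan(upper_ray) if upper_ray > 0 else float("inf")
    return near, far

# With a 10 deg tilt and a 30 deg vertical field of view, the upper boundary
# ray is 5 deg above the horizon, so the view reaches the horizon and the far
# distance is unbounded.
```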
S302: determining a reference area and a detection area in the captured travel area image, and dividing the detection area and the reference area into several sub-detection areas and several sub-reference areas, respectively.
In an embodiment of the present application, the lower part of the travel area image may be determined as the reference area, and the remaining part of the travel area image located above the reference area may be determined as the detection area.
After the above reference area and detection area are determined, the detection area is divided by columns into several sub-detection areas, where some pixels of adjacent sub-detection areas overlap; the reference area is divided by columns into several sub-reference areas, where some pixels of adjacent sub-reference areas overlap. Here, the fact that some pixels of a divided sub-detection area overlap with those of other sub-detection areas means that the divided sub-detection area partially coincides with its adjacent sub-detection areas in the image. Optionally, as shown in Fig. 5, the division of the detection area and the reference area can be realized by dividing different pixel columns: several pixel columns are taken as one area column, and adjacent area columns share overlapping pixel columns. The number of overlapping pixel columns can be set according to actual conditions, for example one, two, or four pixel columns. The pixels in the same area column of the detection area are taken as one sub-detection area, and adjacent sub-detection areas share overlapping pixels. The division is performed according to the above rule until the entire detection area has been divided, obtaining several sub-detection areas such that any two adjacent sub-detection areas overlap. The pixels in the reference area that belong to the same area column are taken as one sub-reference area, and adjacent sub-reference areas share overlapping pixels; the specific method for dividing the sub-reference areas can refer to the above method for dividing the sub-detection areas and will not be repeated here. When the divided sub-detection areas are compared with the sub-reference areas of the same area column, the overlapping parts between adjacent sub-detection areas are generally compared repeatedly in subsequent comparisons, which can make the comparison results more accurate.
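A minimal sketch of this column-wise division with overlapping pixel columns is given below for illustration; the column width and overlap are placeholder values, not taken from the application.

```python
import numpy as np

def split_into_columns(area: np.ndarray, col_width: int = 16, overlap: int = 2):
    """Divide an image area (H x W x C array) into vertical sub-areas of
    `col_width` pixel columns, with `overlap` columns shared between
    neighbouring sub-areas, as described for the sub-detection and
    sub-reference areas."""
    step = col_width - overlap
    width = area.shape[1]
    sub_areas = []
    for start in range(0, max(width - overlap, 1), step):
        end = min(start + col_width, width)
        sub_areas.append(area[:, start:end])
        if end == width:
            break
    return sub_areas

# The detection area and the reference area are divided with the same column
# boundaries, so sub-areas at the same index belong to the same area column.
```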
S303: comparing the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, screening out of the sub-detection areas first partial areas whose chromaticity difference is greater than the first threshold, and screening out second partial areas whose chromaticity difference is less than the first threshold and whose brightness is less than a second threshold.
For ease of distinction and description, in the embodiments of the present application, an area in a sub-detection area whose chromaticity difference is greater than the first threshold is referred to as a first partial area, and an area in a sub-detection area whose chromaticity difference is less than the first threshold and whose brightness is less than the second threshold is referred to as a second partial area.
In this step, after the above sub-detection areas and sub-reference areas are obtained, for each sub-detection area, the chromaticity difference between the color of the sub-detection area and the color of its corresponding sub-reference area is calculated. The reference area is divided in the same way as the detection area to obtain several sub-reference areas, and the color of each sub-detection area is compared with that of the corresponding sub-reference area in the same column, where the column specifically refers to the area column; that is, the sub-reference area corresponding to a sub-detection area is the sub-reference area belonging to the same area column as that sub-detection area. Optionally, the colors in the sub-detection area and the corresponding sub-reference area may be expressed in multiple formats, such as the HSV (Hue, Saturation, Value) color space or the YUV color space, where Y represents brightness and U and V represent chromaticity. In an optional embodiment, the chromaticity difference between a sub-detection area and the sub-reference area in the same column can be obtained by calculating a Euclidean distance. For example, the YUV mean of the sub-reference area may be calculated first, then the Euclidean distance between the YUV value of each pixel in the sub-detection area of the same column and the YUV mean of the sub-reference area is calculated, and the resulting Euclidean distance is taken as the chromaticity difference between that pixel of the sub-detection area and the sub-reference area of the same column.
After the chromaticity difference of a pixel in the sub-detection area is calculated, it can be compared with the first threshold, and if the chromaticity difference of the pixel is less than the first threshold, the relationship between the brightness of the pixel and the second threshold is further compared. Based on the above comparison results, if there are pixels in the sub-detection area whose chromaticity difference is higher than the first threshold, adjacent pixels of this kind in the sub-detection area are connected to obtain a first partial area. Similarly, if there are pixels in the sub-detection area whose chromaticity difference is less than the first threshold and whose brightness is less than the second threshold, adjacent pixels of this kind in the sub-detection area are connected to obtain a second partial area.
The value of the second threshold is determined according to the brightness of the corresponding sub-reference area; for example, the second threshold may be obtained by subtracting a preset value from the brightness of the sub-reference area or by multiplying that brightness by a scale factor smaller than 1.
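The comparison described in S303 can be sketched as follows for one sub-detection area and its sub-reference area in the same column. The arrays are assumed to already be in the YUV color space, and the first threshold and the brightness scale factor are placeholders chosen for illustration rather than values from the application.

```python
import numpy as np

def screen_sub_area(sub_detection_yuv: np.ndarray,
                    sub_reference_yuv: np.ndarray,
                    first_threshold: float = 30.0,
                    brightness_scale: float = 0.7):
    """Return boolean masks of candidate object pixels (first partial areas)
    and candidate shadow pixels (second partial areas) of a sub-detection area,
    following the Euclidean-distance comparison described in S303."""
    ref_mean = sub_reference_yuv.reshape(-1, 3).mean(axis=0)    # mean Y, U, V
    diff = sub_detection_yuv.astype(np.float64) - ref_mean
    chroma_dist = np.linalg.norm(diff, axis=-1)                  # per-pixel distance
    brightness = sub_detection_yuv[..., 0]                       # Y channel
    second_threshold = brightness_scale * ref_mean[0]            # derived from reference brightness
    object_mask = chroma_dist > first_threshold
    shadow_mask = (chroma_dist <= first_threshold) & (brightness < second_threshold)
    return object_mask, shadow_mask

# Adjacent pixels selected by each mask would then be connected into first and
# second partial areas, for example with a connected-component labelling step.
```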
Combining the screening results of all sub-detection areas, the following outcomes are possible: 1) no sub-detection area yields a first partial area or a second partial area; 2) no sub-detection area yields a first partial area, and at least one sub-detection area yields a second partial area; 3) at least one sub-detection area yields a first partial area, and no sub-detection area yields a second partial area; 4) at least one sub-detection area yields a first partial area, and at least one sub-detection area yields a second partial area.
S304: if no sub-detection area yields a first partial area or a second partial area, determining that there is no obstacle, and ending this flow.
If none of the sub-detection areas included in the travel area image yields a first partial area or a second partial area, there are no object areas or shadow areas in the travel area image, and the corresponding actual situation may be that there is no obstacle within a certain range in front of the AGV. At this point, corresponding to the obstacle-free situations listed in step S15, it can be considered that there is no obstacle ahead and the AGV can pass smoothly.
S305: if no sub-detection area yields a first partial area and at least one sub-detection area yields a second partial area, determining that there is no obstacle, and ending this flow.
When none of the sub-detection areas included in the travel area image yields a first partial area, but at least one sub-detection area yields a second partial area, a shadow area may exist in the travel area image but no object area exists. The corresponding actual situation may be that the object corresponding to the shadow area is suspended in the air, or that the object corresponding to the shadow area is still some distance away from the AGV and has not yet entered the field of view of the downward-tilted camera device. At this point, corresponding to the obstacle-free situations listed in step S15, it can be considered that there is no obstacle ahead and the AGV can pass smoothly.
Regarding the case above where only the shadow area of a suspended object is detected: in an indoor or factory environment, the illumination is relatively sufficient and uniform, so if there is a suspended or half-suspended object in front of the robot, a shadow of a certain size will exist below it. When only a shadow area can be detected in the acquired travel area image and no object area can be detected within the preset range of the shadow area, it can be considered that the suspended height of the object corresponding to the shadow does not affect the normal travel of the AGV. Fig. 6a is a specific schematic diagram, provided by an embodiment of the present application, in which only a shadow area is detected in the travel area image; the actual situation of the AGV and the object at this point may be as shown in Fig. 6b. Optionally, as the robot keeps advancing, the area of the shadow area within the field of view of the downward-tilted camera device becomes larger and larger; if the obstacle is suspended, then even up to the point where the AGV passes beneath it, the downward-tilted camera device never sees the obstacle corresponding to the shadow, the object corresponding to the shadow area can be determined as an invalid obstacle, and the AGV can continue to pass forward. Therefore, when only a shadow area is detected in the detection area, it can be determined that there is no obstacle, and the AGV keeps moving forward.
S306: if at least one sub-detection area yields a first partial area and no sub-detection area yields a second partial area, determining that there is no obstacle, and ending this flow.
When none of the sub-detection areas included in the travel area image yields a second partial area, but at least one sub-detection area yields a first partial area, an object area may exist in the travel area image but no shadow area exists. The corresponding actual situation may be that the object's height cannot form a shadow area meeting the obstacle conditions, for example planar objects such as stickers or printed slogans, whose height does not obstruct the travel of the AGV. At this point, it can be determined that there is no obstacle in the travel area image, and this flow ends.
S307: if at least one sub-detection area yields a first partial area and at least one sub-detection area yields a second partial area, marking the screened-out first partial areas as object areas and the screened-out second partial areas as shadow areas.
In an optional implementation, the object area can be marked through the following steps: if only one sub-detection area yields first partial areas, each first partial area screened out of that sub-detection area can be marked individually as one object area; if multiple sub-detection areas yield first partial areas, first partial areas that have overlapping pixels can be merged and the merged area marked as one object area, and a first partial area that does not overlap with any other first partial area can be marked individually as one object area.
For example, referring to Fig. 6c, first partial area 1 and first partial area 2 are screened out of sub-detection area 1, first partial area 3 is screened out of sub-detection area 2, and first partial area 4 is screened out of sub-detection area 3. Since there is pixel overlap between first partial area 2 and first partial area 3, and also between first partial area 3 and first partial area 4, these three areas can be merged and marked as one object area; since first partial area 1 does not overlap with any other first partial area, first partial area 1 can be marked individually as one object area.
Optionally, before or after the object areas are marked, the area of each object area can be calculated, and object areas whose area is small can be discarded. When the area of an object area in the travel area image is small, the corresponding situation may be that the actual object corresponding to the object area has a small footprint and low height and will not obstruct the travel of the AGV, or that the actual object is still far from the AGV and will not affect the AGV's travel for the time being; discarding object areas with small areas here can reduce the subsequent processing load. One implementation is as follows: first partial areas that have overlapping pixels are merged, and a merged area whose area is greater than a third threshold is marked as one object area; and/or a first partial area that has no overlapping pixels and whose area is greater than the third threshold is marked individually as one object area. In the embodiments of the present application, the object areas are screened by setting the third threshold, so as to determine the object areas that may obstruct the travel of the AGV.
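As an illustration of merging first partial areas that share pixels and then filtering by area, the sketch below represents each partial area by the set of its pixel coordinates and merges areas transitively through chains of overlaps; the area threshold is a placeholder, not the third threshold of the application.

```python
from typing import List, Set, Tuple

Pixel = Tuple[int, int]  # (row, column) of a pixel

def merge_partial_areas(partial_areas: List[Set[Pixel]],
                        min_area: int = 50) -> List[Set[Pixel]]:
    """Merge partial areas that share at least one pixel, directly or through a
    chain of overlaps, and keep only merged areas larger than `min_area`,
    mirroring the area-threshold screening of object areas."""
    merged: List[Set[Pixel]] = []
    for area in partial_areas:
        area = set(area)
        remaining: List[Set[Pixel]] = []
        for group in merged:
            if group & area:      # absorb every existing group that overlaps
                area |= group
            else:
                remaining.append(group)
        remaining.append(area)
        merged = remaining
    return [group for group in merged if len(group) > min_area]
```

The same procedure, with a different area threshold, would apply to the second partial areas when forming the shadow areas described in the next paragraph.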
Correspondingly, the shadow areas can be marked through the following steps: when second partial areas whose chromaticity difference is less than the first threshold and whose brightness is less than the second threshold are screened out of the sub-detection areas, second partial areas that have overlapping pixels can be merged and the merged area marked as one shadow area, and/or a second partial area that has no overlapping pixels can be marked individually as one shadow area.
Optionally, before or after the shadow areas are marked, their areas can be calculated in order to discard shadow areas that do not meet the conditions. For example, second partial areas that have overlapping pixels are merged, the area of the merged partial detection area is calculated, and a merged area whose area is greater than a fourth threshold is marked as one shadow area; and/or a second partial area that has no overlapping pixels and whose area is greater than the fourth threshold is marked individually as one shadow area. For example, after low objects such as stickers, floor markings, or cardboard are detected as object areas through the aforementioned color detection, their height is very small, so the shadow cast by the cardboard, sticker, or marking is very small or even undetectable, and its area does not satisfy the set fourth threshold; by filtering on shadow area, objects such as stickers, which are lower than the robot chassis and have no influence on the travel of the robot, can thus be filtered out.
S308: detecting whether the positional relationship between the marked object areas and shadow areas matches the light projection direction.
In an embodiment of the present application, for each object area, it is judged whether a corresponding shadow area exists for that object area. Optionally, it is detected whether the positional relationship between the object area and the shadow area matches the light projection direction. In general, in a uniformly illuminated indoor environment, a corresponding object area will exist within a preset range around a shadow area, that is, the positional relationship between the object area and the shadow area satisfies the light-and-shadow phenomenon found in nature. It is assumed that a real obstacle and its corresponding shadow are connected in the travel area image, or that, even if the obstacle and its corresponding shadow are not connected, they exist at the same time within a space of the preset range. The specific size of the preset range may be jointly determined by factors such as the shooting range of the downward-tilted camera device provided on the robot and the height of the robot. For example, the preset range may be the range of the travel area image captured by the downward-tilted camera device mounted above the AGV's travel surface; as another example, the preset range may be the area range obtained by mapping the travel area of the AGV into the travel area image, where the AGV is mapped into the travel area image according to the AGV's current travel route and the AGV's width and height, and the range of the resulting mapped area is the preset range. In an embodiment of the present application, if the object area is on the current travel route of the AGV, it is determined that the positional relationship between the object area and the shadow area matches the light projection direction; otherwise, it is determined that the positional relationship between the object area and the shadow area does not match the light projection direction.
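For illustration, a simple geometric check of this positional relationship could compare bounding boxes: the shadow should horizontally overlap the object area and begin at, or slightly below, the object area's lower edge, and the object area should lie on the AGV's route. The bounding-box representation, the gap limit, and the on-route flag are assumptions made for the sketch, not requirements taken from the application.

```python
from dataclasses import dataclass

@dataclass
class Box:
    left: int    # smallest pixel column of the region
    top: int     # smallest pixel row (image rows grow downwards)
    right: int   # largest pixel column
    bottom: int  # largest pixel row

def matches_light_projection(obj: Box, shadow: Box,
                             max_gap: int = 30,
                             on_route: bool = True) -> bool:
    """Assumed check for roughly overhead, uniform indoor lighting: the shadow
    area must horizontally overlap the object area and start at, or slightly
    below, the lower edge of the object area."""
    horizontal_overlap = min(obj.right, shadow.right) - max(obj.left, shadow.left)
    shadow_starts_near_object = obj.top < shadow.top <= obj.bottom + max_gap
    return on_route and horizontal_overlap > 0 and shadow_starts_near_object
```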
S309: When it is detected that the positional relationship between the object area and the shadow area matches the light projection direction, mark the object area as a suspected obstacle.
In an embodiment of the present application, when an object area that matches the light projection direction is detected around a shadow area, the object area corresponding to that shadow area is marked as a suspected obstacle. For example, when a suspended object may affect the normal travel of the AGV, both a shadow area and an object area can generally be detected in the captured travel area image. As shown in Figure 7a, the down-tilted camera device detects a shadow area and its corresponding object area; the actual situation of the AGV and the obstacle may then be as shown in Figure 7b. In this case, the object area shown in Figure 7a can be marked as a suspected obstacle.
S310: Perform size-condition filtering on the object areas marked as suspected obstacles, and mark the object areas retained after filtering as obstacles.
In an embodiment of the present application, size-condition filtering of a suspected obstacle mainly calculates the ratio between the area of the suspected obstacle and the area of its corresponding shadow area, in order to judge whether the suspected obstacle satisfies a size condition under which it would obstruct the AGV. Exploiting the fact that an object of a certain height casts a shadow of a certain area, the ratio of the area of the shadow area to the area of the object area of the suspected obstacle is calculated; when the ratio is greater than a fifth threshold, the object area corresponding to the suspected obstacle is marked as an obstacle. Optionally, the ratio of the area of the object area corresponding to the suspected obstacle to the area of the shadow area is calculated and compared with the preset fifth threshold. The fifth threshold can be set as needed according to the actual situation. After the object areas have been marked, in order to avoid flat objects such as stickers and landmarks being falsely detected as object areas, height information of the object area would otherwise be needed to filter out objects below the robot chassis that do not affect the robot's travel. To avoid the complicated computation of the obstacle's three-dimensional information, and relying on the fact that any object with height casts a shadow of a certain size, the shadow of the obstacle is detected instead of computing its three-dimensional information. In an embodiment of the present application, a corresponding shadow area must have been detected before an object area can be marked as a suspected obstacle.
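The description states this ratio in both orders; the sketch below follows the wording of claim 7 (shadow area over object area) and treats the direction of the ratio and the threshold value as assumptions to be fixed per application.

```python
def is_obstacle(object_area_px: int, shadow_area_px: int, fifth_threshold: float) -> bool:
    """Size-condition filter for a suspected obstacle.

    Confirms the suspected obstacle as an obstacle when the ratio of the
    shadow area to the object area exceeds the fifth threshold (claim 7
    wording); the threshold is application-specific.
    """
    if object_area_px == 0:
        return False
    return (shadow_area_px / object_area_px) > fifth_threshold
```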
If an obstacle is half-suspended or still at some distance from the robot, the down-tilted camera device can see the object area corresponding to the shadow area ahead while the AGV advances. At that moment the ratio of the area of the suspected obstacle's object area to the area of its shadow area is smaller than the set fifth threshold, and the robot can continue forward for some distance. For an obstacle that will affect the robot's passage, within a certain distance the ratio of the obstacle area to the shadow area within the camera's field of view will exceed the set fifth threshold; it is then considered that an obstacle affecting the robot's passage has been detected ahead, the corresponding object area is determined to be an obstacle, and the robot activates the corresponding obstacle avoidance strategy.
S311: Detect whether the obstacle affects the AGV's travel, and implement obstacle avoidance for the AGV.
In an embodiment of the present application, after the actual obstacles in the mobile robot's travel area have been determined through the above steps, the coordinates of the left-boundary projection point and the right-boundary projection point of the object area marked as an obstacle onto the reference area are determined; it is detected whether the AGV's travel trajectory passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates; and when it is detected that the trajectory does pass through that range, a planned obstacle avoidance strategy is generated for the AGV so that its trajectory avoids the position range between the two projection points. In the travel area image, up and down correspond to the robot's direction of travel, and left and right correspond to the robot's avoidance direction, which is why the left-boundary and right-boundary projection point coordinates are the ones computed here. A sketch of this check is given below.
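The sketch below illustrates one way this boundary test could be expressed, assuming the obstacle is represented as a bounding box and the planned trajectory as a sequence of image-space x-coordinates; both representations are assumptions for illustration only.

```python
from typing import Iterable, Tuple


def obstacle_projection_interval(obstacle_box: Tuple[int, int, int, int]) -> Tuple[int, int]:
    """Project an obstacle box (x_min, y_min, x_max, y_max) onto the reference
    area's horizontal axis, giving the left/right boundary projection points."""
    x_min, _, x_max, _ = obstacle_box
    return x_min, x_max


def trajectory_blocked(trajectory_x: Iterable[int], interval: Tuple[int, int]) -> bool:
    """True when any point of the AGV's planned trajectory falls between the
    left and right boundary projection points, i.e. re-planning is needed."""
    left, right = interval
    return any(left <= x <= right for x in trajectory_x)
```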
Based on the above steps, the embodiments of the present application exploit the real-world light-and-shadow phenomenon to judge an object's three-dimensional information efficiently from a single image, without actually computing that information, thereby achieving obstacle avoidance for the mobile robot.
Based on the same inventive concept, an embodiment of the present application further provides an apparatus for detecting obstacles. As shown in Figure 8, the apparatus includes:
an acquisition module 801, configured to acquire a travel area image captured by a down-tilted camera device mounted on an AGV;
a first determination module 802, configured to determine the lower part of the travel area image as a reference area;
a second determination module 803, configured to determine the remaining part of the travel area image located above the reference area as a detection area;
a first detection module 804, configured to detect object areas and shadow areas in the detection area, where an object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and a shadow area is an area whose chromaticity difference from the reference area is smaller than the first threshold and whose luminance is lower than that of the reference area;
a second detection module 805, configured to detect, when an object area and a shadow area are detected to coexist in the detection area, whether the positional relationship between the object area and the shadow area matches the light projection direction;
a first marking module 806, configured to mark the object area as a suspected obstacle when the positional relationship between the object area and the shadow area matches the light projection direction.
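The module split above maps naturally onto a small processing pipeline. The skeleton below is a hypothetical arrangement of those responsibilities: the class and method names, the reference-area fraction, and the placeholder bodies are all invented for illustration and would be replaced by the logic described for each module.

```python
import numpy as np


class ObstacleDetector:
    """Illustrative skeleton mirroring modules 801-806 of Figure 8."""

    def __init__(self, reference_fraction: float = 0.25):
        # Fraction of the image height treated as the reference area (assumption).
        self.reference_fraction = reference_fraction

    def detect(self, image: np.ndarray):
        h = image.shape[0]
        split = int(h * (1.0 - self.reference_fraction))
        reference = image[split:, :]        # lower part   (module 802)
        detection = image[:split, :]        # upper part   (module 803)
        objects, shadows = self.segment(detection, reference)   # module 804
        suspects = []
        if objects and shadows:             # both kinds must coexist (module 805)
            for obj in objects:
                if any(self.matches_projection(obj, s) for s in shadows):
                    suspects.append(obj)    # module 806: suspected obstacle
        return suspects

    def segment(self, detection, reference):
        """Placeholder for the object/shadow screening of module 804."""
        return [], []

    def matches_projection(self, obj, shadow) -> bool:
        """Placeholder for the light-projection check of module 805."""
        return False
```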
In this embodiment, for the specific functions and interactions of the acquisition module 801, the first determination module 802, the second determination module 803, the first detection module 804, the second detection module 805, and the first marking module 806, reference may be made to the description of the embodiment corresponding to Figure 1, which is not repeated here.
Optionally, the apparatus further includes:
a second marking module 807, configured to perform size-condition filtering on the object areas marked as suspected obstacles, and to mark the object areas retained after filtering as obstacles.
Optionally, the first detection module 804 includes:
a first division unit, configured to divide the detection area into a number of sub-detection areas by column, where some pixels of adjacent sub-detection areas overlap;
a second division unit, configured to divide the reference area into a number of sub-reference areas by column, where some pixels of adjacent sub-reference areas overlap;
a first comparison unit, configured to compare the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, and to screen out first partial areas within the sub-detection areas whose chromaticity difference is greater than the first threshold;
a first marking unit, configured to merge first partial areas with overlapping pixels and mark the merged area as one object area; and/or to mark a first partial area without overlapping pixels individually as one object area.
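A rough sketch of the column-wise screening performed by these units follows, assuming the image is an HSV array with the H channel used as chromaticity and a per-strip mean as the comparison statistic; the strip width, stride, and the simplification of screening whole column strips (rather than partial areas within them) are illustrative assumptions.

```python
import numpy as np


def screen_object_columns(detection_hsv: np.ndarray,
                          reference_hsv: np.ndarray,
                          first_threshold: float,
                          width: int = 32,
                          stride: int = 16):
    """Yield (x_start, x_end) column strips of the detection area whose mean
    hue differs from the same columns of the reference area by more than the
    first threshold. Adjacent strips overlap because stride < width."""
    img_w = detection_hsv.shape[1]
    for x in range(0, max(img_w - width, 0) + 1, stride):
        det_hue = detection_hsv[:, x:x + width, 0].astype(np.float32).mean()
        ref_hue = reference_hsv[:, x:x + width, 0].astype(np.float32).mean()
        if abs(det_hue - ref_hue) > first_threshold:
            yield (x, x + width)   # candidate "first partial area" columns
```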
Optionally, the first detection module 804 further includes:
a screening unit, configured to screen out second partial areas within the sub-detection areas whose chromaticity difference is smaller than the first threshold and whose luminance is smaller than a second threshold;
a second marking unit, configured to merge second partial areas with overlapping pixels and mark the merged area as one shadow area; and/or to mark a second partial area without overlapping pixels individually as one shadow area.
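For completeness, the corresponding shadow-candidate screening might look like the following, under the same HSV assumption (V channel as luminance); the thresholds and strip parameters are hypothetical.

```python
import numpy as np


def screen_shadow_columns(detection_hsv: np.ndarray,
                          reference_hsv: np.ndarray,
                          first_threshold: float,
                          second_threshold: float,
                          width: int = 32,
                          stride: int = 16):
    """Yield column strips whose hue matches the reference (difference below
    the first threshold) but whose brightness (V channel) is below the second
    threshold -- the 'second partial area' shadow candidates."""
    img_w = detection_hsv.shape[1]
    for x in range(0, max(img_w - width, 0) + 1, stride):
        strip = detection_hsv[:, x:x + width].astype(np.float32)
        ref = reference_hsv[:, x:x + width].astype(np.float32)
        hue_diff = abs(strip[..., 0].mean() - ref[..., 0].mean())
        if hue_diff < first_threshold and strip[..., 2].mean() < second_threshold:
            yield (x, x + width)
```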
Optionally, the first marking unit includes:
a first marking subunit, configured to merge first partial areas with overlapping pixels and mark a merged area whose area is greater than a third threshold as one object area;
a second marking subunit, configured to mark a first partial area that has no overlapping pixels and whose area is greater than the third threshold individually as one object area.
Optionally, the second marking unit includes:
a third marking subunit, configured to merge second partial areas with overlapping pixels and mark a merged area whose area is greater than a fourth threshold as one shadow area;
a fourth marking subunit, configured to mark a second partial area that has no overlapping pixels and whose area is greater than the fourth threshold individually as one shadow area.
Optionally, the first marking module 806 includes:
a first calculation unit, configured to calculate the ratio of the area of the shadow area to the area of the object area of the suspected obstacle;
a third marking unit, configured to mark the object area as the obstacle when the ratio is greater than a fifth threshold.
Optionally, the apparatus further includes:
a third determination module 808, configured to determine the coordinates of the left-boundary projection point and the right-boundary projection point of the object area marked as the obstacle onto the reference area;
a third detection module 809, configured to detect whether the AGV's travel trajectory passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates;
a planning module 810, configured to generate for the AGV, when it is detected that the AGV's travel trajectory passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates, a planned obstacle avoidance strategy that makes the travel trajectory avoid that position range.
Optionally, the apparatus further includes:
a fourth determination module 811, configured to determine that there is no obstacle in the detection area when it is detected that the shadow area and the object area do not coexist in the detection area, or when only one of the shadow area and the object area is found.
An embodiment of the present application further provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform any of the above methods for detecting obstacles.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk, a hard disk, or flash memory. When the computer program on the storage medium is run, it can execute the above method for detecting obstacles, so that by capturing travel area images the three-dimensional information of obstacles is judged efficiently and the amount of computation is reduced.
A further embodiment of the present application provides a mobile robot, including a processor and a down-tilted camera device. The down-tilted camera device is tilted downward at a preset angle at the front end of the mobile robot, and the processor is configured to perform any of the above methods for detecting obstacles.
Optionally, the preset downward tilt angle of the down-tilted camera device is 8° to 12°.
The preset downward tilt angle of the down-tilted camera device is the angle between the central axis of the camera lens and the horizontal direction; it can equivalently be expressed as the angle between the vertical direction and the perpendicular to the central axis within the vertical plane. The horizontal and vertical directions here are those of the world coordinate system. Optionally, as shown in Figure 4, the preset downward tilt angle of the down-tilted camera device may be 10 degrees.
An embodiment of the present application further provides a computer program that, when executed by a processor, causes the processor to perform any of the above methods for detecting obstacles.
Finally, it should be noted that the embodiments described above are merely specific implementations of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the technical field may, within the technical scope disclosed in the present application, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

  1. A method for detecting obstacles, comprising:
    acquiring a travel area image captured by a down-tilted camera device mounted on a mobile robot AGV;
    determining a lower part of the travel area image as a reference area;
    determining the remaining part of the travel area image located above the reference area as a detection area;
    detecting an object area and a shadow area in the detection area, wherein the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is smaller than the first threshold and whose luminance is lower than that of the reference area;
    when it is detected that the object area and the shadow area coexist in the detection area, detecting whether the positional relationship between the object area and the shadow area matches the light projection direction;
    when the positional relationship between the object area and the shadow area matches the light projection direction, marking the object area as a suspected obstacle.
  2. The method according to claim 1, wherein after the step of marking the object area as a suspected obstacle, the method further comprises:
    performing size-condition filtering on the object area marked as the suspected obstacle, and marking the object area retained after filtering as an obstacle.
  3. The method according to claim 1, wherein the object area is marked by the following steps:
    dividing the detection area into a number of sub-detection areas by column, wherein some pixels of adjacent sub-detection areas overlap;
    dividing the reference area into a number of sub-reference areas by column, wherein some pixels of adjacent sub-reference areas overlap;
    comparing the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, and screening out first partial areas within the sub-detection areas whose chromaticity difference is greater than the first threshold;
    merging first partial areas with overlapping pixels and marking the merged area as one object area; and/or marking a first partial area without overlapping pixels individually as one object area.
  4. The method according to claim 3, wherein the shadow area is marked by the following steps:
    screening out second partial areas within the sub-detection areas whose chromaticity difference is smaller than the first threshold and whose luminance is smaller than a second threshold;
    merging second partial areas with overlapping pixels and marking the merged area as one shadow area; and/or marking a second partial area without overlapping pixels individually as one shadow area.
  5. The method according to claim 3, wherein the step of merging first partial areas with overlapping pixels and marking the merged area as one object area, and/or marking a first partial area without overlapping pixels individually as one object area, comprises:
    merging first partial areas with overlapping pixels, and marking a merged area whose area is greater than a third threshold as one object area;
    and/or marking a first partial area that has no overlapping pixels and whose area is greater than the third threshold individually as one object area.
  6. The method according to claim 4, wherein the step of merging second partial areas with overlapping pixels and marking the merged area as one shadow area, and/or marking a second partial area without overlapping pixels individually as one shadow area, comprises:
    merging second partial areas with overlapping pixels, and marking a merged area whose area is greater than a fourth threshold as one shadow area;
    and/or marking a second partial area that has no overlapping pixels and whose area is greater than the fourth threshold individually as one shadow area.
  7. The method according to claim 2, wherein the step of performing size-condition filtering on the object area marked as the suspected obstacle and marking the object area retained after filtering as an obstacle comprises:
    calculating the ratio of the area of the shadow area to the area of the object area of the suspected obstacle;
    when the ratio is greater than a fifth threshold, marking the object area as the obstacle.
  8. The method according to claim 2, wherein after the step of marking the object area retained after filtering as an obstacle, the method further comprises:
    determining the coordinates of a left-boundary projection point and a right-boundary projection point of the object area marked as the obstacle onto the reference area;
    detecting whether the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates;
    when it is detected that the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates, generating for the AGV a planned obstacle avoidance strategy that makes the travel trajectory avoid that position range.
  9. The method according to claim 1, wherein after the step of detecting an object area and a shadow area in the detection area, the method further comprises:
    when it is detected that the shadow area and the object area do not exist in the detection area, or when only one of the shadow area and the object area is found, determining that there is no obstacle in the detection area.
  10. An apparatus for detecting obstacles, comprising:
    an acquisition module, configured to acquire a travel area image captured by a down-tilted camera device mounted on an AGV;
    a first determination module, configured to determine a lower part of the travel area image as a reference area;
    a second determination module, configured to determine the remaining part of the travel area image located above the reference area as a detection area;
    a first detection module, configured to detect an object area and a shadow area in the detection area, wherein the object area is an area whose chromaticity difference from the reference area exceeds a first threshold, and the shadow area is an area whose chromaticity difference from the reference area is smaller than the first threshold and whose luminance is lower than that of the reference area;
    a second detection module, configured to detect, when it is detected that the object area and the shadow area coexist in the detection area, whether the positional relationship between the object area and the shadow area matches the light projection direction;
    a first marking module, configured to mark the object area as a suspected obstacle when the positional relationship between the object area and the shadow area matches the light projection direction.
  11. The apparatus according to claim 10, further comprising:
    a second marking module, configured to perform size-condition filtering on the object area marked as the suspected obstacle, and to mark the object area retained after filtering as an obstacle.
  12. The apparatus according to claim 10, wherein the first detection module comprises:
    a first division unit, configured to divide the detection area into a number of sub-detection areas by column, wherein some pixels of adjacent sub-detection areas overlap;
    a second division unit, configured to divide the reference area into a number of sub-reference areas by column, wherein some pixels of adjacent sub-reference areas overlap;
    a first comparison unit, configured to compare the chromaticity difference between each sub-detection area and the corresponding sub-reference area in the same column, and to screen out first partial areas within the sub-detection areas whose chromaticity difference is greater than the first threshold;
    a first marking unit, configured to merge first partial areas with overlapping pixels and mark the merged area as one object area, and/or to mark a first partial area without overlapping pixels individually as one object area.
  13. The apparatus according to claim 12, wherein the first detection module further comprises:
    a screening unit, configured to screen out second partial areas within the sub-detection areas whose chromaticity difference is smaller than the first threshold and whose luminance is smaller than a second threshold;
    a second marking unit, configured to merge second partial areas with overlapping pixels and mark the merged area as one shadow area, and/or to mark a second partial area without overlapping pixels individually as one shadow area.
  14. The apparatus according to claim 12, wherein the first marking unit comprises:
    a first marking subunit, configured to merge first partial areas with overlapping pixels and mark a merged area whose area is greater than a third threshold as one object area;
    a second marking subunit, configured to mark a first partial area that has no overlapping pixels and whose area is greater than the third threshold individually as one object area.
  15. The apparatus according to claim 14, wherein the second marking unit comprises:
    a third marking subunit, configured to merge second partial areas with overlapping pixels and mark a merged area whose area is greater than a fourth threshold as one shadow area;
    a fourth marking subunit, configured to mark a second partial area that has no overlapping pixels and whose area is greater than the fourth threshold individually as one shadow area.
  16. The apparatus according to claim 11, wherein the first marking module comprises:
    a first calculation unit, configured to calculate the ratio of the area of the shadow area to the area of the object area of the suspected obstacle;
    a third marking unit, configured to mark the object area as the obstacle when the ratio is greater than a fifth threshold.
  17. The apparatus according to claim 11, further comprising:
    a third determination module, configured to determine the coordinates of a left-boundary projection point and a right-boundary projection point of the object area marked as the obstacle onto the reference area;
    a third detection module, configured to detect whether the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates;
    a planning module, configured to generate for the AGV, when it is detected that the travel trajectory of the AGV passes through the position range between the left-boundary projection point coordinates and the right-boundary projection point coordinates, a planned obstacle avoidance strategy that makes the travel trajectory avoid that position range.
  18. The apparatus according to claim 10, further comprising:
    a fourth determination module, configured to determine that there is no obstacle in the detection area when it is detected that the shadow area and the object area do not coexist in the detection area, or when only one of the shadow area and the object area is found.
  19. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method for detecting obstacles according to any one of claims 1 to 9.
  20. A mobile robot, comprising a processor and a down-tilted camera device, wherein the down-tilted camera device is tilted downward at a preset angle at the front end of the mobile robot, and the processor is configured to perform the method for detecting obstacles according to any one of claims 1 to 9.
  21. The mobile robot according to claim 20, wherein the preset downward tilt angle of the down-tilted camera device is 8° to 12°.
PCT/CN2020/092276 2019-06-03 2020-05-26 Obstacle detection method, device, storage medium, and mobile robot WO2020244414A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910476765.5A CN112036210B (en) 2019-06-03 2019-06-03 Method and device for detecting obstacle, storage medium and mobile robot
CN201910476765.5 2019-06-03

Publications (1)

Publication Number Publication Date
WO2020244414A1 true WO2020244414A1 (en) 2020-12-10

Family

ID=73576676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/092276 WO2020244414A1 (en) 2019-06-03 2020-05-26 Obstacle detection method, device, storage medium, and mobile robot

Country Status (2)

Country Link
CN (1) CN112036210B (en)
WO (1) WO2020244414A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677318A (en) * 2020-12-24 2022-06-28 苏州科瓴精密机械科技有限公司 Obstacle identification method, device, equipment, medium and weeding robot
CN114200946B (en) * 2021-12-14 2024-05-28 闽江学院 AGV trolley control method for intelligent manufacturing machining production line
CN117148811B (en) * 2023-11-01 2024-01-16 宁波舜宇贝尔机器人有限公司 AGV trolley carrying control method and system, intelligent terminal and lifting mechanism

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413308B (en) * 2013-08-01 2016-07-06 东软集团股份有限公司 A kind of obstacle detection method and device
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method based on
CN106228110B (en) * 2016-07-07 2019-09-20 浙江零跑科技有限公司 A kind of barrier and drivable region detection method based on vehicle-mounted binocular camera
CN106650701B (en) * 2017-01-05 2020-01-14 华南理工大学 Binocular vision-based obstacle detection method and device in indoor shadow environment
CN106997721B (en) * 2017-04-17 2019-05-31 深圳奥比中光科技有限公司 Draw the method, apparatus and storage device of 2D map
CN108680157B (en) * 2018-03-12 2020-12-04 海信集团有限公司 Method, device and terminal for planning obstacle detection area
CN108416306B (en) * 2018-03-12 2020-12-25 海信集团有限公司 Continuous obstacle detection method, device, equipment and storage medium
CN109141364B (en) * 2018-08-01 2020-11-03 北京进化者机器人科技有限公司 Obstacle detection method and system and robot

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1782668A (en) * 2004-12-03 2006-06-07 曾俊元 Method and device for preventing collison by video obstacle sensing
CN104574365A (en) * 2014-12-18 2015-04-29 中国科学院计算技术研究所 Barrier detection device and method
US20180174322A1 (en) * 2016-12-15 2018-06-21 Egismos Technology Corporation Path detection system and path detection method generating laser pattern by diffractive optical element
CN108596012A (en) * 2018-01-19 2018-09-28 海信集团有限公司 A kind of barrier frame merging method, device and terminal

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114751151A (en) * 2021-01-12 2022-07-15 贵州中烟工业有限责任公司 Calculation method for installation area of detection device and storage medium
CN114751151B (en) * 2021-01-12 2024-03-26 贵州中烟工业有限责任公司 Calculation method of detection device installation area and storage medium
CN113624249A (en) * 2021-08-26 2021-11-09 北京京东乾石科技有限公司 Lock point operation execution method and device, electronic equipment and computer readable medium
CN113624249B (en) * 2021-08-26 2024-04-12 北京京东乾石科技有限公司 Lock point operation execution method, device, electronic equipment and computer readable medium
CN114359174A (en) * 2021-12-16 2022-04-15 苏州镁伽科技有限公司 Conductive particle recognition method, conductive particle recognition device, electronic equipment and storage medium
WO2023113799A1 (en) * 2021-12-16 2023-06-22 Hewlett-Packard Development Company, L.P. Surface marking robots and obstacles
CN117496359A (en) * 2023-12-29 2024-02-02 浙江大学山东(临沂)现代农业研究院 Plant planting layout monitoring method and system based on three-dimensional point cloud
CN117496359B (en) * 2023-12-29 2024-03-22 浙江大学山东(临沂)现代农业研究院 Plant planting layout monitoring method and system based on three-dimensional point cloud

Also Published As

Publication number Publication date
CN112036210B (en) 2024-03-08
CN112036210A (en) 2020-12-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20818964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20818964

Country of ref document: EP

Kind code of ref document: A1