CN114859938A - Robot, dynamic obstacle state estimation method and device and computer equipment

Info

Publication number
CN114859938A
Authority
CN
China
Prior art keywords
obstacle
image
dynamic
value
environment map
Prior art date
Legal status
Pending
Application number
CN202210686427.6A
Other languages
Chinese (zh)
Inventor
黄寅
Current Assignee
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202210686427.6A
Publication of CN114859938A


Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - with means for defining a desired trajectory
    • G05D1/0221 - with means for defining a desired trajectory involving a learning process
    • G05D1/0223 - with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 - using optical position detecting means
    • G05D1/0238 - using optical position detecting means using obstacle or wall sensors
    • G05D1/024 - using obstacle or wall sensors in combination with a laser
    • G05D1/0246 - using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257 - using a radar
    • G05D1/0276 - using signals provided by a source external to the vehicle
    • G05D1/0285 - using signals transmitted via a public communication network, e.g. GSM network


Abstract

The application relates to a robot, a dynamic obstacle motion state estimation method and apparatus, a computer device, a storage medium, and a computer program product. The robot acquires point cloud data and positions of an obstacle at different moments; draws an obstacle time image and a static probability image in an environment map according to the point cloud data and positions at the different moments; determines whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image; if the obstacle is a dynamic obstacle, generates a dynamic obstacle time image based on the obstacle time image and the static probability image; and obtains the motion state of the dynamic obstacle from the dynamic obstacle time image. Even when the obstacle is moving, the target point cloud and the background point cloud are segmented with high accuracy, so that the state estimation of the target is more accurate.

Description

Robot, dynamic obstacle state estimation method and device and computer equipment
Technical Field
The present application relates to the field of robotics, and in particular, to a robot, a method and an apparatus for estimating a dynamic obstacle state, a computer device, a storage medium, and a computer program product.
Background
In the field of robot navigation, detecting obstacles and estimating their motion are important tasks. The common approach is based on multi-target tracking, whose basic pipeline comprises, in sequence, target detection, target association, and target tracking. During target detection, segmentation is mainly used to distinguish target data from background data.
In the conventional technique, when an obstacle is moving, the target point cloud and the background point cloud are prone to being segmented incorrectly, which leads to erroneous estimation of the target's state.
Disclosure of Invention
In view of the above, it is necessary to provide a robot, a dynamic obstacle motion state estimation method and apparatus, a computer device, a computer-readable storage medium, and a computer program product that can improve the accuracy of estimating the state of a dynamic obstacle.
In a first aspect, the present application provides a robot comprising a processor and a memory storing computer readable instructions executable on the processor, the processor implementing the following steps when executing the computer readable instructions:
acquiring point cloud data and positions of an obstacle at different moments;
drawing an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map according to the point cloud data and the positions at different moments;
determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image;
if the obstacle is a dynamic obstacle, generating a dynamic obstacle time image of the dynamic obstacle based on the obstacle time image and the static probability image;
and obtaining the motion state of the dynamic obstacle based on the dynamic obstacle time image.
In one embodiment, the drawing of an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map according to the point cloud data and positions at different moments includes:
acquiring a first environment map and a second environment map;
and sequentially converting the point cloud data and positions at each moment into image values of the first environment map and image values of the second environment map according to different image value change trends, to obtain the obstacle time image and the static probability image.
In one embodiment, the point cloud data and positions of the obstacle at different moments are acquired in real time; the converting of the point cloud data and positions at each moment into image values of the first environment map and image values of the second environment map to obtain the obstacle time image and the static probability image includes:
each time point cloud data and positions acquired in real time are received, reducing the image value of each area in the first environment map to obtain a value-reduced first environment map;
projecting the point cloud data and positions acquired in real time into the second environment map and the value-reduced first environment map;
determining, based on the projection result in the first environment map, the areas whose image values are set to a time image preset value, to obtain the obstacle time image;
and determining, based on the projection result in the second environment map, the areas of the second environment map whose image values are increased and/or decreased, to obtain the static probability image.
In one embodiment, the point cloud data and positions are collected in real time by a sensor; the projecting of the point cloud data and positions acquired in real time into the second environment map and the value-reduced first environment map comprises:
converting the obstacle point cloud data and positions into a robot coordinate system according to the parameters of the sensor, to obtain obstacle data in the robot coordinate system;
performing a coordinate system conversion on the obstacle data in the robot coordinate system according to the pose of the robot, to obtain obstacle data in a target coordinate system;
and projecting the obstacle data in the target coordinate system into the second environment map and the value-reduced first environment map, respectively.
In one embodiment, the determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image includes:
carrying out binarization processing on the image values corresponding to each position in the static probability image to obtain a static image;
determining whether the obstacle is a dynamic obstacle based on the static image and the obstacle time image.
In one embodiment, the motion state comprises a speed direction and a speed value of the dynamic obstacle, and the point cloud data and positions of the obstacle at different moments are acquired according to a perception interval of a sensor; the obtaining of the motion state of the dynamic obstacle based on the dynamic obstacle time image comprises:
respectively acquiring, in a plurality of obstacle areas of the dynamic obstacle time image, the image values associated with the acquisition times of the point cloud data and positions;
acquiring a direction value and a trajectory value of each obstacle region based on the image values;
determining the speed direction according to the direction value;
determining a critical area in the plurality of obstacle areas according to the trajectory value;
and calculating the speed value according to the critical area and the perception interval.
In one embodiment, the obtaining of the direction value and the trajectory value of each obstacle region based on the image values includes:
carrying out gradient calculation on the image values of each obstacle region at different moments to obtain a gradient image, the gradient image comprising a gradient direction image and a gradient magnitude image;
acquiring the direction value from the gradient direction image;
and filtering the gradient magnitude image, and taking the image values corresponding to the filtered gradient magnitude image as the trajectory value.
In one embodiment, the determining a critical area of the plurality of obstacle areas according to the trajectory values includes:
determining a first critical area and a second critical area of the plurality of obstacle areas based on the trajectory values and corresponding critical values;
the calculating of the speed value according to the critical areas and the perception interval includes:
calculating a critical area physical distance based on the pixel distance between the first critical area and the second critical area;
calculating the difference between the first critical area and the second critical area to obtain a trajectory difference value;
and calculating the speed value based on the perception interval, the trajectory difference value, and the critical area physical distance.
In one embodiment, the processor is further configured to implement the following steps when executing the computer readable instructions:
when there are multiple dynamic obstacles, determining a region growing seed of a target dynamic obstacle based on the image values of each obstacle region, the target dynamic obstacle being at least one of the dynamic obstacles;
when it is determined that the seed neighborhood of the region growing seed meets the region growing condition, performing region growing with each obstacle region corresponding to the target dynamic obstacle in turn as the next region growing seed, to obtain a region growing result;
determining an obstacle region of the target dynamic obstacle based on the region growing result, the region growing seed, and the seed neighborhoods satisfying the region growing condition.
In one embodiment, the determining that the seed neighborhood of the region growing seed satisfies the region growing condition includes:
in the seed neighborhood of the region growing seed, comparing the image value of each obstacle region with a region growing threshold to obtain a growth threshold comparison result;
calculating the image value difference information between the region growing seed and the seed neighborhood to obtain a difference information comparison result;
performing an image value comparison between the region growing seed and the seed neighborhood to obtain a seed neighborhood comparison result;
and determining that the seed neighborhood meets the region growing condition based on the growth threshold comparison result, the difference information comparison result and the seed neighborhood comparison result.
In a second aspect, the present application further provides a method for estimating a motion state of a dynamic obstacle, where the method includes:
acquiring point cloud data and positions of an obstacle at different moments;
drawing an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map according to the point cloud data and the positions at different moments;
determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image;
if the obstacle is a dynamic obstacle, generating a dynamic obstacle time image of the dynamic obstacle based on the obstacle time image and the static probability image;
and obtaining the motion state of the dynamic obstacle based on the dynamic obstacle time image.
In a third aspect, the present application further provides an apparatus for estimating a motion state of a dynamic obstacle, where the apparatus includes:
the data acquisition module is used for acquiring point cloud data and positions of an obstacle at different moments;
the image construction module is used for drawing an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map according to the point cloud data and the positions at different moments;
a dynamic obstacle determination module for determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image;
a dynamic obstacle image generation module, configured to generate a dynamic obstacle time image of the dynamic obstacle based on the obstacle time image and the static probability image if the obstacle is a dynamic obstacle;
and the dynamic obstacle state estimation module is used for obtaining the motion state of the dynamic obstacle based on the dynamic obstacle time image.
In a fourth aspect, the present application further provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps implemented by the robot in any of the embodiments described above when executing the computer program.
In a fifth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps implemented by the robot of any of the above embodiments.
In a sixth aspect, the present application further provides a computer program product. The computer program product comprising a computer program that when executed by a processor performs the steps performed by the robot in any of the embodiments described above.
According to the robot, the dynamic obstacle motion state estimation method and apparatus, the computer device, the storage medium, and the computer program product, the obstacle time image and the static probability image are drawn in the environment map from the point cloud data and positions of the obstacle at different moments. The same point cloud data and positions are thus drawn according to two different rules, yielding an obstacle time image and a static probability image that carry difference information, so that the obstacle can be accurately determined to be a dynamic obstacle. A high-precision dynamic obstacle time image is then generated based on the obstacle time image and the static probability image, and the motion state of the dynamic obstacle is determined more accurately from this high-precision dynamic obstacle time image.
Drawings
Fig. 1 is an application environment diagram of a dynamic obstacle motion state estimation method of a robot in one embodiment;
FIG. 2 is a schematic flow chart illustrating a method for estimating a motion state of a dynamic obstacle of a robot according to an embodiment;
FIG. 3 is a time image at different times in one embodiment;
FIG. 4 is a static probability image at different times in another embodiment;
FIG. 5 is a static map generated based on a static probabilistic image in one embodiment;
FIG. 6 is a schematic flow chart diagram illustrating the generation of a dynamic obstacle time image in one embodiment;
FIG. 7 is a flow diagram illustrating motion state estimation in one embodiment;
FIG. 8 is a dynamic obstacle time image in one embodiment;
FIG. 9 is a flowchart illustrating a method for estimating a motion state of a dynamic obstacle of a robot according to an embodiment;
FIG. 10 is a schematic flow chart of region growing in one embodiment;
FIG. 11 is a diagram illustrating region growing seeds and seed neighborhoods in one embodiment;
FIG. 12 is a schematic flow chart of region growing in one embodiment;
FIG. 13 is a schematic representation of the results of region growing in one embodiment;
FIG. 14 is a schematic illustration of the result of region growing in one embodiment;
fig. 15 is a block diagram showing a configuration of a dynamic obstacle motion state estimation apparatus of a robot in one embodiment;
FIG. 16 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for estimating the motion state of a dynamic obstacle of a robot provided by the embodiments of the application can be applied in the application environment shown in fig. 1, wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104, or located on the cloud or another network server. The robot acquires point cloud data and positions of an obstacle at different moments; draws an obstacle time image and a static probability image in an environment map according to the point cloud data and positions at the different moments; determines whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image; if the obstacle is a dynamic obstacle, generates a dynamic obstacle time image based on the obstacle time image and the static probability image; and estimates the motion state of the dynamic obstacle based on the dynamic obstacle time image.
The terminal 102 may be, but is not limited to, various robots, personal computers, notebook computers, smart phones, tablet computers, Internet of Things devices, and portable wearable devices; the Internet of Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like. The portable wearable device may be a smart watch, a smart bracelet, a head-mounted device, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a method for estimating a motion state of a dynamic obstacle of a robot is provided, which is described by taking the method as an example applied to the terminal 102 in fig. 1, and includes the following steps:
step 202, point cloud data and positions of obstacles at different moments are obtained.
The point cloud data is the data output after a sensor such as a lidar or an RGB-D camera acquires data about an obstacle. Point cloud data is a set of vectors with at least three dimensions; the set represents the shape of the outer surface of the corresponding obstacle, and each point carries at least partial data about the obstacle, such as the position of that point on the obstacle. The point cloud data and positions of the obstacle at different moments are the point cloud data of the obstacle acquired by a given sensor at different moments, together with the positions corresponding to the acquired point cloud data.
And step 204, drawing an obstacle time image and a static probability image in the environment map according to the point cloud data and the positions at different moments.
The environment map represents known information about the environment in which the robot is located. Each position in the environment map has an image value; by changing the image values of the positions according to different image value change trends, an obstacle time image and a static probability image carrying difference information can be drawn. The obstacle time image is used to characterize the trajectory values of the obstacle, while the static probability image is used to determine whether an obstacle in the obstacle time image is a dynamic obstacle.
Before the obstacle time image and the static probability image are drawn in the environment map, the processor determines the environment map and the corresponding obstacle positions according to the coordinate system of the environment map. One obstacle position may be a pixel block, a grid cell, or the like; the image values within a single obstacle position are identical, and an image value may be expressed in any form, such as a gray value or a color value.
In one embodiment, the step of drawing an obstacle time image and a static probability image in an environment map according to the point cloud data and positions at different moments comprises the following steps: acquiring a first environment map and a second environment map; and, according to different image value change trends, sequentially converting the point cloud data and positions at each moment into image values of the first environment map and image values of the second environment map, to obtain the obstacle time image and the static probability image.
The first environment map and the second environment map are initialized environment maps. When the resolutions of the two maps are the same, an obstacle time image and a static probability image with the same resolution are obtained, so that generating the dynamic obstacle time image from them occupies fewer computing resources of the processor and the computation is accelerated.
The image value change trend determines the conversion rule for the point cloud data and positions; different image value change rules cause the image values of the first environment map and the second environment map to differ, so that the difference information between the two maps can be computed.
In one embodiment, the point cloud data and positions of the obstacle at different moments are collected in real time, so that the processor analyzes the real-time obstacle data, determines real-time changes of the obstacle, and plans the moving path of the robot to identify, operate on, approach, or avoid the corresponding obstacle. The environment map is a two-dimensional map; each area of the environment map is obtained by projecting point cloud data and positions, and one area may be a grid cell.
Correspondingly, sequentially projecting the point cloud data and positions at each moment into the first environment map and the second environment map to obtain the obstacle time image and the static probability image comprises: each time point cloud data and positions acquired in real time are received, reducing the image value of each area in the first environment map to obtain a value-reduced first environment map; projecting the point cloud data and positions acquired in real time into the second environment map and the value-reduced first environment map; determining, based on the projection result in the first environment map, the areas whose image values are set to a time image preset value, to obtain the obstacle time image; and determining, based on the projection result in the second environment map, the areas of the second environment map whose image values are increased and/or decreased, to obtain the static probability image.
As shown in fig. 3, each time point cloud data and positions acquired in real time are received, the image value of each area in the first environment map is reduced, yielding the value-reduced first environment map. The image values of the areas in the first environment map therefore decrease over time, so that obstacle data at different moments are displayed clearly, making it easy to determine which point cloud data and positions were acquired at which moments. The point cloud data and positions acquired in real time are projected into the second environment map and the value-reduced first environment map; reducing the dimensionality of the point cloud data and positions removes the dimensions incompatible with the environment map, yielding a projection result with the same dimensionality as the environment map.
Based on the projection result in the first environment map, the areas whose image values are set to the time image preset value are determined, so that the difference information between the real-time obstacle areas and the other areas of the first environment map is displayed progressively.
As shown in fig. 4, determining the areas where the image value increases and/or decreases in the second environment map based on the projection result in the second environment map includes: judging, based on the projection result, whether real-time obstacle data exist in each obstacle area of the second environment map; increasing the image value of obstacle areas with real-time obstacle data; and/or reducing the image value of obstacle areas without real-time obstacle data. Thus, as time passes, the image values of dynamic obstacle positions in the second environment map do not exceed the corresponding threshold, while static obstacle positions accumulate image value, which facilitates determining whether an obstacle is a dynamic obstacle.
For example: after the first environment map is determined, whenever the point cloud data and positions corresponding to one frame of image are fused into it, the image values are reduced once per frame and the image values of the obstacle areas corresponding to that frame's point cloud data are set to the time image preset value, yielding the obstacle time image. In the second environment map, each area where no corresponding point cloud data and position are detected is reduced once, and each area where an obstacle is detected is increased once, yielding the static probability image.
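The two update rules above can be summarized in a short sketch. The following is a minimal illustration assuming the two maps are integer grids; the names (TIME_IMAGE_PRESET, DECAY, STATIC_MAX) and the per-frame step sizes are illustrative assumptions, not values fixed by the application:

```python
import numpy as np

H, W = 200, 200
TIME_IMAGE_PRESET = 255   # value stamped on cells occupied in the current frame
DECAY = 1                 # per-frame decrement of the obstacle time image
STATIC_MAX = 255          # cap on accumulated static-obstacle evidence

time_image = np.zeros((H, W), dtype=np.int16)    # first environment map
static_prob = np.zeros((H, W), dtype=np.int16)   # second environment map

def update_maps(occupied):
    """occupied: boolean HxW mask of cells hit by this frame's point cloud."""
    global time_image, static_prob
    # First map: decay every cell, then stamp current detections to the preset.
    time_image = np.clip(time_image - DECAY, 0, None)
    time_image[occupied] = TIME_IMAGE_PRESET
    # Second map: accumulate where an obstacle is seen, decay elsewhere.
    static_prob[occupied] = np.minimum(static_prob[occupied] + 1, STATIC_MAX)
    static_prob[~occupied] = np.maximum(static_prob[~occupied] - 1, 0)
```

With these rules, a static obstacle holds the preset value in the time image and accumulates toward the cap in the probability map, while a dynamic obstacle leaves a decaying trail behind its current cell.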
In one embodiment, the point cloud data and positions are collected in real time by a sensor. Point cloud data and positions collected in real time by the sensor provide the robot with real-time information, and can be collected whether or not the robot is networked.
Projecting the point cloud data and positions acquired in real time into the second environment map and the value-reduced first environment map comprises: converting the obstacle point cloud data and positions into a robot coordinate system according to the parameters of the sensor, to obtain obstacle data in the robot coordinate system; performing a coordinate system conversion on the obstacle data in the robot coordinate system according to the pose of the robot, to obtain obstacle data in a target coordinate system; and projecting the obstacle data in the target coordinate system into the second environment map and the value-reduced first environment map, respectively.
The robot coordinate system is a coordinate system set based on the position of a certain component of the robot, and is used to determine position information relative to the world coordinate system. The robot coordinate system may be, for example, a joint coordinate system or a workpiece coordinate system; a position vector in the robot coordinate system determines the position of the robot itself or of an obstacle.
The robot pose is data defined in the robot coordinate system; it comprises the position and orientation coordinates of the robot and may also involve velocities or joint poses. The robot pose characterizes the robot's attitude in space, and different poses have corresponding motion tendencies and characteristics.
When the target coordinate system is the world coordinate system, obstacle avoidance is carried out directly on the basis of the obstacle data in the world coordinate system. When the target coordinate system is defined from the environment map, the obstacle data in the world coordinate system are first converted into the environment map coordinate system, and obstacle avoidance is then carried out according to the resolution of that coordinate system.
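A compact sketch of this transform chain follows, assuming 2D homogeneous transforms; the helper names (se2, project_points) and the argument layout are illustrative rather than part of the application:

```python
import numpy as np

def se2(x, y, theta):
    """2D homogeneous transform (rotation theta, translation x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def project_points(points_sensor, T_robot_sensor, T_world_robot,
                   map_origin, resolution):
    """points_sensor: Nx2 obstacle points in the sensor frame.
    Returns integer grid indices in the environment map."""
    pts = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
    # sensor frame -> robot frame -> world frame
    pts_world = (T_world_robot @ T_robot_sensor @ pts.T).T[:, :2]
    # world frame -> map grid, using the map resolution
    return np.floor((pts_world - map_origin) / resolution).astype(int)
```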
And step 206, determining whether the obstacle is a dynamic obstacle or not based on the obstacle time image and the static probability image.
Because the image value change trends of the obstacle time image and the static probability image differ, the same obstacle area has a different appearance in each, and this difference in appearance reinforces the distinction between dynamic and static obstacle areas, making it easier to judge whether an obstacle is dynamic.
In one embodiment, determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image includes: carrying out binarization processing on the image values corresponding to each position in the static probability image to obtain a static image; and determining whether the obstacle is a dynamic obstacle based on the static image and the obstacle time image.
As shown in fig. 5, binarizing the static probability map is a process of comparing each image value with a binarization threshold. Positions whose image value exceeds the threshold are determined to be static obstacle positions, and the remaining positions are non-static-obstacle positions; the two classes are assigned different image values, yielding a static image with exactly two image values. The position difference between the static image and the obstacle time image is then calculated, and whether the obstacle is a dynamic obstacle is determined from the result. The image value of a static obstacle is the same as or similar to that of the background, while the image value of a dynamic obstacle differs significantly from the background.
For example: with a binarization threshold of 100, a position whose image value in the static probability image is 30 is set to 0 and determined to contain no static obstacle; a position whose image value is 160 is set to 255 and determined to contain a static obstacle. The result is a static image whose values are either 0 or 255.
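A one-function sketch of this step, following the example's threshold of 100 and output values of 0 and 255 (the function name is an illustrative assumption):

```python
import numpy as np

BIN_THRESHOLD = 100

def binarize_static(static_prob):
    # 255 marks positions judged to hold static obstacles, 0 everything else
    return np.where(static_prob > BIN_THRESHOLD, 255, 0).astype(np.uint8)
```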
And step 208, if the obstacle is a dynamic obstacle, generating a dynamic obstacle time image based on the obstacle time image and the static probability image.
The dynamic obstacle time image characterizes the positions occupied by a dynamic obstacle at different moments. Each dynamic obstacle area in the image has a corresponding image value that changes gradually over time; computations based on these gradually changing values allow the motion state of the dynamic obstacle to be estimated accurately.
The dynamic obstacle time image may be obtained by computing the difference information between the image values at each position of the obstacle time image and the static probability image; alternatively, a static image is first generated from the static probability image, and the image value computation is then carried out between the static image and the obstacle time image, which yields a more accurate dynamic obstacle time image.
In one embodiment, as shown in fig. 6, when the static image and the obstacle time image are combined, the areas whose values differ from the background are taken as the obstacle areas in the static image of fig. 6(a) and the obstacle time image of fig. 6(b), respectively. The difference information between the obstacle areas of the two images is computed, and based on the result the static obstacles are removed from the obstacle time image, yielding the dynamic obstacle time image of fig. 6(c).
And step 210, estimating the motion state of the dynamic obstacle based on the dynamic obstacle time image.
Each obstacle area in the dynamic obstacle time image has a corresponding image value; these image values represent the trajectory values of the dynamic obstacle, and the motion state of the dynamic obstacle can be estimated more accurately from them.
In one embodiment, estimating the motion state of the dynamic obstacle based on the dynamic obstacle time image includes: determining the obstacle areas in the dynamic obstacle time image, determining critical areas among them, calculating the map distance between the critical areas on the environment map, and computing the motion state of the dynamic obstacle from that map distance and the resolution of the environment map.
In the dynamic obstacle motion state estimation method executed by the robot's processor, an obstacle time image and a static probability image are drawn in the environment map from the point cloud data and positions of the obstacle at different moments. The same point cloud data and positions are drawn according to two different rules, yielding an obstacle time image and a static probability image that carry difference information, so that the obstacle can be accurately determined to be a dynamic obstacle; a high-precision dynamic obstacle time image is then generated from the two images, and the motion state of the dynamic obstacle is determined more accurately from it.
In one embodiment, the motion state comprises a speed direction and a speed value of the dynamic obstacle, and the point cloud data and positions of the obstacle at different moments are acquired according to a perception interval of the sensor. The perception interval is the time interval between successive acquisitions of point cloud data and positions, and corresponds to the sensor frequency; for example, the perception interval is 0.1 s when the sensor runs at 10 fps. As shown in fig. 7, the motion state of the dynamic obstacle is estimated from the dynamic obstacle time image as follows:
step 702, obtaining image values related to the point cloud data and the acquisition time of the position in a plurality of obstacle areas of the dynamic obstacle time image respectively.
In the plurality of obstacle areas of the dynamic obstacle time image, each obstacle area has a corresponding image value; these image values are used to identify the direction value of each obstacle area and the moment to which each obstacle area corresponds.
Step 704, obtaining direction values and trajectory values of each obstacle area based on the image values.
The direction value characterizes the direction in which the corresponding obstacle area moves and may be any one or more of a vector, a map, or an angle value; the trajectory value characterizes the time at which an obstacle was present in the corresponding obstacle area and may be one or more of a gray value, a color value, and the like.
In one embodiment, obtaining the direction value and the trajectory value of each obstacle area based on the image values comprises: carrying out gradient calculation on the image values of each obstacle area at different moments to obtain a gradient image, the gradient image comprising a gradient direction image and a gradient magnitude image; acquiring the direction value from the gradient direction image; and filtering the gradient magnitude image, and taking the image values corresponding to the filtered gradient magnitude image as the trajectory value.
The gradient calculation on the image values of each obstacle area at different moments may be performed with a Sobel operator, specifically: determining the neighborhood corresponding to each obstacle area and calculating the approximate difference between each obstacle area and its neighborhood; weighting the neighborhood difference approximations of each obstacle area to obtain its gradient value; and applying an inverse trigonometric function to the neighborhood difference approximations to obtain its gradient direction.
Filtering the gradient magnitude image means discarding gradient values whose magnitude is too large and taking the image values corresponding to the smaller gradient magnitudes as the trajectory values. In other words, the trajectory values are image values filtered by gradient magnitude.
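A hedged sketch of the gradient step using OpenCV's Sobel operator; the application does not name a library, and MAG_LIMIT and the 5×5 kernel are illustrative choices:

```python
import cv2
import numpy as np

MAG_LIMIT = 200.0   # illustrative cutoff for "overly large" magnitudes

def gradient_images(dyn_time_image, ksize=5):
    gx = cv2.Sobel(dyn_time_image, cv2.CV_32F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(dyn_time_image, cv2.CV_32F, 0, 1, ksize=ksize)
    magnitude = np.hypot(gx, gy)    # gradient magnitude image
    direction = np.arctan2(gy, gx)  # gradient direction image
    # keep trajectory values only where the magnitude is not excessive
    trajectory = np.where(magnitude < MAG_LIMIT, dyn_time_image, 0)
    return direction, magnitude, trajectory
```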
Step 706, determining the speed direction according to the direction value.
When acquiring the direction values from the gradient direction image, the direction values of all obstacle areas belonging to the same dynamic obstacle are averaged, and the averaged result is taken as the speed direction of that dynamic obstacle.
Step 708, determining a critical area of the plurality of obstacle areas according to the trajectory values;
the critical area refers to a boundary area among the plurality of barrier areas. The critical area comprises at least one obstacle area in a plurality of obstacle areas, and the number and the shape of the obstacle areas occupied by the critical area are in corresponding relation with the point cloud data of the obstacles. For example: four obstacle regions corresponding to certain point cloud data constitute a square obstacle region set, and the square obstacle region set constitutes a critical region.
Determining a critical area among the plurality of obstacle areas is a process in which the processor compares the image value of each obstacle area with a critical value: when the image value of an obstacle area reaches the critical value corresponding to a critical area, that obstacle area is determined to belong to the critical area.
Step 710, calculating the speed value according to the critical areas and the perception interval.
In one embodiment, determining a critical area among the plurality of obstacle areas based on the trajectory values includes: determining a first critical area and a second critical area based on the trajectory values and the corresponding critical values. For example: the critical values include the time image preset value and the region growing threshold; the time image preset value indicates that the obstacle data is current, and the region growing threshold represents the image value boundary for region growing. The trajectory value of the first critical area is the time image preset value, and the trajectory value of the second critical area is the region growing threshold.
Correspondingly, calculating the speed value from the critical areas and the perception interval comprises: calculating the critical area physical distance from the pixel distance between the first critical area and the second critical area; calculating the difference between the first critical area and the second critical area to obtain a trajectory difference value; and calculating the speed value from the perception interval, the trajectory difference value, and the critical area physical distance.
In one embodiment, as shown in FIG. 8, the speed values are estimated as follows:
V = D * Res / ((255 - T) * dt);
where D is the pixel distance between the first critical area and the second critical area, Res is the resolution of the environment map, 255 is the time image preset value, T is the region growing threshold, (255 - T) is the trajectory difference value, and dt is the perception interval of the sensor.
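A worked sketch of this estimate; the function name and the example numbers are illustrative:

```python
def estimate_speed(D_pixels, resolution, T, dt, preset=255):
    track_diff = preset - T                    # frames elapsed between the two critical areas
    physical_distance = D_pixels * resolution  # meters
    return physical_distance / (track_diff * dt)

# Example: critical areas 20 pixels apart, 0.05 m per cell, T = 245,
# sensor at 10 fps (dt = 0.1 s):
#   20 * 0.05 / ((255 - 245) * 0.1) = 1.0 m/s
print(estimate_speed(20, 0.05, 245, 0.1))  # 1.0
```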
In this embodiment, the image values of the obstacle areas in the dynamic obstacle time image are comparatively accurate, so the direction value and trajectory value of the dynamic obstacle are obtained more accurately; the direction value determines the movement direction of the dynamic obstacle, the trajectory value determines the critical areas of its movement, and computing on the critical areas and the sensor's perception interval yields a more accurate speed value.
In one embodiment, the obstacle point cloud data and positions are point cloud coordinates. Correspondingly, as shown in fig. 9, the processor performs the following steps: converting the point cloud coordinates at different moments into an obstacle time image and a static probability image, generating a static image from the static probability image, generating a dynamic obstacle time image from the obstacle time image and the static image, performing gradient calculation on the dynamic obstacle time image, and finally estimating the motion state of the dynamic obstacle.
In one embodiment, as shown in fig. 10, different dynamic obstacles may exist in the same environment map at the same time, and in order to identify the obstacle area corresponding to each of the different dynamic obstacles, the processor is further configured to execute the computer-readable instructions to implement the following steps:
Step 1002, when there are multiple dynamic obstacles, determining a region growing seed of a target dynamic obstacle based on the image values of each obstacle area, the target dynamic obstacle being at least one of the dynamic obstacles.
A region growing seed is at least one obstacle area whose image value equals a preset value. When the processor detects that an obstacle area's image value reaches the preset value, it determines that the area belongs to the region growing seeds of the target dynamic obstacle. For example: when the image value equals the time image preset value, the area is determined to be a region growing seed. As shown in FIG. 11, grid A is a region growing seed, and the grids B_i, with 1 ≤ i ≤ 8, form the seed neighborhood of A.
In one embodiment, the point cloud data and positions of the obstacles at different moments are acquired in real time; when they are received and the obstacle time image and static probability image are generated, the dynamic obstacle time image is generated from those two images. In the dynamic obstacle time image, the image value of the obstacle areas corresponding to the point cloud data and positions collected in real time is the time image preset value; correspondingly, the obstacle areas whose image value equals the time image preset value are the region growing seeds of the target dynamic obstacle.
In one embodiment, determining that a seed neighborhood of a region growing seed satisfies the region growing condition comprises: within the seed neighborhood, comparing the image value of each obstacle area with the region growing threshold to obtain a growth threshold comparison result; calculating the image value difference information between the region growing seed and the seed neighborhood to obtain a difference information comparison result; comparing the image values of the region growing seed and the seed neighborhood to obtain a seed neighborhood comparison result; and determining, from the growth threshold comparison result, the difference information comparison result, and the seed neighborhood comparison result, that the seed neighborhood satisfies the region growing condition. When the seed neighborhood of a region growing seed satisfies the region growing condition, the seed and that neighborhood are judged to represent the motion trail of the same target dynamic obstacle, which allows the individual dynamic obstacles to be distinguished.
In a specific embodiment, the growth condition is:
T ≤ V_(i+1) ≤ V_i
where V_i denotes the image value of the region growing seed, V_(i+1) denotes the image value of a cell in its seed neighborhood, and T denotes the region growing threshold; when the condition holds, the region growing seed and the corresponding seed neighborhood cell are classified as the same object.
Step 1004, when it is determined that the seed neighborhood of the region growing seed satisfies the region growing condition, performing region growing with each obstacle area corresponding to the target dynamic obstacle in turn as the next region growing seed, to obtain a region growing result.
The seed neighborhood is the neighborhood of a region growing seed of the target obstacle; the distance between each neighborhood cell and the corresponding region growing seed is within a preset range. The seed neighborhood supplies the growth condition of the region growing seed, and a neighborhood cell that satisfies the region growing condition can serve as the next region growing seed for continued growth.
In one embodiment, taking each obstacle area corresponding to the target dynamic obstacle in turn as the next region growing seed includes: determining an obstacle area of the target dynamic obstacle on which region growing has not yet been performed, to obtain a target region growing seed; determining that its target seed neighborhood satisfies the region growing condition; and taking that target seed neighborhood as the next region growing seed, until the seed neighborhood of the next region growing seed no longer satisfies the region growing condition.
Step 1006, determining an obstacle area of the target dynamic obstacle based on the region growing result, the region growing seeds, and the seed neighborhoods satisfying the region growing condition.
In one embodiment, as shown in fig. 12, the obstacle area of the target dynamic obstacle is obtained by operating on the dynamic obstacle time image of the target obstacle in fig. 12(a) and the region growing seeds of the target obstacle in fig. 12(b). When there are multiple dynamic obstacles, the obstacle areas of the dynamic obstacles form at least two independent sets of grid areas; one target dynamic obstacle is selected among them, and the obstacle area corresponding to fig. 12(c) is determined as the obstacle area of the target dynamic obstacle.
In one embodiment, as shown in fig. 13, when there are a plurality of dynamic obstacles, the obstacle regions of each dynamic obstacle constitute at least two independent grid region sets, and the obstacle region of any dynamic obstacle can be used as the obstacle region of the target dynamic obstacle.
In this embodiment, the region growing seeds of the target dynamic obstacle are determined from the image values of the obstacle areas; these seeds are the seeds for the first round of region growing. By setting the region growing condition, the corresponding seed neighborhoods are identified, each neighborhood satisfying the condition performs region growing in turn, and the process iterates or loops until the obstacle area of the target dynamic obstacle is obtained. The trajectory of at least one target dynamic obstacle is thereby separated from the multiple dynamic obstacles, and its motion state can then be estimated accurately.
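A sketch of such a region growing pass is given below, under the reconstructed growth condition T ≤ neighbor ≤ seed, with an 8-connected seed neighborhood and the time image preset value 255 as the seed value; the function name and the threshold constant are illustrative assumptions:

```python
import numpy as np
from collections import deque

T, PRESET = 245, 255   # region growing threshold, time image preset value

def grow_region(dyn_time_image, seed_rc):
    """Grow one independent grid region set from a seed cell (row, col)."""
    H, W = dyn_time_image.shape
    region, queue = {seed_rc}, deque([seed_rc])
    while queue:
        r, c = queue.popleft()
        v_seed = int(dyn_time_image[r, c])
        for dr in (-1, 0, 1):          # 8-connected seed neighborhood
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < H and 0 <= nc < W \
                        and (nr, nc) not in region:
                    v = int(dyn_time_image[nr, nc])
                    if T <= v <= v_seed:        # reconstructed growth condition
                        region.add((nr, nc))    # same object as the seed
                        queue.append((nr, nc))  # this cell grows next
    return region  # current position plus decaying trajectory of one obstacle

# Seeds are the cells currently at the preset value, e.g.:
# seeds = list(zip(*np.where(dyn_time_image == PRESET)))
```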
In one embodiment, the overall flow of the scheme is shown by a more specific example, in which the image values are grid values so that the related data are easier to display. The flow comprises four parts: generating the time raster images, gradient calculation, region growing, and state estimation.
First, the process of generating the time raster images is discussed; it comprises four sub-steps: point cloud coordinate conversion, generating the obstacle time image, generating the static image, and generating the dynamic obstacle time image.
The point cloud coordinate conversion sub-step comprises: acquiring the point cloud data and positions; converting them into the robot coordinate system according to the sensor's extrinsic parameters, to obtain obstacle data; converting the obstacle data into the world coordinate system based on the robot's current pose; and converting into the map coordinate system based on the map resolution.
The sub-step of generating the obstacle time image comprises: initializing the first environment map with all grid values set to 0. Each time the point cloud data corresponding to one frame of image is fused into the time image, every grid value is first decremented by 1, the obstacle data obtained from the point cloud conversion is projected into the first environment map, and the grids where obstacle data land are set to 255. Thus, for a static obstacle, the grid value in the map stays at 255, while for a dynamic obstacle the grid value is 255 at its current position and decays over time along its historical trajectory.
The sub-step of generating the static image comprises: initializing the second environment map with all grid values set to 0. After each frame of point cloud data is received, the positions where no obstacle is detected are decremented by 1 in the static probability map, and the positions where an obstacle is detected are incremented by 1, yielding the static probability image; the static probability image is then binarized to obtain the static image. Specifically, the grid values of static obstacles in the static probability image rise steadily, whereas for a dynamic obstacle the grid value rises briefly and then falls back to the background value; after binarization, grids with value 255 in the static image represent static obstacles.
The dynamic obstacle time image generation sub-step comprises: fusing the static image and the obstacle time image. For each non-zero grid in the time image, if the corresponding grid value in the static image is 0, the grid is output to the dynamic obstacle time image; otherwise the grid belongs to a static obstacle and is discarded, so that only the dynamic obstacles and their trajectories remain.
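This fusion is a masking operation; a sketch, reusing the images from the previous sub-steps:

    import numpy as np

    def fuse_dynamic_time_image(time_img, static_img):
        # Keep the time image only where the static image reads 0 (not static);
        # grids belonging to static obstacles are cleared to 0
        return np.where(static_img == 0, time_img, 0).astype(time_img.dtype)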
After the dynamic obstacle time image is obtained, the gradient calculation step is performed. It comprises: processing the dynamic obstacle time image with a 5×5 or 7×7 Sobel operator to obtain a gradient image, where the gradient image comprises a gradient direction image and a gradient magnitude image, and then filtering out the gradient values with excessively large magnitudes.
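A sketch of this step using OpenCV's Sobel operator (5×5 here); the magnitude cut-off used to discard the overly large gradients is an illustrative assumption:

    import cv2
    import numpy as np

    def compute_gradient(dyn_time_img, ksize=5, mag_cutoff=300.0):
        img = dyn_time_img.astype(np.float32)
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=ksize)  # horizontal gradient
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=ksize)  # vertical gradient
        magnitude, direction = cv2.cartToPolar(gx, gy, angleInDegrees=True)
        magnitude[magnitude > mag_cutoff] = 0  # filter excessively large magnitudes
        return direction, magnitude  # gradient direction and magnitude images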
After the gradient calculation, the grid points whose value in the dynamic obstacle time image is 255 are taken as seed points, and the region growing algorithm is executed to obtain a plurality of independent grid region sets, where each independent grid region set represents the current position and movement trajectory of the corresponding target dynamic obstacle, as shown in fig. 14.
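A simplified region-growing sketch over the dynamic obstacle time image: the seeds are the grids valued 255, and the growing condition used here (the neighbour carries a trace above a threshold and is not brighter than the current grid) is a reduced stand-in for the full region growing condition of the embodiments:

    import numpy as np
    from collections import deque

    def region_grow(dyn_img, grow_threshold=30):
        visited = np.zeros_like(dyn_img, dtype=bool)
        regions = []
        for seed in map(tuple, np.argwhere(dyn_img == 255)):
            if visited[seed]:
                continue
            region, queue = [], deque([seed])
            visited[seed] = True
            while queue:
                r, c = queue.popleft()
                region.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < dyn_img.shape[0] and 0 <= nc < dyn_img.shape[1]
                            and not visited[nr, nc]
                            and grow_threshold <= dyn_img[nr, nc] <= dyn_img[r, c]):
                        visited[nr, nc] = True
                        queue.append((nr, nc))
            regions.append(region)  # one independent grid region set per obstacle
        return regions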
After the grid region sets are obtained, for each subset in the region set C, the mean value of the corresponding direction image is taken as the velocity direction of the object that the subset represents. Velocity estimation is then performed for each subset in the region set C; in this example, the target dynamic obstacle is determined to be moving leftwards at a speed of 1.0 m/s.
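For one grid region set, the direction and speed estimation can be sketched as follows; taking the mean of the direction image over the region follows the embodiment, while reading the speed from the newest and oldest grids of the trajectory (one gray level of decay per frame) is an assumed simplification of the critical-area computation:

    import numpy as np

    def estimate_state(region, dyn_img, direction_img, resolution, perception_interval):
        cells = np.array(region)
        values = dyn_img[cells[:, 0], cells[:, 1]]
        # Velocity direction: mean gradient direction over the region (degrees)
        direction = float(np.mean(direction_img[cells[:, 0], cells[:, 1]]))
        newest = cells[values.argmax()]  # current position (value 255)
        oldest = cells[values.argmin()]  # tail of the historical trajectory
        pixel_dist = np.linalg.norm(newest - oldest)
        frames = max(int(values.max()) - int(values.min()), 1)  # 1 level per frame
        speed = pixel_dist * resolution / (frames * perception_interval)
        return direction, speed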
It should be understood that, although the steps in the flowcharts of the embodiments described above are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least a part of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turns or alternately with other steps, or with at least a part of the sub-steps or stages of other steps.
In one embodiment, the present application further provides a method for estimating the motion state of a dynamic obstacle, the method comprising: acquiring point cloud data and positions of an obstacle at different moments; drawing an obstacle time image and a static probability image in an environment map according to the point cloud data and the positions at the different moments; determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image; if the obstacle is a dynamic obstacle, generating a dynamic obstacle time image based on the obstacle time image and the static probability image; and estimating the motion state of the dynamic obstacle based on the dynamic obstacle time image. In this way, when the obstacle is in motion, the target point cloud is segmented from the background point cloud easily and accurately, and the motion state of the target is estimated with higher accuracy.
Based on the same inventive concept, an embodiment of the present application further provides a dynamic obstacle motion state estimation device for a robot, which is used for implementing the above-mentioned dynamic obstacle motion state estimation method. The solution to the problem provided by the device is similar to that described in the method above, so for the specific limitations in the following embodiments of the device, reference may be made to the limitations of the dynamic obstacle motion state estimation method above; details are not repeated herein.
In one embodiment, as shown in fig. 15, there is provided a dynamic obstacle motion state estimation apparatus of a robot, including: a data acquisition module 1502, an image construction module 1504, a dynamic obstacle determination module 1506, a dynamic obstacle image generation module 1508, and a dynamic obstacle state estimation module 1510, wherein:
a data acquisition module 1502, configured to acquire point cloud data and positions of an obstacle at different times;
the image construction module 1504 is used for drawing an obstacle time image and a static probability image in an environment map according to the point cloud data and the positions at different moments;
a dynamic obstacle determination module 1506, configured to determine whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image;
a dynamic obstacle image generation module 1508, configured to generate a dynamic obstacle time image based on the obstacle time image and the static probability image if the obstacle is a dynamic obstacle;
a dynamic obstacle state estimation module 1510 configured to estimate a motion state of the dynamic obstacle based on the dynamic obstacle time image.
In one embodiment, the image construction module 1504 includes:
the initialization unit is used for acquiring a first environment map and a second environment map;
and the point cloud data conversion unit is used for sequentially converting the point cloud data and the positions at all times into the image numerical values of the first environment map and the second environment map according to different image numerical value change trends to obtain an obstacle time image and a static probability image.
In one embodiment, the point cloud data and the position of the obstacle at different moments in time are acquired in real time; the point cloud data conversion unit comprises:
the preprocessing subunit is used for reducing the image numerical values of the areas in the first environment map each time point cloud data and positions acquired in real time are received, to obtain the first environment map after the value reduction;
the point cloud conversion subunit is used for projecting the point cloud data and the position acquired in real time to the second environment map and the first environment map after the value reduction;
the obstacle time image generation subunit is used for determining an area in the first environment map, in which an image numerical value is set as a preset value of a time image, based on a projection result in the first environment map, so as to obtain an obstacle time image;
and the static probability image generation subunit is used for determining an area in the second environment map where the image value increases and/or decreases based on the projection result in the second environment map to obtain the static probability image.
In one embodiment, the point cloud data and location are collected in real-time by a sensor; the point cloud conversion subunit comprises:
the first coordinate system conversion subunit is used for converting the obstacle point cloud data and the position into a robot coordinate system according to the parameters of the sensor to obtain obstacle data in the robot coordinate system;
the second coordinate system conversion subunit is used for performing coordinate system conversion on the obstacle data in the robot coordinate system according to the pose of the robot to obtain the obstacle data in the target coordinate system;
and the projection conversion subunit is used for projecting the obstacle data in the target coordinate system to the second environment map and the first environment map after the value reduction respectively.
In one embodiment, the dynamic obstacle determining module 1506 includes:
a binarization unit, configured to perform binarization processing on an image value corresponding to the position in the static probability image to obtain a static image;
and the dynamic obstacle judging unit is used for determining whether the obstacle is a dynamic obstacle or not based on the static image and the obstacle time image.
In one embodiment, the motion state comprises a speed direction and a speed value of the dynamic obstacle, and point cloud data and positions of the obstacle at different moments are acquired according to a perception interval of a sensor; the dynamic obstacle state estimation module 1510 comprises:
the image numerical value acquisition unit is used for respectively acquiring image numerical values related to the point cloud data and the position acquisition time in a plurality of obstacle areas of the dynamic obstacle time image;
the data extraction unit is used for acquiring a direction value and a track value of each obstacle area based on the image numerical value;
a speed direction determination unit for determining the speed direction according to the direction value;
a critical area determination unit configured to determine a critical area of the plurality of obstacle areas according to the track value;
and the speed calculating unit is used for calculating the speed value according to the critical area and the perception interval.
In one embodiment, the data extraction unit includes:
the gradient calculation subunit is used for performing gradient calculation on the image numerical values of the obstacle regions at different moments to obtain a gradient image; the gradient image comprises a gradient direction image and a gradient magnitude image;
the direction value calculation subunit is used for acquiring the direction value from the gradient direction image;
and the track value calculation subunit is used for filtering the gradient magnitude image and taking the image numerical value corresponding to the filtered gradient magnitude image as the track value.
In one embodiment, the critical area determining unit includes:
a critical area determining subunit, configured to determine a first critical area and a second critical area of the plurality of obstacle areas based on the track values and the corresponding critical values;
correspondingly, the speed calculation unit comprises:
a physical distance calculating subunit, configured to calculate a critical area physical distance based on a pixel distance between the first critical area and the second critical area;
the difference calculation subunit is used for calculating the difference between the first critical area and the second critical area to obtain a track difference value;
and the speed value calculation subunit is used for calculating the speed value based on the perception interval, the track difference value and the physical distance of the critical area.
In one embodiment, the apparatus further comprises a region growing module, the region growing module comprising:
the region growing seed determination unit is used for determining the region growing seeds of the target dynamic obstacle based on the image numerical values of the obstacle regions when there are a plurality of dynamic obstacles; the target dynamic obstacle belongs to at least one of the dynamic obstacles;
the region growing unit is used for, when it is determined that a seed neighborhood of the region growing seeds satisfies the region growing condition, taking each obstacle region corresponding to the target dynamic obstacle as the next region growing seed for region growing, to obtain a region growing result;
and the region growing completion unit is used for determining the obstacle region of the target dynamic obstacle based on the region growing result, the region growing seeds and the seed neighborhoods satisfying the region growing condition.
In one embodiment, the region growing unit includes:
the first comparison unit is used for comparing the image numerical value of each obstacle area with an area growth threshold value in the seed neighborhood of the area growth seeds to obtain a growth threshold value comparison result;
the second comparison unit is used for calculating image numerical difference information of the region growing seeds and the seed neighborhood to obtain a difference information comparison result;
the third comparison unit is used for carrying out image numerical comparison processing on the region growing seeds and the seed neighborhoods to obtain a seed neighborhood comparison result;
and the condition judgment unit is used for determining that the seed neighborhood meets the region growth condition based on the growth threshold comparison result, the difference information comparison result and the seed neighborhood comparison result.
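A sketch of such a condition check, combining the three comparisons above into one predicate; the threshold values are illustrative assumptions:

    def satisfies_growing_condition(seed_value, neighbor_value,
                                    grow_threshold=30, max_difference=10):
        # 1) growth threshold comparison: the neighbour must still carry a trace
        above_threshold = neighbor_value >= grow_threshold
        # 2) difference information comparison: seed and neighbour close in value
        small_difference = abs(int(seed_value) - int(neighbor_value)) <= max_difference
        # 3) seed neighbourhood comparison: grow along the decaying trail only
        not_brighter = neighbor_value <= seed_value
        return above_threshold and small_difference and not_brighter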
The modules in the above device for estimating the motion state of a dynamic obstacle of a robot can be realized wholly or partially by software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in a processor of the computer device or be independent of it, or may be stored, in software form, in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 16. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected by a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner; the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements a method for estimating the motion state of a dynamic obstacle of a robot. The display unit of the computer device is used for presenting a visible picture and may be a display screen, a projection device or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, etc., without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (13)

1. A robot, comprising a memory and a processor, wherein the memory is configured to store computer readable instructions executable on the processor, and the processor is configured to perform the following steps when executing the computer readable instructions:
acquiring point cloud data and positions of an obstacle at different moments;
drawing an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map according to the point cloud data and the positions at different moments;
determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image;
if the obstacle is a dynamic obstacle, generating a dynamic obstacle time image of the dynamic obstacle based on the obstacle time image and the static probability image;
and obtaining the motion state of the dynamic obstacle based on the dynamic obstacle time image.
2. The robot of claim 1, wherein said mapping an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map from the point cloud data and the location at different times comprises:
acquiring a first environment map and a second environment map;
and sequentially converting the point cloud data and the positions at all times into image values of the first environment map and image values of the second environment map according to different image value change trends to obtain the obstacle time image and the static probability image.
3. The robot of claim 2, wherein the point cloud data and the location of the obstacle at different times are collected in real time; the converting the point cloud data and the positions at each moment into an image numerical value of the first environment map and an image numerical value of the second environment map to obtain the obstacle time image and the static probability image includes:
reducing the image numerical value of each area in the first environment map when point cloud data and positions acquired in real time are acquired, and obtaining a first environment map after value reduction;
projecting the point cloud data and the position acquired in real time to the second environment map and the first environment map subjected to value reduction;
determining an area in which an image numerical value in the first environment map is set as a preset value of a time image based on a projection result in the first environment map, and obtaining the time image of the obstacle;
and determining the area of the second environment map in which the image value is increased and/or decreased based on the projection result in the second environment map to obtain the static probability image.
4. A robot as claimed in claim 3, wherein the point cloud data and location are acquired in real time by a sensor; the projecting the point cloud data and the position acquired in real time to the second environment map and the first environment map after the value reduction comprises:
according to the parameters of the sensor, converting the obstacle point cloud data and the position into a robot coordinate system to obtain obstacle data under the robot coordinate system;
according to the pose of the robot, performing coordinate system conversion on the obstacle data under the robot coordinate system to obtain obstacle data under a target coordinate system;
and projecting the obstacle data in the target coordinate system to the second environment map and the first environment map after the value reduction respectively.
5. The robot of claim 1, wherein said determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image comprises:
carrying out binarization processing on the image numerical value corresponding to the position in the static probability image to obtain a static image;
determining whether the obstacle is a dynamic obstacle based on the static image and the obstacle time image.
6. The robot of claim 1, wherein the motion state comprises a velocity direction and a velocity value of the dynamic obstacle, point cloud data and positions of the obstacle at different times being collected at perception intervals of a sensor; the obtaining the motion state of the dynamic obstacle based on the dynamic obstacle time image comprises:
respectively acquiring image numerical values related to the point cloud data and the acquisition time of the position in a plurality of obstacle areas of the dynamic obstacle time image;
acquiring a direction value and a track value of each obstacle region based on the image numerical value;
determining the speed direction according to the direction value;
determining a critical area in the plurality of obstacle areas according to the track value;
and calculating the speed value according to the critical area and the perception interval.
7. The robot of claim 6, wherein said obtaining a direction value and a trajectory value for each of said obstacle regions based on said image values comprises:
performing gradient calculation on image values of each barrier region at different moments to obtain a gradient image; the gradient image comprises a gradient direction image and a gradient magnitude image;
acquiring the direction value from the gradient direction image;
and filtering the gradient amplitude image, and taking an image numerical value corresponding to the filtered gradient amplitude image as the track value.
8. The robot of claim 6, wherein the determining a critical area in the plurality of obstacle areas according to the track value comprises:
determining a first critical area and a second critical area of the plurality of obstacle areas based on the track values and corresponding critical values;
the calculating the speed value according to the critical area and the sensing interval includes:
calculating a critical area physical distance based on a pixel distance between the first critical area and the second critical area;
calculating the difference between the first critical area and the second critical area to obtain a track difference value;
and calculating the speed value based on the perception interval, the track difference value and the physical distance of the critical area.
9. The robot of claim 6, wherein the processor is further configured to, when executing the computer readable instructions, perform the steps of:
when there are a plurality of dynamic obstacles, determining region growing seeds of a target dynamic obstacle based on an image numerical value of each obstacle region; the target dynamic obstacle belongs to at least one of the dynamic obstacles;
when it is determined that a seed neighborhood of the region growing seeds satisfies a region growing condition, respectively taking each obstacle region corresponding to the target dynamic obstacle as a next region growing seed for region growing to obtain a region growing result;
determining an obstacle region of the target dynamic obstacle based on the region growing result, the region growing seeds, and a seed neighborhood satisfying the region growing condition.
10. The robot of claim 9, wherein said determining that a seed neighborhood of the region growing seed satisfies a region growing condition comprises:
in the seed neighborhood of the region growing seeds, comparing the image numerical value of each obstacle region with a region growing threshold value to obtain a growing threshold value comparison result;
calculating image numerical difference information of the region growing seeds and the seed neighborhoods to obtain a difference information comparison result;
performing image numerical comparison processing on the region growing seeds and the seed neighborhood to obtain a seed neighborhood comparison result;
and determining that the seed neighborhood meets the region growing condition based on the growth threshold comparison result, the difference information comparison result and the seed neighborhood comparison result.
11. A method for estimating a motion state of a dynamic obstacle, the method comprising:
acquiring point cloud data and positions of an obstacle at different moments;
drawing an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map according to the point cloud data and the positions at different moments;
determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image;
if the obstacle is a dynamic obstacle, generating a dynamic obstacle time image of the dynamic obstacle based on the obstacle time image and the static probability image;
and obtaining the motion state of the dynamic obstacle based on the dynamic obstacle time image.
12. An apparatus for estimating a motion state of a dynamic obstacle, the apparatus comprising:
the data acquisition module is used for acquiring point cloud data and positions of an obstacle at different moments;
the image construction module is used for drawing an obstacle time image of the obstacle and a static probability image of the obstacle in an environment map according to the point cloud data and the positions at different moments;
a dynamic obstacle determination module for determining whether the obstacle is a dynamic obstacle based on the obstacle time image and the static probability image;
a dynamic obstacle image generation module, configured to generate a dynamic obstacle time image of the dynamic obstacle based on the obstacle time image and the static probability image if the obstacle is a dynamic obstacle;
and the dynamic obstacle state estimation module is used for obtaining the motion state of the dynamic obstacle based on the dynamic obstacle time image.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method as claimed in claim 11.
CN202210686427.6A 2022-06-17 2022-06-17 Robot, dynamic obstacle state estimation method and device and computer equipment Pending CN114859938A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210686427.6A CN114859938A (en) 2022-06-17 2022-06-17 Robot, dynamic obstacle state estimation method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210686427.6A CN114859938A (en) 2022-06-17 2022-06-17 Robot, dynamic obstacle state estimation method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN114859938A true CN114859938A (en) 2022-08-05

Family

ID=82624072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210686427.6A Pending CN114859938A (en) 2022-06-17 2022-06-17 Robot, dynamic obstacle state estimation method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN114859938A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116088503A (en) * 2022-12-16 2023-05-09 深圳市普渡科技有限公司 Dynamic obstacle detection method and robot
CN117148837A (en) * 2023-08-31 2023-12-01 上海木蚁机器人科技有限公司 Dynamic obstacle determination method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination