CN112597857B - Indoor robot stair climbing pose rapid estimation method based on kinect - Google Patents
- Publication number: CN112597857B
- Application number: CN202011485098.6A
- Authority: CN (China)
- Prior art keywords: stair, edge, coordinate system, camera, coordinates
- Legal status: Active
Classifications
- G06V20/10 — Scenes; scene-specific elements; terrestrial scenes
- G06T5/70 — Image enhancement or restoration; denoising; smoothing
- G06T7/13 — Image analysis; segmentation; edge detection
- G06T7/80 — Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V10/25 — Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, corners
- G06T2207/10024 — Image acquisition modality; color image
- G06T2207/10028 — Range image; depth image; 3D point clouds
Abstract
The invention discloses a kinect-based method for quickly estimating the stair-climbing pose of an indoor robot. The method comprises: obtaining an RGB image and a depth image of a staircase and detecting edges in the image; extracting edge straight lines from the resulting edge image; detecting the intersection points of the edge straight lines to obtain the inner-corner edge points of the stairs; finding the stair edge lines from the obtained edge points and extracting the midpoints of the corresponding Hough line segments as representative points to obtain their pixel coordinates; solving the intrinsic parameter matrix from the relation between the camera pixel coordinate system and the camera coordinate system to obtain the coordinates of the edge points in the camera coordinate system; establishing a stair coordinate system from the obtained camera coordinates of the edge points; obtaining the coordinates of the camera optical center in the stair coordinate system; and, in the stair coordinate system, estimating the distance from the camera optical center to the origin of the stair coordinate system, comparing it with a threshold, counting the stairs, and obtaining attitude information. The invention estimates the pose of the robot while climbing a stair area with acceptable running time and accuracy, and is robust and effective.
Description
Technical Field
The invention belongs to the technical field of robot three-dimensional space positioning, and particularly relates to a method for quickly estimating a stair climbing pose of an indoor robot based on kinect.
Background
To interact with a typical indoor environment in a predictable and safe manner, robots rely heavily on identifying the environment and estimating their own pose relative to it. Stairs are a typical obstacle environment encountered by indoor mobile robots, and the stair-climbing problem can be divided into three steps: (1) detecting the stairs, (2) obtaining the robot's own pose, and (3) climbing the stairs.
The paper "Efficient stair detection and modeling for autonomous robot climbing" proposes a method that embeds two-dimensional features into three-dimensional data and acquires stair point-cloud data by means of octree down-sampling. The method can obtain a relatively accurate stair model, but point-cloud acquisition and modeling require a large amount of calculation and are difficult to deploy on a small, low-cost mobile platform. (Chan D S, Silva R K, Monteiro J C, et al. Efficient stair detection and modeling for autonomous robot climbing [J]. 2017.)
Most indoor robot pose estimation methods perform positioning in a two-dimensional planar space. The paper "High-precision indoor positioning fusion algorithm based on WiFi fingerprint" proposes a method combining gradient boosting decision trees and machine learning, which solves the nonlinearity of indoor WiFi-fingerprint positioning and uses particle filtering to effectively suppress error accumulation in dynamic target tracking, improving pose estimation accuracy. However, the fingerprint collection workload is large, and fingerprints must be collected again after the WiFi equipment is moved. (Ka Xiaofei, Li Mengmeng, Qiao Wei. High-precision indoor positioning fusion algorithm based on WiFi fingerprint [J]. Journal of Xi'an University of Science and Technology, 2020, 40(3): 470-476.)
A robot climbing stairs indoors moves across floors in three-dimensional space, whereas the above positioning methods all position in a two-dimensional planar space (the robot has no displacement in height) and estimate pose by feature-point matching. The structural features of a stair scene, however, are repetitive, and the robot is displaced in height while climbing, so feature-point matching leads to a large mismatching rate.
Disclosure of Invention
Aiming at the defects of the existing stair-climbing pose estimation methods described in the background art, a method for rapidly estimating pose in three-dimensional space based on the geometric characteristics of stairs is provided. The method estimates the pose of the robot while climbing a stair area with acceptable running time and accuracy, and is robust and effective.
In order to solve the technical problems, the invention adopts the following technical scheme: a method for quickly estimating stair climbing pose of an indoor robot based on kinect comprises the following steps:
s1, acquiring an RGB image and a depth image of a staircase based on a kinect sensor, performing image processing by using a gabor filter to acquire an ROI (region of interest), and then detecting the edge of the image by using a canny operator;
s2, detecting the edge image obtained from the S1 by using a Hough line, and extracting an edge straight line;
s3, filtering repeated and deviated straight lines, detecting edge straight line intersection points by using a three-line intersection method, and obtaining stair inside corner edge points, namely intersection points of the horizontal edge of a stair step and the edge where the height and the width of the step are located;
s4, finding three stair edge lines beside the stair inner corner edge points obtained in S3, extracting the midpoint of the Hough line segment corresponding to the three stair edge lines as a representative point, and extracting the pixel coordinates of 4 edge points;
s5, calibrating the camera, solving the intrinsic parameter matrix according to the relation between the camera pixel coordinate system and the camera coordinate system, aligning the color image and the depth image, and obtaining the coordinates in the camera coordinate system of the 4 edge points extracted in S4 from their pixel coordinates;
s6, establishing a stair coordinate system through the camera coordinates of the 4 edge points obtained in the S5; obtaining the coordinates of the optical center of the camera under a stair coordinate system through coordinate transformation;
and S7, under the stair coordinate system, counting stairs by estimating the distance from the optical center of the camera to the origin of the stair coordinate system and comparing the distance with a threshold value to obtain attitude information.
Further, the accuracy of the detected stair edge points is optimized in step S3 using a three-line intersection method.
Further, the specific steps of filtering out repeated and deviated straight lines in step S3 are as follows: compare the absolute value of the slope difference of every two detected straight lines; if 0 ≤ |k_{i+1} − k_i| ≤ 1/5, find the Hough line segments fitting the two straight lines and compare their lengths. If length(i+1) ≥ length(i), remove the straight line on which length(i) lies; if length(i+1) < length(i), remove the straight line on which length(i+1) lies. Repeated lines and lines deviating from the edge are thus filtered out and the edge lines are retained, where k_{i+1} and k_i are the slopes of the (i+1)-th and i-th detected straight lines, and length(i+1) and length(i) are the lengths of the Hough line segments of the (i+1)-th and i-th detected straight lines, respectively.
Further, the three-line intersection method comprises the following specific steps: first find the pairwise intersection points of the edge straight lines in the image, and assume the coordinates of the three intersection points are (x_{n+1}, y_{n+1}), (x_n, y_n), (x_{n−1}, y_{n−1}). If 0 ≤ |x_{n+1} − x_n| ≤ 8, 0 ≤ |x_n − x_{n−1}| ≤ 8, 0 ≤ |x_{n+1} − x_{n−1}| ≤ 8, 0 ≤ |y_{n+1} − y_n| ≤ 8, 0 ≤ |y_n − y_{n−1}| ≤ 8 and 0 ≤ |y_{n+1} − y_{n−1}| ≤ 8, the three points are considered intersection points of the stair edges, and the coordinate of the three-line junction, i.e. the stair inner-corner edge point, is taken. The straight lines on which the three points lie are stair edge lines, and the midpoints of the Hough line segments on these edge lines are extracted as edge points, where n−1, n and n+1 respectively denote the sequence numbers of the intersection points of the edge straight lines in the image.
Further, in step S5, the coordinates of the edge points in the camera coordinate system are obtained by the following formulas:

Xc = Zc(u − u0)/fx, Yc = Zc(v − v0)/fy

where (u, v) are the pixel coordinates in the image, (Xc, Yc, Zc) are the coordinates in the camera coordinate system, and fx, fy, u0, v0 are camera intrinsic parameters: fx and fy are the focal lengths of the camera, and (u0, v0) is the center coordinate of the imaging plane.
Further, in step S6, the transformation from the camera coordinate system to the stair coordinate system is a rigid-body transformation and can be represented by a rotation matrix R and a translation matrix T, where T can be represented by the coordinates of the stair coordinate-system origin in the camera coordinate system. Vectors are constructed from the coordinates of the edge points in the camera coordinate system obtained in S5; after unitization into (v1, v2, v3), a stair coordinate system is established with O as the origin and the directions of the three unit vectors as the directions of the Z, X and Y axes. (v1, v2, v3) is the rotation matrix from the stair coordinate system to the camera coordinate system, and the rotation matrix R from the camera coordinate system to the stair coordinate system is obtained by inversion;

wherein O(x0, y0, z0), P1(x1, y1, z1), P2(x2, y2, z2) and P3(x3, y3, z3) are the coordinates in the camera coordinate system of the 4 stair edge points obtained in S5.
Further, in step S7, the reference frame is defined as the edge of the step-0 stair, Ow0, until the robot approaches the first stair. When the pose of the camera relative to the step-0 stair is within a certain threshold (the pose being quantized as the distance between Oc and Ow0), the edge of the first stair, Ow1, is selected as the reference frame and the distance between the camera and Ow1 is estimated. If this distance is also within a certain threshold range, the robot starts to climb the first stair and counts. While climbing the first stair, the pose of the camera is estimated with the edge of the second stair, Ow2, as the reference frame until the first stair is climbed; if the distance to Ow2 is then within a certain threshold range, the robot starts to climb the second stair and counts. By analogy, the N-th stair is climbed with the (N+1)-th stair as the reference frame, until no stairs appear in the field of view, at which point climbing stops and the pose information of the robot's stair climbing is obtained.
Compared with the prior art, the invention has the beneficial effects that:
(1) according to the method for quickly estimating the stair climbing pose of the indoor mobile robot based on kinect, the input stair image is converted into the gray image, and the calculated amount is reduced. And proper parameters are selected through a Gabor filter, so that the influence of illumination change and shadow on the image is effectively removed, and multi-scale and multi-direction information is reserved. The accuracy of the detected stair edge points is optimized by adopting a three-line intersection method, so that the accuracy of the stair edge point detection is improved, and the calculated amount is reduced.
(2) The kinect-based method for quickly estimating the stair-climbing pose of an indoor mobile robot is also applicable when the robot climbs 2 steps at a time, as long as the camera can capture the stair one step ahead of the one on which the robot stands. The method can therefore adapt to different climbing modes, which enhances the robustness of the algorithm and promotes its application in practice.
(3) The method for quickly estimating the stair climbing pose of the indoor mobile robot based on kinect provides a certain reference for indoor cross-layer navigation of an autonomous navigation system and provides relative pose information for climbing of the robot in a stair environment. The framework estimates the pose of the robot as it climbs the stair area with acceptable run time and accuracy. The effectiveness and robustness of the framework are proved through experiments, and the framework is convenient to popularize and apply.
Drawings
Fig. 1 is a flow chart of the present invention for extracting coordinates of stair edge points based on kinect.
FIG. 2 is a schematic diagram of the slope of the filtered repetitive, deviated fitting line of the present invention.
FIG. 3 is a schematic diagram of vectors for solving pose from stair edge points in the present invention.
Fig. 4 is a schematic diagram of a coordinate system used in counting stairs according to the present invention.
FIG. 5 is a flowchart of an estimation algorithm for performing a staircase count according to the present invention.
Fig. 6 is a schematic diagram illustrating the calculation of the threshold dm during the counting of stairs in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the experimental methods described in the following embodiments are all conventional methods unless otherwise specified, and the reagents and materials, if not otherwise specified, are commercially available; in the description of the present invention, the terms "lateral", "longitudinal", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
Furthermore, the terms "horizontal", "vertical", "suspended" and the like do not imply that the components are absolutely horizontal or suspended, but may be slightly inclined. For example, "horizontal" merely means that the direction is more horizontal than "vertical" and does not mean that the structure must be perfectly horizontal, but may be slightly inclined.
In the description of the present application, it is further noted that, unless expressly stated or limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
The present invention will be further described with reference to the accompanying drawings and embodiments, and as shown in fig. 1, the present invention is a flow of extracting coordinates of stair edge points in a method for quickly estimating a stair climbing pose of an indoor mobile robot based on kinect, where the flow includes:
s1: the method comprises the steps of obtaining an RGB image and a depth image of the stair through kinect, firstly converting the stair image into a gray level image, carrying out Gabor filtering on the gray level image, extracting an interested Region (ROI) from the gray level image after the Gabor filtering, and then carrying out edge extraction by adopting a Canny edge detector.
In step S1, in order to correctly extract the stair edge, Gabor filtering is performed by a standard convolution method, and noise and shadow effects are removed from the stair image.
The general form of a Gabor filter is defined as:

g(u, v) = exp(−(u′² + γ²v′²)/(2σ²)) cos(2πu′/λ + ψ) (1)

u′ = u cos θ1 + v sin θ1 (2)

v′ = −u sin θ1 + v cos θ1 (3)

In the formulas, the coordinates of the pixel points are (u, v), ψ is the phase shift, θ1 is the direction, σ is the standard deviation of the Gaussian envelope, γ is the spatial aspect ratio, and λ is the wavelength of the cosine factor. The standard deviation σ is not independent: it depends on the bandwidth b and the wavelength λ, the relationship of b and λ being

σ = (λ/π)·√(ln 2 / 2)·(2^b + 1)/(2^b − 1) (4)

In the present embodiment, the image obtains the best response when the bandwidth is close to 1, and the relationship between λ and σ is then σ ≈ 0.56λ.
S2: the edge image obtained in S1 is detected with Hough lines, and edge straight lines are extracted. The Hough transform represents a straight line in the ρ-θ0 parameter plane as ρ = x cos θ0 + y sin θ0.

Any point (x0, y0) on the image plane maps to the ρ-θ0 parameter plane (Hough space) as:

ρθ = x0 cos θ0 + y0 sin θ0 (5)

In step S2, Hough line-segment detection is performed on the extracted stair edges to find the peak response points of the ρ-θ0 parameter plane (the intersection points in the parameter plane crossed by the most curves); these correspond on the image plane to the straight line segments containing the most collinear points, and the segments are then fitted into straight lines.
S3: from the straight lines detected by the Hough transform, three points can be detected by the pairwise intersection of three straight lines using the three-line intersection method. First, repeated and edge-deviating parts are removed from the fitted straight lines, and then the inner-corner edge points of the staircase where the three lines intersect are found.
In order to filter out the repeated and edge-deviating portions, the following method is specifically adopted in this embodiment:
First, the absolute value of the slope difference of every two detected straight lines is compared, as shown in figure 2. If 0 ≤ |k_{i+1} − k_i| ≤ 1/5, the slope-threshold condition is met and the straight lines are judged to be repeated. The Hough line segments fitting the two straight lines are then found and their lengths compared: if length(i+1) ≥ length(i), the straight line on which length(i) lies is removed; if length(i+1) < length(i), the straight line on which length(i+1) lies is removed. Such repeated and edge-deviating lines are filtered out and the edge lines are retained, where k_{i+1} and k_i are the slopes of the (i+1)-th and i-th detected straight lines, and length(i+1) and length(i) are the lengths of the Hough line segments of the (i+1)-th and i-th detected straight lines, respectively. The slope-difference threshold of 1/5 is the result of experimental testing; at 1/5 there is a good filtering effect.
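The slope-and-length filtering rule can be sketched as follows. Representing each fitted straight line by just its slope and Hough-segment length is a simplification made for this sketch.

```python
def filter_duplicate_lines(lines, slope_tol=1/5):
    """Among near-parallel lines (|k_{i+1} - k_i| <= slope_tol), keep only
    the one fitted from the longer Hough segment; each line is a
    (slope, segment_length) pair."""
    kept = []
    # visiting lines longest-first means a shorter near-duplicate is dropped
    for k, length in sorted(lines, key=lambda l: -l[1]):
        if all(abs(k - kk) > slope_tol for kk, _ in kept):
            kept.append((k, length))
    return kept

# Example: the short line with slope 0.55 duplicates the long one with slope 0.5
edges = filter_duplicate_lines([(0.5, 100), (0.55, 40), (2.0, 80)])
```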
After the edge straight lines are extracted, the three-line intersection algorithm is as follows: first find the pairwise intersection points of the straight lines in the image, and assume the coordinates of the three intersection points are (x_{n+1}, y_{n+1}), (x_n, y_n), (x_{n−1}, y_{n−1}). If 0 ≤ |x_{n+1} − x_n| ≤ 8, 0 ≤ |x_n − x_{n−1}| ≤ 8, 0 ≤ |x_{n+1} − x_{n−1}| ≤ 8, 0 ≤ |y_{n+1} − y_n| ≤ 8, 0 ≤ |y_n − y_{n−1}| ≤ 8 and 0 ≤ |y_{n+1} − y_{n−1}| ≤ 8, the three points are considered intersection points of the stair edges, and the coordinate of the three-line junction, i.e. the stair inner-corner edge point, is taken. The straight lines on which the three points lie are stair edge lines, and the midpoints of the Hough line segments on these edge lines are extracted as edge points; n−1, n and n+1 respectively denote the sequence numbers of the intersection points of the edge straight lines in the image. The coordinate difference threshold of 8 was obtained through experimental testing; it gives a good search effect with an acceptable amount of calculation.
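A sketch of the three-line intersection test with the 8-pixel tolerance follows. Averaging the three near-coincident intersections into a single corner coordinate is an assumption of this sketch, not a formula stated by the source.

```python
def is_inner_corner(p1, p2, p3, tol=8):
    """True when the three pairwise line intersections lie within the pixel
    tolerance of one another in both x and y, i.e. they mark one stair
    inner-corner point."""
    pts = (p1, p2, p3)
    for i in range(3):
        for j in range(i + 1, 3):
            if abs(pts[i][0] - pts[j][0]) > tol or abs(pts[i][1] - pts[j][1]) > tol:
                return False
    return True

def corner_point(p1, p2, p3):
    # averaging the three intersections is an assumption of this sketch
    return ((p1[0] + p2[0] + p3[0]) / 3, (p1[1] + p2[1] + p3[1]) / 3)

# Example: three intersections within 8 px collapse to one corner estimate
corner = corner_point((100, 50), (104, 53), (98, 47))
```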
S4: the three stair edge lines beside each inner-corner edge point are found, and the midpoint of the corresponding Hough line segment is extracted as a representative point. The straight lines on which the three points lie are the stair edge lines; extracting the midpoints of the Hough line segments on these edge lines yields the pixel coordinates of the 4 edge points.
S5: calibrating the camera, solving an internal reference matrix according to the relation between a camera pixel coordinate system and the camera coordinate system, aligning the color image and the depth image, and obtaining the coordinates of the 4 edge points extracted in the step S4 in the camera coordinate system through the pixel coordinates of the 4 edge points. The specific process comprises the following steps:
calibrating the camera by a calibration plate and Zhang Zhengyou calibration method, and shooting 6 photos (generally not less than 3 photos and preferably 10-20 photos) of the calibration plate at different positions, different angles and different postures by using the same camera; and extracting world coordinates and pixel coordinates of the inner corner points of the calibration plate, and solving an internal reference matrix according to the relation between a camera pixel coordinate system and a camera coordinate system. Consider the reference to find 4 stair edge points from S4. The depth image obtained by kinect carries the depth information of the stairs (i.e. the distance Zc from the point on the stairs to the optical center of the camera, and the Z-axis coordinate of the point on the stairs in the camera coordinate system). Therefore, after the color image and the depth image are aligned, the coordinates (Xc, Yc, Zc) of the stair edge point under the camera coordinate system can be obtained through (7), (8) and (9), and the conversion from the pixel coordinate system to the camera coordinate system is realized.
Xc = Zc(u − u0)/fx (7)

Yc = Zc(v − v0)/fy (8)

Zc = depth(u, v) (9)

In formulas (7), (8) and (9), (u, v) are the pixel coordinates in the image, (Xc, Yc, Zc) are the coordinates of the stair edge point in the camera coordinate system, depth(u, v) is the depth value read from the aligned depth image at (u, v), and fx, fy, u0, v0 are camera intrinsic parameters: fx and fy are the focal lengths of the camera, and (u0, v0) is the center coordinate of the imaging plane.
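The back-projection of formulas (7) and (8) is a one-liner; the intrinsic values and pixel in the usage example are made-up illustrative numbers, not calibration results from the invention.

```python
def pixel_to_camera(u, v, zc, fx, fy, u0, v0):
    """Back-project pixel (u, v) with depth Zc into the camera frame:
    Xc = Zc*(u - u0)/fx, Yc = Zc*(v - v0)/fy  (Eqs. (7)-(8))."""
    return (zc * (u - u0) / fx, zc * (v - v0) / fy, zc)

# Example: depth 2 m, focal lengths 500 px, principal point (320, 240)
xc, yc, zc = pixel_to_camera(420, 300, 2.0, 500.0, 500.0, 320.0, 240.0)
```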
S6: a stair coordinate system is established through the camera coordinates of the 4 edge points obtained in S5, and the coordinates of the camera optical center in the stair coordinate system are obtained through coordinate transformation.
In the present embodiment, the transformation from the camera coordinate system to the stair coordinate system is a rigid-body transformation and can be represented by a rotation matrix R and a translation matrix T, where T can be represented by the coordinates of the stair coordinate-system origin in the camera coordinate system.

The pose is solved using the camera coordinates of the 4 stair edge points obtained in S5, as shown in fig. 3. O, P1, P2 and P3 are the 4 detected stair edge points, from which the three vectors OP1, OP2 and OP3 are obtained. These three vectors are unitized according to (10), (11) and (12):

v1 = OP1/|OP1| (10)

v2 = OP2/|OP2| (11)

v3 = OP3/|OP3| (12)

and a stair coordinate system is established with the point O as the origin and the directions of the three unit vectors as the directions of the Z, X and Y axes. (v1, v2, v3) is the rotation matrix from the stair coordinate system to the camera coordinate system, and the rotation matrix R from the camera coordinate system to the stair coordinate system is obtained by inversion:

R = [v1, v2, v3]^(−1) (13)

The coordinate of the stair origin in the camera coordinate system is O(x0, y0, z0), i.e. the translation vector T from the stair coordinate-system origin to the camera optical center:

T = (x0, y0, z0) (14)

so the coordinates Oc(XOc, YOc, ZOc) of the camera optical center in the stair coordinate system can be obtained according to formula (15):

Oc = −R·T (15)

In formulas (10), (11) and (12), O(x0, y0, z0), P1(x1, y1, z1), P2(x2, y2, z2) and P3(x3, y3, z3) are the coordinates in the camera coordinate system of the 4 stair edge points obtained in S5.
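Under the assumption that the four edge points are already expressed in the camera frame, the construction of (10)-(15) can be sketched in numpy. The pairing of P1, P2, P3 with the Z, X, Y axes, and the component ordering of the result, are assumptions of this sketch.

```python
import numpy as np

def camera_pose_in_stair_frame(O, P1, P2, P3):
    """Unit vectors v1, v2, v3 from O toward P1, P2, P3 (Eqs. (10)-(12))
    give the stair axis directions in camera coordinates; stacking them as
    columns gives the stair->camera rotation, whose inverse is R (Eq. (13)).
    The camera optical center (the camera-frame origin) is then expressed
    in the stair frame (Eq. (15)); the returned components follow the
    (v1, v2, v3) column order."""
    O, P1, P2, P3 = (np.asarray(p, dtype=float) for p in (O, P1, P2, P3))
    v1 = (P1 - O) / np.linalg.norm(P1 - O)   # stair Z direction
    v2 = (P2 - O) / np.linalg.norm(P2 - O)   # stair X direction
    v3 = (P3 - O) / np.linalg.norm(P3 - O)   # stair Y direction
    M = np.column_stack([v1, v2, v3])        # rotation: stair -> camera
    R = np.linalg.inv(M)                     # rotation: camera -> stair, Eq. (13)
    T = O                                    # stair origin in camera frame, Eq. (14)
    return R @ (-T)                          # optical center in stair frame, Eq. (15)

# Example: stair origin 2 m ahead of the camera along its optical axis
pos = camera_pose_in_stair_frame((0, 0, 2), (0, 0, 3), (1, 0, 2), (0, 1, 2))
```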
S7: according to the stair coordinate system established in S6 and the coordinates of the camera optical center in the stair coordinate system, the stairs are counted by estimating the distance and comparing it with a threshold, and the attitude information is obtained.
As shown in fig. 4, Oc represents the camera optical center position, and Xc, Yc, and Zc represent the direction of the camera coordinate system. Ow0 denotes the initial stair reference frame, Xw, Yw, Zw denote the orientation of the stair coordinate system. Theta is the visual angle of the camera, and alpha and beta are the included angles between the camera and the horizontal edge of the stairs.
The 0th stair edge (Ow0) serves as the reference frame until the robot approaches the stairs. Once the pose of the camera relative to the 0th stair (quantized as the distance from Oc to Ow0) is within a certain threshold, the first stair edge (Ow1) is taken as the reference frame and the distance from the camera to Ow1 is estimated. If that distance is also within the threshold range, the robot starts to climb the first stair and counts it. While climbing the first stair, the camera pose is estimated with the second stair edge (Ow2) as the reference frame until the first stair is climbed; if the distance to Ow2 is then within the threshold range, the robot starts to climb the second stair and counts it. By analogy, the Nth stair is climbed with the (N+1)th stair as the reference frame, until no stair appears in the field of view and climbing stops.
The algorithm flow chart is shown in fig. 5. d(i) denotes the distance from the camera optical center to the stair origin, N the stair count, and i the sample index; 30 samples are computed in total. Each time a distance is computed, if d(i) <= dm the count N is incremented once, until all samples have been processed. The threshold dm is derived from a formula in the length L, width W and height H of the stair step (shown in fig. 6) together with a compensation term δ, whose value is determined by the mounting position of the camera relative to the robot body and the length of the robot's feet.
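The per-sample counting rule above can be sketched as follows. The sketch assumes the distance estimates have already been produced by the pose-estimation pipeline; dm is the precomputed threshold (905 in the embodiment described later).

```python
def count_stairs(distances, dm):
    """Sketch of the counting rule in S7: increment the count N each time
    a per-sample estimate d(i) of the optical-center-to-stair-origin
    distance falls within the threshold dm.

    distances: iterable of per-sample distance estimates (e.g. 30 samples).
    """
    N = 0
    for d_i in distances:
        if d_i <= dm:   # within threshold: this sample registers a stair
            N += 1
    return N
```

For example, with dm = 905 the samples [900, 910, 800, 1000] register two counts.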
Similarly, the pose estimation method of the invention also applies when the robot climbs 2 steps at a time, as long as the camera can capture the stair one step ahead of the one the robot is on. When the robot approaches the stairs and enters the threshold range, it crosses the 2nd stair and counts; the third stair then becomes the reference frame, and if the robot again enters the threshold range it crosses to the 4th stair and counts. By analogy, the robot crosses to the 2Nth stair and counts 2N, with the (2N+1)th stair as the reference frame.
In summary, the method extracts stair edge lines by Hough line detection, optimizes the accuracy of the detected stair edge points with the three-line intersection method, counts steps by monitoring the distance from the camera optical center to the origin of the stair coordinate system, and provides relative pose information for robot climbing in a stair environment. In this embodiment dm = 905, and all experiments were run on an Intel(R) Core(TM) i3-3120M @ 2.5 GHz processor with 4 GB of memory. All images were acquired with Kinect 2.0; all processing was done on 1920×1080 color stair images and 512×424 depth images aligned to the color images, implemented in a MATLAB environment. In the experiments, 9 stairs were climbed in total, photographing the right and left sides of the stairs separately; starting from the 0th stair, 3 samples were taken at each step, giving 30 samples per side and 60 stair samples in total. The framework estimates the robot's pose while climbing the stair area with acceptable running time and accuracy: the average running time is 0.078 s and the average error is 13 mm. The proposed method is therefore time-efficient and robust, can quickly estimate the pose of an indoor mobile robot while climbing stairs, and is suitable for navigation of a robot climbing in a stair environment.
The foregoing examples are provided for illustration and description of the invention only and are not intended to limit the invention to the scope of the described examples. Furthermore, it will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that many variations and modifications may be made in accordance with the teachings of the present invention, all of which fall within the scope of the invention as claimed.
Claims (7)
1. A method for quickly estimating the stair climbing pose of an indoor robot based on kinect is characterized by comprising the following steps:
s1, acquiring an RGB image and a depth image of a staircase based on a kinect sensor, performing image processing by using a gabor filter to acquire an ROI (region of interest), and then detecting the edge of the image by using a canny operator;
s2, performing Hough line detection on the edge image obtained in S1, and extracting edge straight lines;
s3, filtering repeated and deviated straight lines, detecting edge straight line intersection points by using a three-line intersection method, and obtaining stair inside corner edge points, namely intersection points of the horizontal edge of a stair step and the edge where the height and the width of the step are located;
s4, finding three stair edge lines beside the stair inner corner edge points obtained in S3, extracting the midpoint of the Hough line segment corresponding to the three stair edge lines as a representative point, and extracting the pixel coordinates of 4 edge points;
s5, calibrating the camera, solving the internal reference matrix from the relation between the camera pixel coordinate system and the camera coordinate system, aligning the color image and the depth image, and obtaining the coordinates of the 4 edge points in the camera coordinate system from the pixel coordinates extracted in S4;
s6, establishing a stair coordinate system through the camera coordinates of the 4 edge points obtained in the S5; obtaining the coordinates of the optical center of the camera under a stair coordinate system through coordinate transformation;
and S7, under the stair coordinate system, counting stairs by estimating the distance from the optical center of the camera to the origin of the stair coordinate system and comparing the distance with a threshold value to obtain attitude information.
2. The method for rapidly estimating the stair climbing pose of the indoor robot based on kinect as claimed in claim 1, wherein the accuracy of the detected stair edge points is optimized by using a three-line intersection method in step S3.
3. The method for rapidly estimating the stair climbing pose of the indoor robot based on kinect as claimed in claim 2, wherein the specific steps of filtering out repeated and deviated straight lines in the step S3 are as follows: comparing the absolute value of the slope difference of every two detected straight lines; if 0 ≤ |k(i+1) − k(i)| ≤ 1/5, finding the Hough line segments fitting the two straight lines and comparing their lengths: if length(i+1) ≥ length(i), removing the straight line to which length(i) belongs; if length(i+1) < length(i), removing the straight line to which length(i+1) belongs; such repeated and edge-deviating lines are filtered out and the edge lines are retained, where k(i+1) and k(i) are the slopes of the (i+1)th and ith detected straight lines, and length(i+1) and length(i) are the lengths of the Hough line segments of the (i+1)th and ith detected straight lines, respectively.
4. The method for rapidly estimating the stair climbing pose of the indoor robot based on kinect as claimed in claim 2, wherein the three-line intersection method comprises the following specific steps: firstly, finding the intersection point between every two edge straight lines in the image; assuming the coordinates of the three intersection points are (x(n+1), y(n+1)), (x(n), y(n)) and (x(n-1), y(n-1)), if 0 ≤ |x(n+1) − x(n)| ≤ 8, 0 ≤ |x(n) − x(n-1)| ≤ 8, 0 ≤ |x(n+1) − x(n-1)| ≤ 8, 0 ≤ |y(n+1) − y(n)| ≤ 8, 0 ≤ |y(n) − y(n-1)| ≤ 8 and 0 ≤ |y(n+1) − y(n-1)| ≤ 8, the three points are considered intersection points of the stair edges, and the coordinates of the three-line junction point, namely the coordinates of the stair inside-corner edge point, are taken; the straight lines passing through these three points are the stair edge lines, and the midpoints of their Hough line segments are extracted as edge points, where n-1, n and n+1 respectively denote the serial numbers of the edge-line intersection points in the image.
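The six-inequality acceptance test of the three-line intersection method can be sketched compactly; the function below checks that every pairwise coordinate difference among the three candidate intersection points is within the 8-pixel tolerance stated in the claim.

```python
def is_inner_corner(p1, p2, p3, tol=8):
    """Sketch of claim 4's acceptance test: three pairwise line
    intersections (pixel coordinates as (x, y) tuples) are accepted as one
    stair inside-corner point when every coordinate difference between
    every pair of points is within tol pixels (8 in the patent)."""
    pts = (p1, p2, p3)
    return all(abs(a[c] - b[c]) <= tol   # check x (c=0) and y (c=1)
               for c in (0, 1)
               for i, a in enumerate(pts)
               for b in pts[i + 1:])
```

Three intersections clustered within a few pixels pass; a stray intersection from an unrelated line pair fails the test.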
5. The method for rapidly estimating the stair climbing pose of the indoor robot based on kinect as claimed in claim 1, wherein in the step S5, the coordinates of the edge points in the camera coordinate system are obtained by the following formulas:
Xc=Zc(u-u0)/fx Yc=Zc(v-v0)/fy
wherein (u, v) are the pixel coordinates in the image, (Xc, Yc, Zc) are the coordinates in the camera coordinate system, and fx, fy, u0, v0 are camera intrinsic parameters: fx and fy are the focal lengths of the camera, and u0 and v0 are the coordinates of the imaging plane center.
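The back-projection in claim 5 is a one-line computation once the depth Zc for a pixel is known from the aligned depth image; a minimal sketch:

```python
def pixel_to_camera(u, v, z, fx, fy, u0, v0):
    """Sketch of claim 5's back-projection: recover camera-frame
    coordinates (Xc, Yc, Zc) from a pixel (u, v), its aligned depth z,
    and the camera intrinsics fx, fy, u0, v0."""
    return (z * (u - u0) / fx, z * (v - v0) / fy, z)
```

A pixel at the principal point maps onto the optical axis, so its Xc and Yc are zero regardless of depth.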
6. The method as claimed in claim 1, wherein in step S6 the transformation from the camera coordinate system to the stair coordinate system is a rigid transformation represented by a rotation matrix R and a translation vector T, wherein T is given by the coordinates of the stair coordinate origin in the camera coordinate system; the vectors OP1, OP2 and OP3 are constructed from the camera coordinates of the edge points obtained in S5 and, after unitization to v1, v2 and v3, a stair coordinate system is established with O as the origin and the directions of the three unit vectors as the Z, X and Y axes; [v1, v2, v3] is the rotation matrix from the stair coordinate system to the camera coordinate system, and the rotation matrix R from the camera coordinate system to the stair coordinate system is obtained by inversion;
wherein O (x0, y0, z0), P1(x1, y1, z1), P2(x2, y2, z2), and P3(x3, y3, z3) are coordinates of 4 stair edge points obtained in S5 in the camera coordinate system.
7. The method as claimed in claim 1, wherein in step S7 the 0th stair edge (Ow0) is taken as the reference frame until the robot approaches the stairs; when the pose of the camera relative to the 0th stair (quantized as the distance from Oc to Ow0) is within a certain threshold, the first stair edge Ow1 is selected as the reference frame and the distance from the camera to Ow1 is estimated; if that distance is also within a certain threshold range, the robot starts to climb the first stair and counts it; while climbing the first stair, the camera pose is estimated with the second stair edge Ow2 as the reference frame until the first stair is climbed, and if the distance to Ow2 is then within a certain threshold range, the robot starts to climb the second stair and counts it; by analogy, the Nth stair is climbed with the (N+1)th stair as the reference frame until no stair appears in the field of view, at which point climbing stops, thereby obtaining the pose information of the robot's stair climbing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011485098.6A CN112597857B (en) | 2020-12-16 | 2020-12-16 | Indoor robot stair climbing pose rapid estimation method based on kinect |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112597857A CN112597857A (en) | 2021-04-02 |
CN112597857B true CN112597857B (en) | 2022-06-14 |
Family
ID=75196757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011485098.6A Active CN112597857B (en) | 2020-12-16 | 2020-12-16 | Indoor robot stair climbing pose rapid estimation method based on kinect |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112597857B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114814868A (en) * | 2022-04-06 | 2022-07-29 | 广东工业大学 | Double-paw climbing robot system and simultaneous positioning and mapping method thereof |
CN114663775B (en) * | 2022-05-26 | 2022-08-12 | 河北工业大学 | Method for identifying stairs in exoskeleton robot service environment |
CN114683290B (en) * | 2022-05-31 | 2022-09-16 | 深圳鹏行智能研究有限公司 | Method and device for optimizing pose of foot robot and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005087452A1 (en) * | 2004-03-17 | 2005-09-22 | Sony Corporation | Robot device, behavior control method for the robot device, and moving device |
CN109500818A (en) * | 2018-12-13 | 2019-03-22 | 广州供电局有限公司 | The speeling stairway method of crusing robot |
CN110414308A (en) * | 2019-05-16 | 2019-11-05 | 南京理工大学 | A kind of target identification method for dynamic foreign matter on transmission line of electricity |
CN111127497A (en) * | 2019-12-11 | 2020-05-08 | 深圳市优必选科技股份有限公司 | Robot and stair climbing control method and device thereof |
Non-Patent Citations (1)
Title |
---|
Estimation of stair structure parameters based on fusion of vision and laser sensor information; Li Yanjie et al.; Transducer and Microsystem Technologies; 2018-06-01 (No. 06); full text *
Also Published As
Publication number | Publication date |
---|---|
CN112597857A (en) | 2021-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112597857B (en) | Indoor robot stair climbing pose rapid estimation method based on kinect | |
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
CN110221603B (en) | Remote obstacle detection method based on laser radar multi-frame point cloud fusion | |
CN109785379B (en) | Method and system for measuring size and weight of symmetrical object | |
CN110223348B (en) | Robot scene self-adaptive pose estimation method based on RGB-D camera | |
CN106960454B (en) | Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle | |
JP6415066B2 (en) | Information processing apparatus, information processing method, position and orientation estimation apparatus, robot system | |
CN105608671A (en) | Image connection method based on SURF algorithm | |
JP5430138B2 (en) | Shape measuring apparatus and program | |
CN107977996B (en) | Space target positioning method based on target calibration positioning model | |
CN101763643A (en) | Automatic calibration method for structured light three-dimensional scanner system | |
JP5297779B2 (en) | Shape measuring apparatus and program | |
JP6035620B2 (en) | On-vehicle stereo camera system and calibration method thereof | |
Momeni-k et al. | Height estimation from a single camera view | |
CN110956661A (en) | Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix | |
CN111996883B (en) | Method for detecting width of road surface | |
CN111724446B (en) | Zoom camera external parameter calibration method for three-dimensional reconstruction of building | |
CN115147344A (en) | Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance | |
Sun et al. | Automatic targetless calibration for LiDAR and camera based on instance segmentation | |
CN112767481B (en) | High-precision positioning and mapping method based on visual edge features | |
CN112985388B (en) | Combined navigation method and system based on large-displacement optical flow method | |
CN114299153A (en) | Camera array synchronous calibration method and system for ultra-large power equipment | |
Su et al. | An automatic calibration system for binocular stereo imaging | |
Short | 3-D Point Cloud Generation from Rigid and Flexible Stereo Vision Systems | |
CN114762019A (en) | Camera system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||