WO2012176249A1 - Self-position estimation device, self-position estimation method, self-position estimation program, and moving body - Google Patents

Self-position estimation device, self-position estimation method, self-position estimation program, and moving body

Info

Publication number
WO2012176249A1
WO2012176249A1 (PCT/JP2011/006846)
Authority
WO
WIPO (PCT)
Prior art keywords
position information
self
coincidence
movement amount
time
Prior art date
Application number
PCT/JP2011/006846
Other languages
English (en)
Japanese (ja)
Inventor
Yusuke Hieida
Tsuyoshi Suenaga
Kentaro Takemura
Jun Takamatsu
Tsukasa Ogasawara
Original Assignee
National University Corporation Nara Institute of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University Corporation Nara Institute of Science and Technology
Priority to JP2013521307A (granted as JP5892663B2)
Publication of WO2012176249A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0272Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Definitions

  • the present invention relates to a technique for estimating the self-position of a moving object, and more particularly to a technique for estimating the self-position of a moving object that operates in an unknown environment.
  • SLAM (Simultaneous Localization and Mapping) is a technique for estimating the self-position of a moving object while simultaneously building a map of its surroundings.
  • A landmark-based approach that takes outliers into account has also been proposed (for example, Non-Patent Document 2).
  • However, the method of Non-Patent Document 2 requires that a sufficient number of landmarks be observable, and suffers from landmarks being hidden by dynamic obstacles.
  • Methods have also been proposed that distinguish stationary objects from moving objects and match them based on the shape of the sensing target (Non-Patent Document 3) or on the nearest-point distance between frames (Non-Patent Document 4).
  • However, in the methods of Non-Patent Documents 3 and 4, the measurement accuracy strongly depends on the performance of the discriminator that judges whether a sensed object is a moving object.
  • FIG. 23 is a diagram showing matching errors when using ICP.
  • In FIG. 23, measurement points measured by a position sensor mounted on the robot and the actual positions of objects existing around the moving body are plotted in a two-dimensional coordinate space representing the moving surface of the robot. Errors between the measurement points and the actual positions occur in the areas enclosed by the dotted circles in FIG. 23.
  • An object of the present invention is to provide a technique capable of accurately estimating the self position of a moving object in an unknown environment where the moving object exists.
  • A self-position estimation apparatus according to one aspect of the present invention is a self-position estimation apparatus that estimates the self-position of a moving body, and includes: a position information acquisition unit that acquires, in time series, position information of each object existing in the space surrounding the moving body; a coincidence degree calculation unit that calculates the degree of coincidence between first position information acquired by the position information acquisition unit at a first time and second position information acquired at a second time immediately before or after the first time, when the first position information is translated and rotated with respect to the second position information; and an estimation unit that varies the parallel movement amount and the rotational movement amount of the first position information, searches for the parallel movement amount and the rotational movement amount that maximize the degree of coincidence, and estimates the self-position. The coincidence degree calculation unit gives a predetermined point to the degree of coincidence when, for a certain measurement point constituting the first position information, a measurement point constituting the second position information exists within a certain distance, and calculates the total of the given points as the degree of coincidence.
  • a moving body includes the above-described self-position estimation device and a position sensor that acquires the position information.
  • a self-position estimation method and a self-position estimation program have the same features as the above self-position estimation apparatus.
  • FIG. 6 is a diagram illustrating the reason why a matching error occurs between the position information at time t−1 and the position information at time t in a dynamic environment when the L2 norm is used as the evaluation value.
  • (A) is an external view of the SICK LMS100, and (B) is an external view of the moving body according to the embodiment of the present invention. FIG. 8 is a table showing the specifications of the position sensor.
  • (A) is a diagram showing the experiment site, and (B) is a diagram showing a dynamic environment in which people move to and from this site. FIG. 12 shows a two-dimensional map of the surrounding space generated in a static environment in which no one moves to and from the site, and FIG. 13 is a table showing the experimental results.
  • FIG. 1 is a block diagram of a moving body to which a self-position estimation apparatus according to an embodiment of the present invention is applied.
  • the self-position estimation apparatus includes a position information acquisition unit 11, a coincidence calculation unit 12, an estimation unit 13, a map generation unit 14, and an odometry calculation unit 15. Then, the self-position estimation device estimates the self-position of the moving body based on the position information acquired by the position sensor 16.
  • the moving body includes a self-position estimation device, a position sensor 16, and a rotation amount sensor 17.
  • the self-position estimation apparatus is configured by a computer including a CPU, a ROM, a RAM, and a hard disk, for example.
  • the position information acquisition unit 11, the matching degree calculation unit 12, the estimation unit 13, the map generation unit 14, and the odometry calculation unit 15 are realized by, for example, the CPU executing a self-position estimation program stored in the hard disk.
  • The self-position estimation program is recorded on a computer-readable recording medium such as a DVD-ROM and provided to the user. The user loads this recording medium into the computer, thereby causing the computer to function as the self-position estimation device.
  • the position information acquisition unit 11 acquires the position information of each object existing in the space around the moving object measured by the position sensor 16 in a time series at a constant cycle.
  • the position information is composed of measurement points indicating locations where objects exist.
  • A laser range finder is employed as the position sensor 16. Accordingly, each measurement point is represented by two-dimensional polar coordinate data consisting of an angle from the reference direction and a distance from the origin, in a two-dimensional local coordinate space whose origin is the position of the position sensor 16 and whose reference direction is the front direction of the position sensor 16.
  • the y axis is set in the reference direction
  • the x axis is set in a direction that passes through the origin and is orthogonal to the reference direction.
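
For illustration, the polar-to-Cartesian conversion implied here can be sketched in Python as follows. The function name is ours, and the trigonometric assignment is an assumption consistent with the axes just described (angles measured from the y axis, which points in the front/reference direction):

```python
import numpy as np

def scan_to_points(angles_deg, ranges_m):
    """Convert an LRF scan (polar) into x-y points in the sensor-local frame.

    The y axis is the sensor's front (reference) direction and the x axis
    passes through the origin orthogonally to it, as described above.
    """
    theta = np.radians(np.asarray(angles_deg))  # angle from the reference direction
    r = np.asarray(ranges_m)
    x = r * np.sin(theta)                       # across the reference direction
    y = r * np.cos(theta)                       # along the reference direction
    return np.stack([x, y], axis=1)             # (n, 2) measurement points
```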
  • The coincidence degree calculation unit 12 calculates the degree of coincidence between the first position information, acquired by the position information acquisition unit 11 at the first time, and the second position information, acquired at the second time immediately before or after the first time, when the first position information is translated and rotated with respect to the second position information.
  • In the following description, the first time is time t and the second time is time t−1, but the present invention is not limited to this; the first time may be time t−1 and the second time may be time t.
  • Here, the time t indicates an acquisition timing of the position information acquired in time series by the position sensor. In the following, an acquisition timing of position information is also referred to as a frame.
  • The degree of coincidence is expressed using an evaluation value for evaluating the degree of coincidence between the position information at time t and the position information at time t−1.
  • Here, the evaluation value E is represented by Expression (1): E(R, T) = n − Σ_i f(a_i, {b_j}) … (1)
  • a_i represents each measurement point of the position information at time t, and b_j represents each measurement point of the position information at time t−1.
  • R represents the rotational movement amount of each measurement point at time t with respect to time t−1, and T represents the parallel movement amount of each measurement point with respect to time t−1.
  • i is an index specifying a measurement point of the position information at time t, and j is an index specifying a measurement point of the position information at time t−1.
  • ε is a distance determined in consideration of the accuracy of the laser range finder.
  • n indicates the number of measurement points.
  • Expression (1) can be interpreted as imposing a penalty of 1 on the evaluation value E whenever a measurement point a_i, after being rotated by R and translated by T, does not lie within the distance ε of any measurement point b_j. Therefore, Expression (1) indicates that the smaller the evaluation value E, the higher the degree of coincidence between the position information at time t−1 and the position information at time t.
  • As f, f(a_i, {b_j}) = 1 (∃j, ‖R a_i + T − b_j‖ ≤ ε) or 0 (otherwise) may be used.
  • That is, when the transformed measurement point a_i lies within the distance ε of some measurement point b_j, one point is given, and the total of these points, Σ_i f(a_i, {b_j}), increases as the degree of coincidence increases.
  • Here, a norm means a distance measure; besides the L0 norm there are others such as the L2 norm. The L2 norm, the sum of squared distances, has been widely used.
  • FIG. 5A is a conceptual diagram illustrating a method for calculating the evaluation value E.
  • FIG. 5B shows a state where the measurement points a 1 to a 3 constituting the position information at time t are rotationally moved.
  • FIG. 5C is a diagram showing a state in which the evaluation value E is calculated at the measurement points a 1 to a 3 and the measurement points b 1 to b 3 that have been rotated and moved.
  • In FIG. 5C, the measurement points a_1 and a_2 are located within the distance ε of the measurement points b_1 and b_2, but the measurement point a_3 does not exist within the distance ε of the measurement point b_3. Therefore, the evaluation value E is 1.
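
To make the computation concrete, here is a minimal Python sketch of Expression (1) under the L0 norm. The function name, the array layout, and the naive all-pairs distance check are our assumptions for illustration; the patent's faster hash-table method appears later:

```python
import numpy as np

def evaluation_value(a, b, R, T, eps):
    """Evaluation value E of Expression (1); smaller E = better coincidence.

    a   : (n, 2) array of measurement points at time t
    b   : (m, 2) array of measurement points at time t-1
    R   : (2, 2) rotation matrix, T : length-2 translation vector
    eps : match radius epsilon reflecting the range finder accuracy
    """
    moved = a @ R.T + T                         # R a_i + T for every a_i
    # all-pairs distances between the transformed scan and the reference scan
    d = np.linalg.norm(moved[:, None, :] - b[None, :, :], axis=2)
    matched = (d <= eps).any(axis=1)            # f(a_i, {b_j}) in {0, 1}
    return len(a) - int(matched.sum())          # penalty of 1 per unmatched a_i
```

In the FIG. 5 example this would return 1, since a_3 alone has no neighbor within ε.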
  • The estimation unit 13 varies the parallel movement amount T and the rotational movement amount R of the position information at time t, searches for the parallel movement amount T and the rotational movement amount R that maximize the degree of coincidence, and estimates the self-position. That is, the estimation unit 13 searches for the rotational movement amount R and the parallel movement amount T that minimize the evaluation value E shown in Expression (1).
  • a technique for performing such a search, creating a map based on the search result, and estimating the self-position is called SLAM.
  • Specifically, the estimation unit 13 repeatedly translates the local coordinate space of the position information at time t by the parallel movement amount T and rotates it by the rotational movement amount R, calculates the evaluation value E between the position information at time t−1 and the transformed position information at time t, and thereby searches for the parallel movement amount T and the rotational movement amount R that minimize the evaluation value E.
  • The estimation unit 13 obtains the parallel movement amount T that minimizes the evaluation value E as the movement amount of the moving body from time t−1 to time t, and obtains the position of the moving body at time t by adding this movement amount to the position of the moving body at time t−1.
  • Likewise, the estimation unit 13 obtains the rotational movement amount R that minimizes the evaluation value E as the change in orientation of the moving body from time t−1 to time t, and obtains the orientation of the moving body at time t by adding this change to the orientation at time t−1.
  • As the orientation of the moving body at time t, the angle of the moving body's reference direction with respect to a predetermined reference direction in a two-dimensional global coordinate space representing the surrounding space can be adopted.
  • As the position of the moving body at time t, for example, the position of the moving body in that global coordinate space can be adopted.
  • As the rotational movement amount R, for example, a 2×2 matrix indicating the rotation of the moving body at time t relative to the moving body at time t−1 can be adopted.
  • As the parallel movement amount T, for example, the values of the x and y components indicating the movement of the moving body from time t−1 to time t can be adopted.
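
The pose update itself can be sketched as follows. The assumption that T is expressed in the local frame at time t−1 (so it must be rotated into the global frame before being added) is ours, since the text does not spell out the frame convention:

```python
import numpy as np

def update_pose(pose_prev, R, T):
    """Compose the previous global pose with the per-frame motion (R, T).

    pose_prev : (x, y, theta) of the moving body at time t-1 in the
                global coordinate space.
    """
    x, y, theta = pose_prev
    c, s = np.cos(theta), np.sin(theta)
    gx = x + c * T[0] - s * T[1]            # movement amount added to the
    gy = y + s * T[0] + c * T[1]            # position at time t-1
    dtheta = np.arctan2(R[1, 0], R[0, 0])   # angle encoded by the 2x2 matrix R
    return (gx, gy, theta + dtheta)         # pose of the moving body at time t
```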
  • FIG. 6 is a diagram showing the reason why a matching error occurs between the position information at time t-1 and the position information at time t in a dynamic environment when the L2 norm is adopted as the evaluation value E.
  • The moving object 41 is a moving object such as a human, and the stationary object 42 is a stationary object such as a wall.
  • If only the stationary object 42 existed in the position information at times t and t−1, the stationary object 42 at time t−1 would match the stationary object 42 at time t when the minimum evaluation value E is obtained, and no error would occur. In a dynamic environment, however, the L2 norm also tries to match the moving object 41 across frames, which pulls the alignment and produces the matching error.
  • the present inventors have found that when the evaluation value E is calculated using the L0 norm, the matching error can be reduced. Therefore, in this embodiment, the evaluation value E is calculated using the L0 norm.
  • As noted above, a norm indicates a distance. With the L2 norm, which is the sum of squared distances, a non-linear least squares method can be used to obtain the optimal solution.
  • In contrast, the process of obtaining the minimum evaluation value E using the L0 norm is classified as a discrete optimization problem and is therefore NP-hard. Accordingly, when the L0 norm is adopted, the minimum evaluation value E must be searched for by exhaustively (brute-force) varying the parallel movement amount T and the rotational movement amount R of the position information measured at time t.
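
A brute-force search of this kind can be sketched as an exhaustive scan over a grid of candidate motions, reusing evaluation_value() from the sketch above. The grid bounds and step counts are illustrative (the experiments below use ±20 cm and ±0.3 rad):

```python
import itertools
import numpy as np

def brute_force_search(a, b, eps,
                       xs=np.linspace(-0.20, 0.20, 21),
                       ys=np.linspace(-0.20, 0.20, 21),
                       thetas=np.linspace(-0.3, 0.3, 31)):
    """Exhaustively search the (T, R) minimizing E, as the L0 norm requires."""
    best, best_E = None, float("inf")
    for dx, dy, th in itertools.product(xs, ys, thetas):
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        E = evaluation_value(a, b, R, np.array([dx, dy]), eps)
        if E < best_E:
            best, best_E = (dx, dy, th), E
    return best, best_E
```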
  • The map generation unit 14 plots the position and orientation of the moving body at time t estimated by the estimation unit 13 in the global coordinate space, and generates a two-dimensional map of the surrounding space by plotting the position information at time t with the plotted position and orientation as a reference.
  • the position sensor 16 is constituted by a laser range finder, acquires position information in time series at a constant cycle, and supplies the position information to the position information acquisition unit 11.
  • As the laser range finder, for example, the SICK LMS100 can be adopted.
  • FIG. 7A is an external view of the SICK LMS100.
  • FIG. 7B is an external view of a moving body according to the embodiment of the present invention. As shown in FIG. 7B, the position sensor 16 is attached to the front side of the moving body.
  • FIG. 8 is a table showing the specifications of the position sensor 16. As shown in FIG. 8, the position sensor 16 has an angle of view of 270 degrees, an angular resolution of 0.5 degrees, a measurable range of 20 m, and a measurement frequency of 30 Hz, and acquires 540 measurement points per scan.
  • the odometry calculation unit 15 calculates the odometry of the moving body according to the measurement data supplied from the rotation amount sensor 17 attached to the wheel of the moving body, and supplies the odometry to the estimation unit 13.
  • Here, the odometry is the self-position of the moving body at time t with reference to time t−1, estimated from the measurement data of the rotation amount sensor 17.
  • the odometry calculation unit 15 calculates odometry in time series with the same cycle as the position sensor 16.
  • the rotation amount sensor 17 is attached to each of the pair of left and right wheels of the moving body, and supplies measurement data indicating the rotation amount of the wheels of the moving body to the odometry calculation unit 15. Here, the rotation amount sensor 17 acquires measurement data at the same cycle as the position sensor 16.
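
The patent does not give the odometry formula, but for a differential-drive platform such as this one a standard sketch is the following; wheel_radius and tread (the distance between the wheels) are assumed parameters that the text does not specify:

```python
def wheel_odometry(dphi_left, dphi_right, wheel_radius, tread):
    """Per-cycle odometry from the left/right wheel rotation amounts [rad].

    Returns the forward distance and heading change of the body over one
    sensor cycle, in the local frame (standard differential-drive model).
    """
    d_left = wheel_radius * dphi_left      # distance rolled by the left wheel
    d_right = wheel_radius * dphi_right    # distance rolled by the right wheel
    ds = (d_left + d_right) / 2.0          # translation of the body centre
    dtheta = (d_right - d_left) / tread    # change in heading
    return ds, dtheta
```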
  • the moving body includes a pair of left and right wheels 91 and 91 provided at the front, and a pair of left and right wheels 92 and 92 provided at the rear.
  • the pair of rotation amount sensors 17 are attached to the wheels 92 and 92, respectively, and measure the rotation amounts of the wheels 92 and 92, respectively.
  • The moving body further includes a frame 93 provided above the wheels 91 and 92, and a computer 94 placed on the front side of the bottom surface 931 of the frame 93; further equipment is attached to the back side of the bottom surface 931.
  • the position sensor 16 is attached to a bar 932 provided on the upper part of the frame 93.
  • the computer 94 realizes the functions of the position information acquisition unit 11 to the odometry calculation unit 15 described above.
  • the computer 94 is mounted with a drive control program for the moving body, and controls the driving of the moving body.
  • the computer 94 refers to the two-dimensional map generated by the map generation unit 14 and drives the motor 95 so as to move toward a predetermined goal while avoiding a collision with an object.
  • the motor 95 is constituted by a pair of motors attached to the wheels 92 and 92, for example, and is driven based on a signal from the computer 94.
  • the pair of left and right motors 95 turns the moving body by changing the amount of rotation of the wheels 92 and 92 in accordance with a signal from the computer 94.
  • FIG. 2 is a flowchart showing the operation of the self-position estimation apparatus according to the embodiment of the present invention.
  • the position information acquisition unit 11 acquires position information at time t (S1).
  • The estimation unit 13 generates a plurality of search candidates by varying the parallel movement amount T and the rotational movement amount R of the position information at time t, causes the coincidence calculation unit 12 to calculate the evaluation value E of each search candidate, and searches for the search candidate that minimizes the evaluation value E (S2).
  • the estimation unit 13 estimates the self position of the moving object from the rotational movement amount R and the parallel movement amount T of the search candidate searched in S2 (S3).
  • the map generation unit 14 plots the self-position estimated by the estimation unit 13 in a two-dimensional coordinate space, and generates a two-dimensional map of the surrounding space of the moving object (S4).
  • When the process of S4 ends, the process returns to S1, and the processes of S1 to S4 are repeated.
  • FIG. 3 is a flowchart showing details of the search process shown in S2 of FIG.
  • First, the estimation unit 13 initializes the variable k to 0 (S21).
  • Next, the estimation unit 13 generates n1 (n1 is an integer of 2 or more) search candidates (S22). Specifically, the estimation unit 13 obtains a prior probability of the position of the moving body from the self-position estimated by the odometry calculation unit 15, randomly determines n1 patterns of the rotational movement amount R and the parallel movement amount T according to the prior probability, and generates the n1 search candidates by shifting the position information at time t with respect to the position information at time t−1 using the determined n1 patterns of R and T.
  • FIG. 9 is a diagram for explaining processing for generating search candidates.
  • In FIG. 9, the process proceeds from (A) to (D).
  • Each point in FIG. 9 represents a search candidate, i.e., a point whose coordinates collect the elements of the rotational movement amount R and the parallel movement amount T applied to the position information at time t.
  • In (A), n1 search candidates have been generated by the process of S22.
  • Next, the coincidence degree calculation unit 12 calculates, for each of the n1 search candidates, the evaluation value E with respect to the position information at time t−1 using Expression (1) (S23).
  • the estimation unit 13 calculates the weight value wj of each of the n1 search candidates from the evaluation value E of the n1 search candidates (S25).
  • The weight value wj is defined by Expression (2):
  • wj = exp((m − E(R, T)) / k) (2)
  • m and k are constants.
  • m for example, the assumed maximum value of the evaluation value E can be adopted.
  • j is an index for specifying a search candidate. As shown in Expression (2), the weight value wj increases as the evaluation value E decreases.
  • Next, the estimation unit 13 extracts n2 (n2 < n1) search candidates from the n1 search candidates based on the weight values wj (S26).
  • For example, the estimation unit 13 may perform a lottery process in which the probability of selection increases with the weight value wj, and thereby extract the n2 search candidates.
  • In this way, n2 search candidates are extracted from the n1 search candidates shown in FIG. 9(A), as shown in FIG. 9(B).
  • Next, in S27, the estimation unit 13 generates n3 search candidates for each of the n2 search candidates. In this case, the estimation unit 13 may determine n3 patterns of the rotational movement amount R and the parallel movement amount T according to a normal distribution of (σx, σy, σθ) centered on each of the n2 search candidates, and generate the n3 search candidates accordingly.
  • In FIG. 9(C), n3 search candidates are generated around each of the extracted search candidates.
  • The number n3 may also be varied according to the weight value wj of each of the n2 search candidates.
  • For example, the values of n3(1) to n3(3) may be determined according to the weight values of the corresponding candidates, with n3(1) > n3(2) > n3(3).
  • By the process of S27, n3 search candidates are generated for each of the n2 search candidates, so that a total of n2 × n3 search candidates are generated.
  • the estimation unit 13 increments the variable k by 1 (S28), and returns the process to S23.
  • When the variable k reaches K, the estimation unit 13 specifies the search candidate having the smallest evaluation value E (S29). At this point the evaluation value E has been calculated for all search candidates generated in S27.
  • In this way, the estimation unit 13 repeats the processing of S23 to S28 K times and specifies the search candidate having the smallest evaluation value E. When generating search candidates, preferential sampling is performed in which candidates whose rotational movement amount R and parallel movement amount T are likely to be the solution are generated preferentially, so the candidate serving as the solution can be obtained efficiently.
  • In addition, the search candidates are narrowed down from n1 to n2 before the subsequent processing is performed.
  • As described above, this embodiment employs a technique of narrowing down the solution and proceeding to the next step, namely maximum a posteriori (MAP) estimation instead of Bayesian estimation. This leaves some concern about the accuracy of the solution, but since the L0 norm is robust against moving objects, applying MAP estimation poses no problem. As a result, an increase in the number of search candidates can be suppressed, and the search for the minimum evaluation value E can be sped up.
  • S22 corresponds to the first process
  • S25 and S26 correspond to the second process
  • S27 corresponds to the third process
  • S24 and S29 correspond to the fourth process.
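
Putting S21 to S29 together, the search can be sketched as an importance-sampling loop with MAP selection. Here prior_sampler, the perturbation sigmas, and the helper names are our assumptions; kappa stands in for the constant k of Expression (2), and evaluation_value() is the earlier sketch:

```python
import numpy as np

def preferential_search(a, b, eps, prior_sampler, n1, n2, n3, K, m, kappa,
                        rng=np.random.default_rng()):
    """Sketch of the S21-S29 flow: sample, weight, resample, refine, select."""
    def E_of(cand):
        dx, dy, th = cand
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        return evaluation_value(a, b, R, np.array([dx, dy]), eps)

    cands = list(prior_sampler(n1))                 # S22: draw from odometry prior
    for _ in range(K):                              # S23-S28 repeated K times
        E = np.array([E_of(c) for c in cands])
        w = np.exp((m - E) / kappa)                 # S25: Expression (2)
        idx = rng.choice(len(cands), size=n2, p=w / w.sum())  # S26: lottery by weight
        sigma = np.array([0.02, 0.02, 0.01])        # assumed (sigma_x, sigma_y, sigma_theta)
        cands = [np.asarray(cands[i]) + rng.normal(0.0, 1.0, 3) * sigma
                 for i in idx for _ in range(n3)]   # S27: n2 * n3 new candidates
    E = np.array([E_of(c) for c in cands])
    return cands[int(np.argmin(E))]                 # S29: MAP (minimum-E) candidate
```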
  • For the neighborhood search, LSH (Locality-Sensitive Hashing) can be used to classify the data by hash value (Alexandr Andoni, Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab Mirrokni: "Locality-Sensitive Hashing Scheme Based on p-Stable Distributions", Nearest Neighbor Methods in Learning and Vision, MIT Press, 2006).
  • the process of searching for the minimum evaluation value E is difficult to optimize, and is often calculated by the brute force method. Accordingly, the amount of calculation increases explosively depending on the search range and resolution. Therefore, as a method for calculating the evaluation value E, it is preferable to employ a method as fast as possible.
  • In calculating the evaluation value E, the dominant computational load is the r-neighborhood search, i.e., finding the points within a distance r of a query point.
  • The amount of calculation of a naive r-neighborhood algorithm is O(n), and it can be reduced to O(log n) by using a kd-tree or a Voronoi diagram.
  • LSH has the property that the closer two data points are, the higher the probability that they take the same hash value. Since only data classified into the same hash value need be considered as candidates for neighboring points, the amount of calculation can be greatly reduced.
  • FIG. 4 is a flowchart showing details of the evaluation value E calculation process.
  • First, the coincidence calculation unit 12 generates a plurality of hash tables from the position information at time t−1 (S41).
  • A hash table indicates, for each position in a lattice space, whether or not a measurement point exists there; it is obtained by projecting each measurement point of the position information at time t−1 onto the lattice space using a predetermined hash function.
  • The lattice space is generated by dividing the local coordinate space of the position information at time t−1 into a lattice based on the distance ε.
  • the hash function h is expressed by Expression (3).
  • h(b) = ⟨(p · b + q) / ε⟩ (3)
  • ⟨·⟩ represents the largest integer not exceeding (p · b + q) / ε (i.e., the floor operation).
  • p is determined by a normal distribution with 1 as an average.
  • q is randomly determined in the range of 0 to ⁇ by a uniform distribution.
  • b indicates a measurement point.
  • hx(b) = ⟨(p · bx + qx) / ε⟩ (3-1)
  • hy(b) = ⟨(p · by + qy) / ε⟩ (3-2)
  • qx and qy are x and y components of q, and are randomly determined in the range of 0 to ⁇ by uniform distribution.
  • bx and by are x and y components of the measurement point b.
  • The measurement point b is located in the local coordinate space defined by the x axis and the y axis. Accordingly, the coincidence calculation unit 12 obtains the x coordinate of the measurement point b in the lattice space using hx(b) as in Expression (3-1), and the y coordinate using hy(b) as in Expression (3-2).
  • FIG. 10 is a conceptual diagram showing a hash table.
  • FIG. 10A shows a lattice space set in the local coordinate space of the position information at time t-1.
  • FIG. 10B shows hash tables T1 to T3 generated based on the lattice space shown in FIG.
  • In FIG. 10(A), each measurement point b is plotted at the position given by Expressions (3-1) and (3-2) before the ⟨·⟩ operation is applied.
  • For example, a measurement point b exists in the cell in the first row and first column. Accordingly, when the hash values hx(b) and hy(b) are obtained by applying the ⟨·⟩ operation to this measurement point, the measurement point b is assigned, as shown in FIG. 10(B), to the bin in the first row and first column of the hash table T1.
  • Therefore, a label of 1 indicating the presence of a measurement point b is set in the bin in the first row and first column of the hash table T1.
  • On the other hand, no measurement point b exists in the cell in the first row and third column. Therefore, as shown in FIG. 10(B), a label of 0 indicating that no measurement point b exists is set in the bin in the first row and third column of the hash table T1.
  • the measurement point b is projected onto the lattice space using the equations (3-1) and (3-2), and the hash table T1 is generated.
  • the coincidence calculation unit 12 prepares a plurality of types of hash functions represented by the expressions (3-1) and (3-2), and generates a plurality of hash tables.
  • Hash tables T1 to T3 are generated.
  • The coincidence calculation unit 12 may generate the plurality of hash function types by setting the values of p, qx, and qy in Expressions (3-1) and (3-2) to different values. Alternatively, hash functions of the following form may be used:
  • hx(b) = ⟨(bx + qx) / p⟩ (3-1′)
  • hy(b) = ⟨(by + qy) / p⟩ (3-2′)
  • Next, the coincidence calculation unit 12 projects each measurement point a of the position information at time t onto the lattice space using the hash functions h1 to h3, and obtains the hash values h1(a) to h3(a) of the measurement point a (S42).
  • Then, for each measurement point a, the coincidence calculation unit 12 imposes no penalty on the evaluation value E if any one of the obtained hash values h1(a) to h3(a) points to an occupied bin in the corresponding hash table T1 to T3, and imposes a penalty of 1 otherwise, thereby calculating the evaluation value E (S43).
  • For example, suppose a label of 1 is set in the bin of the hash table T1 indicated by the hash value h1(a), while 0 is set in the bins of the hash tables T2 and T3 indicated by the hash values h2(a) and h3(a). In this case, some measurement point b is regarded as being located within the distance ε of the measurement point a, and no penalty is imposed on the evaluation value E.
  • The coincidence calculation unit 12 performs this processing for every measurement point a, and calculates the total of the imposed penalties as the evaluation value E.
  • In this way, the amount of calculation for referring to the hash tables is O(1) per measurement point.
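
A minimal sketch of the table construction (S41) and penalty test (S42 to S43), under the grid-cell reading of Expressions (3-1) and (3-2) above. The spread of the normal distribution for p and all function names are our choices, not the patent's:

```python
import numpy as np
from math import floor

def build_hash_tables(b, eps, n_tables=3, rng=np.random.default_rng(0)):
    """Each table is one hash function (p, qx, qy) plus its set of occupied cells."""
    tables = []
    for _ in range(n_tables):
        p = rng.normal(1.0, 0.1)                # mean 1 per the text; spread assumed
        qx, qy = rng.uniform(0.0, eps, size=2)  # uniform on [0, eps] per the text
        cells = {(floor((p * x + qx) / eps),    # Expression (3-1)
                  floor((p * y + qy) / eps))    # Expression (3-2)
                 for x, y in b}
        tables.append((p, qx, qy, cells))
    return tables

def evaluation_value_lsh(a_moved, eps, tables):
    """E via hash lookup: a point escapes the penalty if ANY table hits."""
    E = 0
    for x, y in a_moved:
        hit = any((floor((p * x + qx) / eps),
                   floor((p * y + qy) / eps)) in cells
                  for p, qx, qy, cells in tables)
        E += 0 if hit else 1                    # penalty of 1 when no table hits
    return E
```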
  • In the above description, the position information is two-dimensional, but it is not limited to this and may be three-dimensional.
  • a depth sensor may be employed as the position sensor 16.
  • the surrounding space may be represented by a three-dimensional global coordinate space, and the estimated position of the moving body may be plotted to adopt a three-dimensional map.
  • a wheelchair-type mobile robot EMC-230 was adopted as a moving body, and a laser range finder (LRF) having the specifications shown in FIG. 8 was attached thereto for sensing.
  • LRF laser range finder
  • Information sensed by the LRF is expressed in polar coordinates and is equally spaced in the angular direction, so the density of points becomes sparse as the distance from the sensor increases. Therefore, the data was segmented and interpolated so that the distance between points within a segment is at most ε.
  • the first floor of the Nara Institute of Science and Technology Information Building was used as the experiment site.
  • the dynamic environment was measured by LRF, and the positional information at time t and time t-1 was matched.
  • the dynamic environment was measured while the moving body was stationary.
  • The dynamic environment was a hallway about 10 m wide and 20 m deep, in which about 25 pedestrians were moving.
  • FIG. 11 (A) is a diagram showing an experimental place
  • FIG. 11 (B) is a diagram showing a dynamic environment in which a person moves to and from this experimental place.
  • FIG. 12 shows a two-dimensional map of the surrounding space generated in a static environment where no person is traveling to and from the experiment site. This two-dimensional map is generated using RBPF-SLAM and can be regarded as a true value.
  • The search range when searching for the minimum evaluation value E was set to ±20 [cm] in each of the x-axis and y-axis directions and ±0.3 [rad] in angle, and the minimum evaluation value E was searched for within this range.
  • FIG. 13 is a table showing experimental results when the L2 norm, M-estimator, and L0 norm are used as the evaluation value E.
  • The matching error is at most 1 [cm], which is within the measurement error of the LRF.
  • position information for 200 frames was measured by LRF, and the position information of adjacent frames was matched using the L2 norm, M-estimator, and L0 norm to generate a two-dimensional map.
  • FIG. 14 is a two-dimensional map created when the L2 norm is adopted.
  • FIG. 15 is a two-dimensional map created when the M-estimator is employed.
  • FIG. 16 is a two-dimensional map created when the L0 norm is adopted.
  • the calculation time was compared when LSH was adopted, when Brute-force was adopted, and when kd-tree was adopted.
  • The L0 norm was used as the evaluation value E.
  • the search range is the same as the above value.
  • A total of 24,000 evaluation values E were calculated, and the total calculation time (Average calculation time), the number of L0-norm evaluations computable per second (Frequency of calculation), and the average number of measurement points for which r-neighbors were found (Average match points) were compared.
  • FIG. 17 is a table summarizing the results of the experiment comparing calculation times with Brute-force, kd-tree, and LSH.
  • The total calculation time is shorter with LSH than with Brute-force or the kd-tree.
  • On the other hand, with LSH the average number of measurement points for which r-neighbor points were found (Average match points) is as low as 15. This is because the neighborhood point search by LSH is approximate.
  • FIG. 18 is a two-dimensional map generated using RBPF-SLAM in a static environment. This two-dimensional map is regarded as a true value.
  • FIG. 19 is a two-dimensional map generated by matching position information with an evaluation value E using an L0 norm using position information measured while moving a moving body in a dynamic environment.
  • the points present in this two-dimensional map indicate the trajectory of the moving person.
  • FIG. 20 is a two-dimensional map generated by matching the positional information with the evaluation value E using the L2 norm. Comparing the edge shapes of both two-dimensional maps, it can be seen that the L0 norm generates a two-dimensional map relatively accurately even in a dynamic environment.
  • FIG. 21 is a two-dimensional map generated using RBPF-SLAM. This two-dimensional map is regarded as a true value.
  • FIG. 22 is a two-dimensional map generated using the L0 norm. This two-dimensional map is generated under the same environment as in FIG. The distortion appearing in FIG. 22 can be improved by applying a technique such as loop closing.
  • The self-position estimation apparatus described above is a self-position estimation apparatus that estimates the self-position of a moving body, and includes: a position information acquisition unit that acquires, in time series, position information of each object existing in the space surrounding the moving body; a coincidence degree calculation unit that calculates the degree of coincidence between first position information acquired by the position information acquisition unit at a first time and second position information acquired at a second time immediately before or after the first time in the time series, when the first position information is translated and rotated with respect to the second position information; and an estimation unit that varies the parallel movement amount and the rotational movement amount of the first position information, searches for the parallel movement amount and the rotational movement amount that maximize the degree of coincidence, and estimates the self-position. The coincidence degree calculation unit gives a predetermined point to the degree of coincidence when, for a certain measurement point constituting the first position information, a certain measurement point constituting the second position information exists within a certain distance, and calculates the total of the given points as the degree of coincidence.
  • According to this configuration, the first position information is shifted with respect to the second position information, and the parallel movement amount and the rotational movement amount that maximize the degree of coincidence are searched for.
  • Here, a predetermined point is given to the degree of coincidence if a measurement point of the second position information is located within a certain distance of a measurement point constituting the first position information. That is, in this configuration, the degree of coincidence is calculated using the L0 norm, which is robust even in a dynamic environment containing moving objects as well as stationary objects. Therefore, the self-position of the moving body can be accurately estimated in an unknown environment containing moving objects.
  • the degree-of-match calculation unit generates a lattice space by dividing the peripheral space into a lattice shape based on the fixed distance, and uses the predetermined hash function to calculate each measurement point of the second position information. Projecting onto the lattice space, generating a hash table indicating the presence or absence of the measurement points at each position in the lattice space, and projecting each measurement point of the first position information onto the lattice space using the hash function When the position of a certain measurement point in the lattice space is present in the hash table, the point is preferably given to the degree of coincidence.
  • each measurement point of the second position information is projected onto the lattice space using a predetermined hash function, and a hash table indicating the presence / absence of each measurement point is generated. Then, each measurement point of the first position information is also projected onto the lattice space using a hash function, and if the position of the projected measurement point exists in the hash table, it is in the vicinity of a certain measurement point constituting the first position information. A certain measurement point constituting the second position information is considered to exist, and a predetermined point is given to the degree of coincidence. As described above, in this configuration, since the degree of coincidence is calculated by comparing the hash values of the measurement points of the first and second position information, the parallel movement amount and the rotational movement amount that maximize the degree of coincidence are calculated. It can be obtained at high speed.
  • Preferably, the coincidence calculation unit generates a plurality of the hash tables using a plurality of different hash functions, projects each measurement point of the first position information onto the lattice space using each hash function, and gives the point to the degree of coincidence when the position of the measurement point exists in any one of the hash tables.
  • Preferably, the estimation unit includes: a first process of randomly varying the parallel movement amount and the rotational movement amount to generate n1 (n1 is a positive integer) search candidates from the first position information; a second process of calculating a weight value of each search candidate based on the degree of coincidence of the search candidates and extracting n2 (n2 < n1) search candidates based on the calculated weight values; a third process of generating, for each of the extracted n2 search candidates, n3 (n3 is a positive integer) search candidates by varying the parallel movement amount and the rotational movement amount based on a predetermined probability distribution; and a fourth process of calculating the weight value for each of the n3 search candidates and obtaining the search candidate with the maximum calculated weight value as the search solution.
  • According to this configuration, the first position information is shifted with a focus on positions that are likely to be the solution, a plurality of search candidates are generated, and the degree of coincidence with the second position information is obtained. Therefore, the search candidate that maximizes the degree of coincidence can be calculated efficiently.
  • Preferably, the second and third processes are repeated a predetermined number of times, and the search candidate that maximizes the weight value is obtained as the search solution.
  • Preferably, an odometry calculation unit that calculates the odometry of the moving body is further provided, and the first process generates the n1 search candidates by randomly varying the parallel movement amount and the rotational movement amount based on a prior probability of the position of the moving body obtained from the odometry.
  • According to this configuration, the search candidates are generated by shifting the first position information with a focus on positions that are likely to be the solution, so the search accuracy can be further improved.
  • The moving body described above includes the self-position estimation device of any one of the above configurations and a position sensor that acquires the position information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Navigation (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A coincidence degree calculation unit (12) calculates the degree of coincidence of two pieces of position information when position information acquired at a time (t) by a position information acquisition unit (11) is translated and rotated with respect to position information acquired at a time (t-1). An estimation unit (13) estimates the self-position by varying the parallel movement amount (T) and the rotational movement amount (R) of the position information at time (t) to search for the parallel movement amount (T) and the rotational movement amount (R) for which the degree of coincidence is greatest. When a certain measurement point of the position information at time (t) is present within a distance (ε) of a certain measurement point of the position information at time (t-1), the coincidence degree calculation unit (12) gives a predetermined point to the degree of coincidence.
PCT/JP2011/006846 2011-06-21 2011-12-07 Self-position estimation device, self-position estimation method, self-position estimation program, and moving body WO2012176249A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013521307A JP5892663B2 (ja) 2011-06-21 2011-12-07 Self-position estimation device, self-position estimation method, self-position estimation program, and moving body

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-137623 2011-06-21
JP2011137623 2011-06-21

Publications (1)

Publication Number Publication Date
WO2012176249A1 true WO2012176249A1 (fr) 2012-12-27

Family

ID=47422131

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/006846 WO2012176249A1 (fr) 2011-06-21 2011-12-07 Self-position estimation device, self-position estimation method, self-position estimation program, and moving body

Country Status (2)

Country Link
JP (1) JP5892663B2 (fr)
WO (1) WO2012176249A1 (fr)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015151770A1 (fr) * 2014-03-31 2015-10-08 株式会社日立産機システム Système de génération de carte tridimensionnelle
JP2016074053A (ja) * 2014-10-06 2016-05-12 トヨタ自動車株式会社 ロボット
JP2016081327A (ja) * 2014-10-17 2016-05-16 村田機械株式会社 移動量推定装置、自律移動体、及び移動量の推定方法
JP2016095590A (ja) * 2014-11-12 2016-05-26 村田機械株式会社 移動量推定装置、自律移動体、及び移動量の推定方法
JP2016103158A (ja) * 2014-11-28 2016-06-02 村田機械株式会社 移動量推定装置、自律移動体、及び移動量の推定方法
WO2017057054A1 (fr) * 2015-09-30 2017-04-06 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP2017130017A (ja) * 2016-01-20 2017-07-27 ヤフー株式会社 情報処理装置、情報処理方法及びプログラム
WO2019044500A1 (fr) * 2017-09-04 2019-03-07 日本電産株式会社 Système d'estimation de position et corps mobile comprenant ledit système
CN110095788A (zh) * 2019-05-29 2019-08-06 电子科技大学 一种基于灰狼优化算法的rbpf-slam改进方法
JP2021176052A (ja) * 2020-05-01 2021-11-04 株式会社豊田自動織機 自己位置推定装置
WO2022064665A1 (fr) * 2020-09-28 2022-03-31 日本電気株式会社 Dispositif de mesure, dispositif de traitement d'informations, procédé d'ajustement de position et support lisible par ordinateur
JP2022553248A (ja) * 2019-10-16 2022-12-22 ホアウェイ・テクノロジーズ・カンパニー・リミテッド 自律的な乗り物のリアルタイムの位置特定のための方法およびシステム

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596409B (zh) * 2018-07-16 2021-07-20 Jiangsu Zhitong Traffic Technology Co., Ltd. Method for improving the accuracy of accident risk prediction for traffic-hazardous persons

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08261719A (ja) * 1995-03-17 1996-10-11 Toshiba Corp Relative movement amount calculation device and relative movement amount calculation method
JP2008040677A (ja) * 2006-08-03 2008-02-21 Toyota Motor Corp Self-position estimation device
JP2010262546A (ja) * 2009-05-09 2010-11-18 Univ Of Fukui Two-dimensional figure matching method
JP2011048706A (ja) * 2009-08-28 2011-03-10 Fujitsu Ltd Device, method, and program for automatically generating a map by sensor fusion, and for moving a mobile body using such an automatically generated map

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002095534A2 (fr) * 2001-05-18 2002-11-28 Biowulf Technologies, Llc Methods for feature selection in a learning machine
EP1738232A4 (fr) * 2004-04-22 2009-10-21 Frontline Robotics Open control system architecture for autonomous mobile systems
US20070156471A1 (en) * 2005-11-29 2007-07-05 Baback Moghaddam Spectral method for sparse principal component analysis

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08261719A (ja) * 1995-03-17 1996-10-11 Toshiba Corp Relative movement amount calculation device and relative movement amount calculation method
JP2008040677A (ja) * 2006-08-03 2008-02-21 Toyota Motor Corp Self-position estimation device
JP2010262546A (ja) * 2009-05-09 2010-11-18 Univ Of Fukui Two-dimensional figure matching method
JP2011048706A (ja) * 2009-08-28 2011-03-10 Fujitsu Ltd Device, method, and program for automatically generating a map by sensor fusion, and for moving a mobile body using such an automatically generated map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUSUKE HIEIDA ET AL.: "Realtime SLAM Using L0-norm Minimization Under Dynamic Crowded Environments", NARA INSTITUTE OF SCIENCE AND TECHNOLOGY, 24 March 2011 (2011-03-24), Retrieved from the Internet <URL:http://hdl.handle.net/10061/6230> [retrieved on 20120123] *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015151770A1 * 2014-03-31 2015-10-08 Hitachi Industrial Equipment Systems Co., Ltd. Three-dimensional map generation system
JPWO2015151770A1 (ja) * 2014-03-31 2017-05-25 Hitachi Industrial Equipment Systems Co., Ltd. Three-dimensional map generation system
JP2016074053A (ja) * 2014-10-06 2016-05-12 Toyota Motor Corporation Robot
JP2016081327A (ja) * 2014-10-17 2016-05-16 Murata Machinery, Ltd. Movement amount estimation device, autonomous mobile body, and movement amount estimation method
US10119804B2 2014-11-12 2018-11-06 Murata Machinery, Ltd. Moving amount estimating apparatus, autonomous mobile body, and moving amount estimating method
JP2016095590A (ja) * 2014-11-12 2016-05-26 Murata Machinery, Ltd. Movement amount estimation device, autonomous mobile body, and movement amount estimation method
JP2016103158A (ja) * 2014-11-28 2016-06-02 Murata Machinery, Ltd. Movement amount estimation device, autonomous mobile body, and movement amount estimation method
WO2017057054A1 * 2015-09-30 2017-04-06 Sony Corporation Information processing device, information processing method, and program
US10803600B2 2015-09-30 2020-10-13 Sony Corporation Information processing device, information processing method, and program
CN108449945A (zh) 2015-09-30 2018-08-24 Sony Corporation Information processing device, information processing method, and program
JPWO2017057054A1 (ja) 2015-09-30 2018-08-02 Sony Corporation Information processing device, information processing method, and program
JP2017130017A (ja) * 2016-01-20 2017-07-27 Yahoo Japan Corporation Information processing device, information processing method, and program
WO2019044500A1 * 2017-09-04 2019-03-07 Nidec Corporation Position estimation system and moving body including the position estimation system
JPWO2019044500A1 (ja) 2017-09-04 2020-10-01 Nidec Corporation Position estimation system and moving body including the position estimation system
CN110095788A (zh) * 2019-05-29 2019-08-06 University of Electronic Science and Technology of China Improved RBPF-SLAM method based on the grey wolf optimization algorithm
JP2022553248A (ja) * 2019-10-16 2022-12-22 Huawei Technologies Co., Ltd. Method and system for real-time localization of an autonomous vehicle
JP7358636B2 (ja) 2019-10-16 2023-10-10 Huawei Technologies Co., Ltd. Method and system for real-time localization of an autonomous vehicle
JP2021176052A (ja) * 2020-05-01 2021-11-04 Toyota Industries Corporation Self-position estimation device
JP7322799B2 (ja) 2020-05-01 2023-08-08 Toyota Industries Corporation Self-position estimation device
WO2022064665A1 * 2020-09-28 2022-03-31 NEC Corporation Measurement device, information processing device, position alignment method, and computer-readable medium
JP7452681B2 (ja) 2020-09-28 2024-03-19 NEC Corporation Measurement device, information processing device, alignment method, and program

Also Published As

Publication number Publication date
JP5892663B2 (ja) 2016-03-23
JPWO2012176249A1 (ja) 2015-04-27

Similar Documents

Publication Publication Date Title
JP5892663B2 (ja) Self-position estimation device, self-position estimation method, self-position estimation program, and moving body
US10096129B2 Three-dimensional mapping of an environment
US10549430B2 Mapping method, localization method, robot system, and robot
JP7270338B2 (ja) Method and apparatus for real-time mapping and localization
JP6405778B2 (ja) Object tracking method and object tracking device
US9864927B2 Method of detecting structural parts of a scene
EP2915138B1 Systems and methods for merging multiple maps for computer-vision-based tracking
KR100866380B1 (ko) Method for estimating a robot's self-position based on object recognition
CN104851094A (zh) Improvement method for an RGB-D-based SLAM algorithm
KR20090088516A (ko) Method for estimating a robot's self-position based on object recognition and surrounding environment information including recognized objects
KR20100072590A (ko) Map building method for a mobile platform in a dynamic environment
JP2005528707A (ja) Feature mapping between data sets
Lee et al. Vision-based kidnap recovery with SLAM for home cleaning robots
Ahn et al. A practical approach for EKF-SLAM in an indoor environment: fusing ultrasonic sensors and stereo camera
Andreasson et al. Mini-SLAM: Minimalistic visual SLAM in large-scale environments based on a new interpretation of image similarity
Daraei et al. Velocity and shape from tightly-coupled LiDAR and camera
KR101460313B1 (ko) 시각 특징과 기하 정보를 이용한 로봇의 위치 추정 장치 및 방법
Schmidt et al. The visual SLAM system for a hexapod robot
Alcantarilla et al. Learning visibility of landmarks for vision-based localization
Shoukat et al. Cognitive robotics: Deep learning approaches for trajectory and motion control in complex environment
Wadenbäck et al. Ego-motion recovery and robust tilt estimation for planar motion using several homographies
Sohn et al. Sequential modelling of building rooftops by integrating airborne LiDAR data and optical imagery: preliminary results
Gouda et al. Vision based slam for humanoid robots: A survey
Chai et al. Fast vision-based object segmentation for natural landmark detection on Indoor Mobile Robot
Dayoub et al. A sparse hybrid map for vision-guided mobile robots

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11868093

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2013521307

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11868093

Country of ref document: EP

Kind code of ref document: A1