WO2019154435A1 - Mapping method, image acquisition and processing system, and positioning method - Google Patents

Mapping method, image acquisition and processing system, and positioning method

Info

Publication number
WO2019154435A1
WO2019154435A1 (PCT/CN2019/075741)
Authority
WO
WIPO (PCT)
Prior art keywords
picture
parameter
connection
posture
guided vehicle
Prior art date
Application number
PCT/CN2019/075741
Other languages
English (en)
French (fr)
Inventor
孙宇
罗磊
周韬宇
肖尚华
Original Assignee
上海快仓智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201822023605.9U external-priority patent/CN211668521U/zh
Application filed by 上海快仓智能科技有限公司 filed Critical 上海快仓智能科技有限公司
Priority to JP2019531677A priority Critical patent/JP6977921B2/ja
Publication of WO2019154435A1 publication Critical patent/WO2019154435A1/zh

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 — Simultaneous control of position or course in three dimensions

Definitions

  • the invention generally relates to the field of intelligent warehousing, and in particular to a mapping method, an image acquisition processing system and a positioning method which can be used for intelligent warehousing.
  • the physical coordinate system is measured in common distance units, such as meters, decimeters, and centimeters, and may be described in integer, decimal, or fractional form, for example 1 meter, 1 decimeter, 1 centimeter, 0.55 meter, 0.2 decimeter, 1.4 centimeters, or one-half meter; the coordinate system direction is generally parallel to the building walls, or parallel to the north-south and east-west directions.
  • Automated guided vehicles (AGVs) that transport goods in smart warehouses often require precise positioning.
  • the accuracy of existing positioning methods usually does not meet operational requirements, especially when the position and attitude parameters of the AGV must be determined accurately; this hinders operation and control by operators.
  • the present invention provides a method for mapping a site, including: establishing or acquiring a coordinate system of the site; scanning the site to obtain pictures of calibration points, pictures of positions to be located, and the position parameters and attitude parameters corresponding to the pictures; and correcting the position and attitude parameters of the pictures of the positions to be located based on the pictures of the calibration points, the pictures of the positions to be located, and the position and attitude parameters.
  • the position parameter comprises an abscissa and an ordinate, and preferably also a vertical coordinate; the attitude parameter comprises a heading angle, and preferably also a pitch angle and a roll angle.
  • the correcting step comprises: constructing a set of connection points, each connection point comprising a picture, the position and attitude parameters corresponding to that picture, and whether the picture corresponds to a calibration point; and, based on the set of connection points, correcting the position and attitude parameters of the pictures of the positions to be located.
  • the correcting step includes: from the set of connection points, taking any two connection points whose distance does not exceed a predetermined value as a connection, to establish a connection set; for the two connection points included in each connection of the set, calculating a connection confidence between them, and filtering out those connections whose connection confidence is higher than a predetermined threshold, as a construction connection set; and, based on the construction connection set, correcting the position and attitude parameters of the pictures of the positions to be located.
  • the correcting step further comprises performing a gradient descent method on the construction connection set, wherein, when the initialization step of the gradient descent method is performed, the position and attitude parameters of the pictures of the non-calibration connection points are used as the initial iteration parameters of the gradient descent method.
  • the correcting step further comprises performing the gradient descent method until the iterative rate of change falls below a predetermined threshold.
  • the method further comprises: storing the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the calibration-point pictures, and the corrected position and attitude parameters of the pictures of the positions to be located in a database or a file, to establish a map.
  • the coordinate system is a physical coordinate system.
  • the predetermined value is half of the length or width of the picture.
  • the present invention also provides an automated guided vehicle for image acquisition, comprising: a base; a camera mounted on the base and configured to capture pictures of the area under the base; and a measuring assembly mounted on the base and configured to measure or calculate the position and attitude parameters of the automated guided vehicle corresponding to each picture.
  • the automated guided vehicle further includes a lighting device mounted on the base and configured to illuminate an area under the base for the camera to capture a picture.
  • the automated guided vehicle further includes a control device mounted on the base, the camera and the measuring assembly being coupled to the control device, the control device being configured to control the vehicle to travel to the calibration points and the positions to be located, to capture pictures of the calibration points and pictures of the positions to be located.
  • the automated guided vehicle further includes a processing device coupled to the camera and the measuring assembly, which corrects the position and attitude parameters of the pictures of the positions to be located based on the pictures and the position and attitude parameters.
  • the processing device corrects the position and attitude parameters of the pictures of the positions to be located by: constructing a set of connection points, each connection point including a picture, the position and attitude parameters corresponding to that picture, and whether the picture corresponds to a calibration point; from the set of connection points, taking any two connection points whose distance does not exceed a predetermined value as a connection, to establish a connection set; for the two connection points included in each connection of the set, calculating a connection confidence between them, and filtering out those connections whose confidence is above a predetermined threshold, as a construction connection set; and performing a gradient descent method on the construction connection set until the iterative rate of change falls below a predetermined threshold, wherein, when the initialization step of the gradient descent method is performed, the position and attitude parameters of the pictures of the non-calibration connection points are used as the initial iteration parameters of the gradient descent method.
  • the automated guided vehicle further includes a hood mounted on the base for softening the light emitted by the lighting device, the lighting device preferably being mounted around the hood.
  • the measuring component is an inertial navigation measuring component.
  • the position parameter comprises an abscissa and an ordinate, preferably also a vertical coordinate; the attitude parameter comprises a heading angle, preferably also a pitch angle and a roll angle.
  • the measuring component comprises a laser SLAM measuring device and/or a visual SLAM measuring device.
  • the processing device is configured to store the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the calibration-point pictures, and the corrected position and attitude parameters of the pictures of the positions to be located in a database or a file, to establish a map library.
  • the present invention also provides an image acquisition and processing system comprising: an automated guided vehicle as described above; and a processing device coupled to the camera and the measuring assembly, which corrects the position and attitude parameters of a picture based on the picture and the position and attitude parameters.
  • the processing device is configured to perform the mapping method described above.
  • the present invention also provides a mapping and positioning system for an automated guided vehicle, comprising: a camera configured to capture images under the automated guided vehicle; a lighting device configured to illuminate the area below the automated guided vehicle; an inertial navigation measuring component configured to measure the position and attitude parameters of the automated guided vehicle; and a processing device, to which the camera and the inertial navigation measuring component are coupled, configured to correct the position and attitude parameters of a picture based on the image, the position parameter, and the attitude parameter.
  • the processing device is configured to perform the mapping method according to any one of claims 1-10.
  • the present invention also provides an apparatus for mapping a site, comprising: means configured to establish or acquire a coordinate system of the site; means configured to scan the site to obtain pictures of calibration points, pictures of a plurality of positions to be located, and the position and attitude parameters corresponding to the pictures; and means for correcting the position and attitude parameters of the pictures of the positions to be located based on the pictures and the position and attitude parameters.
  • the present invention also provides a positioning method, comprising: loading or obtaining a map obtained by any one of the methods described above; capturing or obtaining a picture of a position to be located and the position and attitude parameters corresponding to the picture; and retrieving from the map the picture closest to the picture of the position to be located.
  • the positioning method further comprises: calculating a confidence, a position parameter offset, and a pose parameter offset between the picture of the position to be located and the picture of the closest distance using a phase correlation method.
  • if the confidence is not higher than a preset value, the closest picture is discarded, and a picture that is close to the picture of the position to be located and whose confidence is higher than the preset value is re-retrieved.
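The retrieve-then-fall-back logic of the positioning method might be sketched as follows, assuming each map entry carries a stored position and that some picture-matching confidence function (such as the phase correlation method) is available; all names and the data layout are assumptions, not from the patent:

```python
import math

def retrieve_match(query_xy, query_img, map_entries, match_confidence, min_conf):
    """map_entries: list of (picture, (x, y)) pairs from the map.
    Try map pictures from nearest stored position outward; discard any
    whose match confidence is too low, as the positioning method describes.
    Returns the accepted (picture, position) pair, or None."""
    ranked = sorted(
        map_entries,
        key=lambda entry: math.dist(query_xy, entry[1]),  # nearest first
    )
    for img, xy in ranked:
        if match_confidence(query_img, img) > min_conf:
            return img, xy          # accept this picture
        # else: discard and re-retrieve the next-nearest picture
    return None
```

In a real system `match_confidence` would be the phase-correlation confidence between the two pictures.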
  • FIG. 1 is a flow chart of a mapping method in accordance with one embodiment of the present invention.
  • FIG. 2 is a schematic diagram of physical coordinates in accordance with one embodiment of the present invention.
  • FIG. 3 is a schematic diagram of logical coordinates in accordance with one embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a connection point in accordance with one embodiment of the present invention.
  • Figure 5 is a schematic illustration of a calibration point in accordance with one embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for correcting the position and attitude parameters of a picture of a position to be located, in accordance with an embodiment of the present invention.
  • Figure 8 is a schematic illustration of a connection in accordance with one embodiment of the present invention.
  • Figure 9 shows a screenshot of the map after the physical coordinate system and the logical coordinate system are mapped.
  • Figure 10 is a schematic illustration of an automated guided vehicle for image acquisition, in accordance with one embodiment of the present invention.
  • FIG. 11 is a flow chart of a positioning method in accordance with one embodiment of the present invention.
  • Figure 12 is a block diagram of a computer program product in accordance with one embodiment of the present invention.
  • the terms “first” and “second” are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include one or more of the described features, either explicitly or implicitly.
  • the meaning of “plurality” is two or more unless specifically and specifically defined.
  • a connection or integral connection can be a mechanical connection, an electrical connection, or mutual communication; it can be direct or indirect through an intermediate medium, and it can be an internal connection between two elements or an interaction relationship between two elements.
  • the first feature "on” or “under” the second feature may include direct contact of the first and second features, and may also include first and second features, unless otherwise explicitly defined and defined. It is not in direct contact but through additional features between them.
  • the first feature “above”, “above” and “above” the second feature includes the first feature directly above and above the second feature, or merely indicating that the first feature level is higher than the second feature.
  • the first feature “below”, “below” and “below” the second feature includes the first feature directly above and above the second feature, or merely indicating that the first feature level is less than the second feature.
  • A mapping method 100 in accordance with a first embodiment of the present invention will now be described, used, for example, for mapping a venue.
  • In step S101, a coordinate system of the venue is established or acquired.
  • the coordinate system may be a physical coordinate system or a logical coordinate system, which is within the scope of the present invention.
  • the definition of the coordinate system usually includes the position of the origin, the direction of the XY coordinate axis, and so on.
  • the site to be located can be measured, and the physical coordinate system is established.
  • the physical coordinate system is measured in common distance units, such as meters, decimeters, and centimeters, and may be described in integer, decimal, or fractional form, such as 1 meter, 1 decimeter, 1 centimeter, 0.55 meter, 0.2 decimeter, 1.4 centimeters, or one-half meter.
  • the coordinate system direction is generally parallel to the building walls, or parallel to the north-south and east-west directions; a coordinate system established in accordance with the above principles is called, in this system, the physical coordinate system, as shown in Figure 2.
  • the coordinate system set according to the actual situation of the business is called the logical coordinate system in this system.
  • the logical coordinate system and the physical coordinate system may differ, for example, in that the logical coordinate system is generally described by integers, such as (1, 2) or (5, 10); the direction of the logical coordinate system does not necessarily coincide with that of the physical coordinate system; and the distance unit of the logical coordinate system is not necessarily a common physical unit but is defined by actual operational needs. For example, for points A, B, and C in Figure 3, the logical coordinates of point B are (3, 7), the logical coordinates of point A are (3, 8), the logical coordinates of point C are (4, 7), and the point at the lower left corner is the origin.
  • the logical position and the physical position may be completely identical, or there may be a certain conversion relationship between the two.
  • the reason for having a logical position is to facilitate the planning of business logic or the calculation of the map. For example, when shelves are placed, the position of a shelf is saved as a logical-coordinate position such as (3, 7); if the physical position were used instead, a description such as (4.05, 9.45) would appear, which is not conducive to the operator's understanding and operation. If the physical position is required, it can be obtained through the conversion relationship; generally, the conversion is a multiplication by a coefficient.
  • the spacing between logical positions is called the logical position spacing, and it can differ between the X direction and the Y direction. For example, if a shelf in the warehouse is 1.3 meters * 1.3 meters and the shelf spacing is 0.05 meters, the logical position spacing can be defined as 1.35 meters; if the shelf is 1.2 meters * 1.0 meters, the logical position spacing can be defined as 1.25 meters in the X-axis direction and 1.05 meters in the Y-axis direction, so that a device that needs physical positioning can find the shelf at the corresponding physical position.
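The coefficient-based conversion mentioned above (logical (3, 7) mapping to physical (4.05, 9.45) under a 1.35-meter spacing) can be sketched as follows; the function name and per-axis parameters are illustrative assumptions:

```python
def logical_to_physical(col, row, spacing_x=1.35, spacing_y=1.35):
    """Convert integer logical coordinates to physical coordinates (meters)
    by multiplying each axis by its logical position spacing."""
    return col * spacing_x, row * spacing_y
```

Rotated or non-linear coordinate conversions, as the text notes, would replace this simple per-axis multiplication.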
  • the above conversion is only a conventional conversion method, and there are more complicated conversion methods, such as coordinate system rotation conversion, non-linear conversion and other conversion methods, and the space is not detailed in this case.
  • the above description of the logical coordinate system is merely exemplary and not limiting.
  • the logical coordinate system refers to the coordinate system set according to the actual situation of the business.
  • the positional parameters in the logical coordinate system are not limited to integers and may also include decimals; all of these are within the scope of the invention. If the physical coordinate system or logical coordinate system of the site has been established in advance, it can be obtained from the corresponding file or database.
  • the physical coordinate system is taken as an example for explanation below.
  • In step S102, the site is scanned to acquire pictures of the calibration points (for the definition of a calibration point, see below), pictures of the positions to be located (preferably pictures of a plurality of positions to be located), and the position and attitude parameters corresponding to the calibration-point pictures and the pictures of the positions to be located.
  • an automated guided vehicle equipped with the apparatus of the present invention can be used to scan the site to obtain pictures of the positions to be located, pictures of the calibration points, and the position and attitude parameters corresponding to both kinds of pictures.
  • the position to be positioned here can be determined according to the actual working conditions, for example, the position that the automatic guided vehicle needs to reach.
  • the position parameter is, for example, the abscissa and ordinate of a picture at a certain calibration point or position to be located in the physical coordinate system (i.e., a horizontal position, such as the coordinates of the picture center or of a certain picture corner), or alternatively the horizontal and longitudinal distances relative to a certain base point; the attitude parameter is, for example, the angle of the acquired picture, e.g., the angle relative to the horizontal or vertical axis (i.e., the heading angle).
  • parameters such as the pitch angle, roll angle, and vertical height corresponding to the picture (that is, the pitch angle, roll angle, vertical height, and so on of the automated guided vehicle when the photo was taken) may also be acquired.
  • the above data can be provided by an inertial navigation measuring device mounted on the automated guided vehicle of the present invention.
  • the inertial navigation measuring device includes, for example, a wheel encoder, an accelerometer (1 to 3 axes), a gyroscope (1 to 3 axes), a magnetic flux sensor (1 to 3 axes), and a barometric pressure sensor, and feeds back the heading angle, pitch angle, roll angle, horizontal position, and vertical position. From the data of the wheel encoder, accelerometer, gyroscope, magnetic flux sensor, and air pressure sensor, the heading angle (i.e., the angle of the picture relative to the horizontal or vertical axis), the pitch angle, the roll angle, the horizontal position, and the vertical position can be obtained by calculation.
  • These data are superimposed onto the picture to form the seven-tuple data combination (picture, heading angle (i.e., picture angle), pitch angle, roll angle, horizontal position (i.e., x-axis abscissa and y-axis ordinate), vertical position, whether it is a calibration point), as shown in Figure 4; this is referred to in the system as a connection point and serves as input for subsequent mapping.
  • the connection point need not contain all of these data; for example, a quadruple data combination (picture, heading angle, horizontal position, whether it is a calibration point) is sufficient to achieve the object of the present invention.
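The connection-point record described above might be sketched as follows; the field names are illustrative assumptions, and the optional fields allow the minimal quadruple as well as the full seven-tuple:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ConnectionPoint:
    """One connection point: a picture plus its pose, and a flag
    recording whether it corresponds to a calibration point."""
    picture: Any                    # image data
    heading: float                  # heading angle (picture angle)
    x: float                        # horizontal position, abscissa
    y: float                        # horizontal position, ordinate
    is_calibration: bool            # whether this is a calibration point
    pitch: Optional[float] = None   # optional: pitch angle
    roll: Optional[float] = None    # optional: roll angle
    z: Optional[float] = None       # optional: vertical position
```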
  • As for the calibration point, it represents a point whose coordinates have been precisely determined, such as points A, B, and C in Figure 3; the coordinates of these points are confirmed, artificially defined, and known a priori.
  • An example of a calibration point is shown in Figure 5: calibration point A has logical coordinates (5, 8) and physical coordinates (3.75, 4.10).
  • the calibration point is not limited to having both logical coordinates and physical coordinates.
  • a variety of means can be employed to identify and validate calibration points. For example, a cross line with the position marked on it may appear in the image, so that the calibration point and its position coordinates can be recognized after the image is collected. Alternatively, encoded information such as a barcode or a two-dimensional code may be present; after image acquisition the program decodes it, and the decoded content gives the position coordinates of the calibration point.
  • the position parameter of a calibration-point picture is the position parameter of the calibration point itself, not a value measured by the inertial navigation measuring device.
  • In step S103, the position and attitude parameters of the pictures of the positions to be located are corrected based on the pictures of the calibration points, the pictures of the positions to be located, and the position and attitude parameters.
  • the position and attitude parameters are obtained by measurement, for example by the inertial navigation measuring device; under field working conditions there are measurement errors, and further correction is needed to improve accuracy.
  • the picture of the calibration point can be used as a good benchmark to correct the position and posture parameters of the picture to be located.
  • One embodiment of step S103 is described below with reference to FIG. 6.
  • each connection point includes either the seven-tuple data combination (picture, heading angle (i.e., picture angle), pitch angle, roll angle, horizontal position, vertical position, whether it is a calibration point) or the quadruple data combination (picture, heading angle, horizontal position, whether it is a calibration point).
  • These connection points are used to construct the set of connection points.
  • As for the parameter “whether it is a calibration point”: if a calibration point appears in the picture and the a priori position parameter of the calibration point is obtained normally, the parameter is “is a calibration point”; otherwise the parameter is “not a calibration point”. It can also be represented by a logic 0 or 1.
  • a connection set is established and output based on the set of connection points.
  • the pairing principle is, for example, that the distance between the positions of the two pictures does not exceed a predetermined value, such as 50%, 30%, or 20% of the picture length or width.
  • For example, if the horizontal position of connection point A is (0, 0) and the horizontal position of connection point B is (5, 0), the distance from A to B is 5; if the picture size is 10*10, then A and B satisfy the criterion of not exceeding 50% of the picture size and may constitute a pair.
  • such a pair is called a connection; each connection includes two connection points, and the set of all connections that can be formed is called, in this system, the connection set.
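A minimal sketch of the pairing step above, using a fraction of the picture size as the predetermined distance; names and the data layout are assumptions:

```python
import itertools
import math

def build_connections(points, picture_size, frac=0.5):
    """points: dict id -> (x, y) horizontal position of each connection point.
    Pair every two connection points whose distance does not exceed
    `frac` (e.g. 50%) of the picture size; each pair is one connection."""
    limit = frac * picture_size
    return [
        (a, b)
        for a, b in itertools.combinations(points, 2)
        if math.dist(points[a], points[b]) <= limit
    ]
```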
  • In each connection, connection point A is referred to as the reference point and connection point B as the adjacent point. Taking the reference point as the origin and the adjacent point as the offset, and taking the reference-point picture and the adjacent-point picture as input, the phase correlation method is performed, for example, to obtain the connection confidence (conf, characterizing the similarity between the two pictures), the x-direction relative displacement (delta_x), the y-direction relative displacement (delta_y), and the relative rotation angle (theta).
  • the connection confidence is an output of the phase correlation method. It is computed from the sharpness of the peak of the phase correlation result, or from the distribution near the peak: the cross-correlation of the two pictures is calculated via the cross-power spectrum, yielding the cross-correlation level under different displacements. Assuming the cross-correlation level obeys a normal distribution, the parameters of that distribution can be estimated statistically, and dividing the maximum cross-correlation value by the estimated distribution parameter yields the connection confidence.
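The translation part of the phase correlation step can be sketched with numpy as below; rotation estimation and the normal-distribution confidence model are omitted, and a simple peak-sharpness ratio stands in for the confidence (the function name and the exact confidence measure are assumptions):

```python
import numpy as np

def phase_correlate(ref, adj):
    """Estimate the (dy, dx) circular shift of `adj` relative to `ref`,
    plus a peak-sharpness confidence, via the cross-power spectrum."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(adj)
    cross /= np.abs(cross) + 1e-12             # keep phase only
    corr = np.real(np.fft.ifft2(cross))        # phase-correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices into the signed shift range [-N/2, N/2)
    dy, dx = (int(p) if p <= s // 2 else int(p) - s
              for p, s in zip(peak, corr.shape))
    conf = corr.max() / (np.abs(corr).mean() + 1e-12)  # peak sharpness
    return dy, dx, conf
```

For two pictures with genuine overlap the correlation surface has a single sharp peak, so this ratio is large; for unrelated pictures the surface is flat and the ratio is small.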
  • the construction connection set does not contain any connection in which both points are calibration points.
  • In Figure 7, the gray area is picture A and the green area is picture B; the figure illustrates the overlapping area of the two pictures, whose coincidence is calculated by phase correlation.
  • the cross-correlation result calculated from the two pictures A and B in Fig. 7 is: confidence 131.542, relative displacement 33.4 in the x direction, relative displacement 10.7 in the y direction, and a rotation of 0.3 degrees.
  • Figure 8 shows a schematic of the connection including the reference point and the adjacency point.
  • In step S1034, a gradient descent method is performed on the construction connection set, and the position and attitude parameters of the pictures of the positions to be located are corrected.
  • the abscissa, ordinate, and angle of a calibration-point picture are kept unchanged; the gradient adjustment applies only to the non-calibration-point picture parameters, and calibration-point pictures can be regarded as constants.
  • the construction connection set can be defined so as not to contain connections in which both points are calibration points, because adjusting such a connection is meaningless: calibration points should not be adjusted, and no gradient is solved for them.
  • the optimization function is as shown in Equation 1, for example:
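Equation 1 itself does not survive in this text (it was likely rendered as an image). From the term definitions that follow, a plausible reconstruction, stated here as an assumption rather than the patent's exact formula, is a weighted least-squares objective over the construction connection set:

```latex
\min \sum_{i=1}^{N} \Big[
    \lambda_1\, f_\theta(A_i, B_i)\, v_\theta \big(g_\theta(A_i, B_i) - u_\theta(R_i)\big)^2
  + \lambda_2\, f_x(A_i, B_i)\, v_x \big(g_x(A_i, B_i) - u_x(R_i)\big)^2
  + \lambda_3\, f_y(A_i, B_i)\, v_y \big(g_y(A_i, B_i) - u_y(R_i)\big)^2
\Big]
```

Each term penalizes the disagreement between the inertial-navigation difference (g) and the phase-correlation measurement (u) for angle, x, and y respectively, with the weights defined below.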
  • N indicates that the construction connection set contains N connections in total
  • i denotes the i-th connection in the construction connection set
  • A_i denotes the reference point of the i-th connection
  • B_i denotes the adjacent point of the i-th connection
  • R_i denotes the cross-correlation result of the i-th connection
  • g_θ(A_i, B_i) can be understood as the angular difference between the reference point and the adjacent point as measured by the inertial navigation measurement component
  • g_θ(A_i, B_i) − u_θ(R_i) can be understood as the difference between that angular difference and the relative rotation angle in the cross-correlation result (the rotation angle in the cross-correlation result is the theta calculated by the phase correlation method; this value characterizes how much the adjacent-point picture must rotate to align with the reference-point picture)
  • f_θ is the weight applied to the cross-correlation result angle. For a connection between two uncalibrated points, the degrees of change of the two ends should be equal; but between a calibration point and a non-calibration point they are not equal, the non-calibration point changing significantly more than the calibration point, which is controlled through the weights. The weights can be assigned according to the actual situation: for a connection of two non-calibration points, the weight can be taken as 1 and both ends adjusted equally; for a connection of a calibration point and a non-calibration point, the weight can also be 1, because the calibration point is constant and does not participate in the gradient calculation, so its gradient can be considered constant. If fine-tuning of the calibration point is desired, the weight ratio of the connection between the calibration point and the non-calibration point can be as high as 99 to 1.
  • g_x(A_i, B_i) can be understood as the x-direction coordinate difference between the reference point and the adjacent point under the inertial navigation measurement component
  • g_x(A_i, B_i) − u_x(R_i) can be understood as the difference between the x-direction coordinate difference under the inertial navigation measurement component and the x-direction relative displacement in the cross-correlation result (the x-direction relative displacement in the cross-correlation result is the delta_x calculated by the phase correlation method; this value characterizes how far the adjacent-point picture must translate in the x direction to align with the reference-point picture)
  • f_x is the x-axis weight function, used to give different connection-point properties (e.g., calibration points and non-calibration points) different weights in the map iteration (as an example, the weight of a calibration point is usually relatively large, e.g., 1000, and the weight of a non-calibration point relatively small, e.g., 1)
  • v_x is the adjustment weight of the x-axis relative displacement in the cross-correlation result and can, for example, take the value 1.
  • g y (A i , B i ) can be understood as the y-direction coordinate difference between the reference point and the adjacent point under the inertial navigation measurement assembly;
  • g y (A i , B i ) - u y (R i ) can be understood as the difference between that coordinate difference and the y-direction relative displacement in the cross-correlation result (the y-direction relative displacement in the cross-correlation result is the delta_y calculated by the phase correlation method; this value characterizes how far the adjacent-point picture must translate in the y direction to align with the reference-point picture);
  • f y is the y-axis weight function, used to express that during y-coordinate fitting, different connection-point properties (e.g. calibration points and non-calibration points) have different weights in the map iteration (as an example, the weight of a calibration point is usually large, for example 1000, and that of a non-calibration point small, for example 1);
  • v y is the adjustment weight of the cross-correlation result relative to the relative displacement on the y axis, and can, for example, take the value 1.
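As an illustration of the delta_x and delta_y terms above, the translation between two pictures can be estimated from the cross-power spectrum, as the phase correlation discussion in this document describes. The following is a minimal sketch under stated assumptions: rotation (theta) estimation is omitted, and the function name and the peak-sharpness confidence formula are illustrative assumptions, not the patent's exact implementation.

```python
import numpy as np

def phase_correlation(base, neighbor):
    """Estimate the (delta_x, delta_y) shift aligning `neighbor` to `base`,
    plus a confidence score, via the normalized cross-power spectrum.
    Rotation estimation is omitted in this sketch."""
    Fa = np.fft.fft2(base)
    Fb = np.fft.fft2(neighbor)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12     # keep phase, drop magnitude
    corr = np.real(np.fft.ifft2(cross_power))      # correlation surface with a peak
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # confidence: peak height measured against the surface's spread,
    # assuming the off-peak values are roughly normally distributed
    conf = (corr.max() - corr.mean()) / (corr.std() + 1e-12)
    # shifts past half the image size wrap around and are negative
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return shifts[1], shifts[0], conf              # delta_x, delta_y, conf
```

For two identical pictures offset by a pure circular shift, the surface is a single sharp peak, so the confidence is high; unrelated pictures give a flat surface and a low confidence, which is the basis of the threshold filtering described in the text.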
  • λ 1 , λ 2 , and λ 3 represent the weights of the changes of theta, x, and y in the final fitting result; some scenes are sensitive to changes of theta, in which case λ 1 can be increased.
  • According to a preferred embodiment, λ 1 , λ 2 , and λ 3 are all 1.
  • The independent variables of Equation 1 are the position and attitude parameters of the non-calibration pictures. By differentiating Equation 1 with respect to each independent variable, the descent direction of each variable is obtained, that is, a set of gradients used for gradient descent.
  • In the initialization step of the gradient descent method, the position and attitude parameters annotated by inertial navigation are taken as the initial positions of the pictures.
  • The inputs of the gradient descent method are the previous iteration set, the gradient, and the step size, where the gradient is obtained by differentiating Equation 1,
  • the initial iteration set is assigned, for example, from the position and attitude parameters annotated by inertial navigation,
  • and the step size is fixed or variable.
  • After the gradient and the initial iteration set are determined, a descent of the step length is taken in the gradient direction to optimize Equation 1.
  • The step-size algorithm can be customized as needed; this system preferably performs gradient descent with a fixed step size.
  • This is repeated until the iteration change rate is smaller than a set threshold; this system sets the threshold, for example, to 0.1%.
  • The change rate is, for example, the difference between the previously computed value and the value computed in this iteration, divided by the previous value.
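The fixed-step loop with a relative-change stopping criterion described above can be sketched as follows. This is a sketch under stated assumptions: a toy quadratic stands in for Equation 1, and the function names are illustrative, not from the patent.

```python
import numpy as np

def gradient_descent(grad_fn, loss_fn, x0, step=0.1, threshold=1e-3, max_iter=10000):
    """Fixed-step gradient descent that stops when the relative change of the
    objective between iterations drops below `threshold` (0.1% in the text).
    In the mapping method, `x0` would hold the inertial-navigation poses of
    the non-calibration pictures."""
    x = np.asarray(x0, dtype=float)
    prev = loss_fn(x)
    for _ in range(max_iter):
        x = x - step * grad_fn(x)                  # move against the gradient
        cur = loss_fn(x)
        if abs(prev - cur) / max(abs(prev), 1e-12) < threshold:
            break
        prev = cur
    return x

# toy stand-in objective with its minimum at (2, 2)
loss = lambda x: float(np.sum((x - 2.0) ** 2))
grad = lambda x: 2.0 * (x - 2.0)
result = gradient_descent(grad, loss, [0.0, 5.0])
```

In the actual method, `grad_fn` would be the derivative of Equation 1 with respect to each non-calibration pose, while calibration-point poses stay constant.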
  • Finally, the physical coordinates and attitude parameters of the base point (for example, the center point) of each picture are obtained as the corrected position and attitude parameters of the positions to be located. Note that during the gradient descent on the mapping connection set, the position and attitude parameters of calibration-point pictures are not changed.
  • The gradient descent described above uses the x-axis coordinates, the y-axis coordinates, and the heading angle of each picture.
  • According to a preferred embodiment, the vertical coordinate, pitch angle, and roll angle corresponding to the picture may also be included, which is especially helpful when the site is uneven. These are all within the scope of the invention.
  • According to a preferred embodiment, multiple picture acquisitions are performed for some or all of the calibration points, and the position and attitude parameters corresponding to each acquisition are obtained; capturing the calibration-point pictures multiple times makes the iteration result more accurate and increases the number of connections.
  • The method further includes: storing the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the calibration-point pictures, and the corrected
  • position and attitude parameters of the pictures of the positions to be located into a database or file, establishing the map.
  • According to a preferred embodiment, the connection set and/or the mapping connection set are also stored in the database or file as part of the map.
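A minimal sketch of the file-based storage described above. The field names and JSON layout are illustrative assumptions; the patent does not specify a storage schema.

```python
import json

def save_map(path, coordinate_system, connection_points, connections):
    """Persist the map pieces named in the text: the coordinate system,
    the per-picture poses (connection points), and the connection sets."""
    payload = {
        "coordinate_system": coordinate_system,
        "connection_points": connection_points,
        "connections": connections,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f)

def load_map(path):
    """Load a previously stored map, e.g. in step S201 of positioning."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

A database table per element (pictures, poses, connections) would serve equally well; the text allows either a database or a file.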
  • Figure 9 shows an illustration of a map established in accordance with the present invention.
  • Preferably, after the iterated map is additionally manually verified and fine-tuned, a stable mapping between the physical coordinate system and the logical coordinate system is completed for subsequent positioning.
  • The automated guided vehicle 10 includes: a base 6; a light-emitting device 5-2 mounted on the base and configured to illuminate the area under the base; a camera 5-3 mounted on the base and configured to capture pictures of the area under the base, for example pictures of the area illuminated by the light-emitting device; and a measurement assembly 3 mounted on the base and configured to measure or calculate the position and attitude parameters of the automated guided vehicle corresponding to the pictures.
  • The driving wheel 1 is mounted on the base 6 and includes a motor, a reducer, and an encoder: the motor provides the driving force, the reducer amplifies it, and the encoder obtains the rotation angle of the motor, from which the horizontal position of the automated guided vehicle or the driving wheel can be obtained.
  • The driving wheel 2 cooperates with the driving wheel 1 to complete motion control.
  • the measuring component 3 is, for example, an inertial navigation measuring device, which can provide one or several of instantaneous speed, instantaneous angle, instantaneous position, such as abscissa, ordinate, vertical coordinate, heading angle, pitch angle and roll angle.
  • the encoder of the driving wheel may also be part of the measuring component 3.
  • The control device 4 is mounted on the base 6 and coupled to the measurement assembly 3 and the camera 5-3.
  • The control device 4 is configured to control the vehicle to travel to the calibration points and the positions to be located to capture their pictures, and to synchronize the camera 5-3 and the measurement assembly 3,
  • so that while the camera captures a picture, the measurement assembly 3 measures the position and attitude parameters of the vehicle, that is, the position and attitude parameters corresponding to the picture are obtained.
  • The camera 5-3 is, for example, a downward-looking camera and, together with the light-emitting device 5-2 and the hood 5-1, forms the image capturing device 5, where the camera 5-3 acquires images of the area under the vehicle and the light-emitting device 5-2 is mounted on the base to illuminate the shooting area of the downward-looking camera.
  • The hood 5-1 is mounted on the base to soften the light of the light-emitting device and prevent reflections.
  • the illumination device is preferably mounted around the hood.
  • The automated guided vehicle 10 further comprises a processing device (not shown) coupled to the camera 5-3 and the measurement assembly 3 to receive the pictures captured by the camera and the position and attitude parameters measured by the measurement assembly, and to correct, based on the pictures and the position and attitude parameters, the position and attitude parameters of the pictures of the positions to be located.
  • The processing device may be integrated into the automated guided vehicle 10, or physically separate from the automated guided vehicle, communicating with the other components by wire or wirelessly. These are all within the scope of the invention.
  • The processing device corrects the position and attitude parameters of the pictures of the positions to be located by the following method:
  • constructing a set of connection points, each connection point including one picture, the position and attitude parameters corresponding to that picture, and whether the picture corresponds to a calibration point;
  • from the set of connection points, taking two connection points whose distance does not exceed a predetermined value as a connection, establishing a set of connections;
  • for the two connection points included in each connection in the set, calculating the connection confidence between them and filtering out those connections whose confidence is above a predetermined threshold, as the mapping connection set;
  • performing the gradient descent method on the mapping connection set until the iteration change rate falls below a predetermined threshold, where in the initialization step of the gradient descent method the position and attitude parameters of the pictures of non-calibration connection points are taken as the initial iteration parameters of the gradient descent method.
  • The specific calculation process is shown in Equations 1-7.
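The pairing step in the method above (two connection points within a predetermined distance form a connection) can be sketched as follows. The dict field names are illustrative assumptions, and skipping calibration-calibration pairs follows the embodiment in which the mapping connection set contains no connection between two calibration points.

```python
import itertools
import math

def build_connections(points, max_dist):
    """Pair connection points whose annotated positions lie within
    `max_dist` (e.g. half the picture length or width, per the text).
    Each point is a dict with 'x', 'y', and 'is_calibration'.
    Pairs of two calibration points are skipped, since calibration
    poses are constants in the optimization."""
    connections = []
    for a, b in itertools.combinations(points, 2):
        if a["is_calibration"] and b["is_calibration"]:
            continue
        if math.hypot(a["x"] - b["x"], a["y"] - b["y"]) <= max_dist:
            connections.append((a, b))
    return connections
```

Each retained pair would then be passed to the phase correlation step to compute its cross-correlation result and confidence.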
  • The measuring component is an inertial navigation measuring component; the position parameters include an abscissa and an ordinate, preferably a vertical coordinate, and the attitude parameters include a heading angle, preferably a pitch angle and a roll angle.
  • the measuring component comprises a laser SLAM measuring device and/or a visual SLAM measuring device.
  • The processing device is configured to store the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the calibration-point pictures, and
  • the corrected position and attitude parameters of the pictures of the positions to be located into a database or file, establishing the map.
  • The present invention also provides an image acquisition and processing system comprising: an automated guided vehicle as described above; and a processing device in communication with the camera and the measurement assembly and configured to correct the position and attitude parameters of the pictures based on the pictures, the position parameters, and the attitude parameters.
  • The processing device is, for example, not arranged on the automated guided vehicle.
  • processing device is configured, for example, to perform the mapping method 100 as described above.
  • The present invention also provides a mapping and positioning system for an automated guided vehicle, comprising: a camera arranged to capture images under the automated guided vehicle; a light-emitting device configured to illuminate the area under the automated guided vehicle; an inertial navigation measurement assembly configured to measure the position and attitude parameters of the automated guided vehicle; and a processing device, the camera and the inertial navigation measurement assembly both being coupled to the processing device, which corrects the position and attitude parameters of the pictures based on the images, the position parameters, and the attitude parameters.
  • processing device is configured, for example, to perform the mapping method 100 as described above.
  • The present invention also provides an apparatus for mapping a site, comprising: means configured to establish or acquire a coordinate system of the site; means configured to scan the site and acquire pictures of calibration points, pictures of a plurality of positions to be located, and the position and attitude parameters corresponding to the pictures; and means configured to correct, based on the pictures, the position parameters, and the attitude parameters, the position and attitude parameters of the pictures of the positions to be located.
  • Based on the map established by method 100, the present invention also provides a positioning method 200.
  • A positioning method 200 in accordance with the present invention is described below with reference to Figure 11.
  • In step S201, a map obtained by the method 100 of the present invention is loaded or obtained, for example by loading or reading a map file or database.
  • In step S202, a picture of the position to be located and the position and attitude parameters corresponding to that picture are acquired or obtained. For example, during operation of the AGV, the position and attitude parameters corresponding to the picture are measured while the picture is captured.
  • In step S203, the picture closest to the picture of the position to be located is retrieved from the map.
  • The positioning method 200 further includes: using the phase correlation method to calculate the confidence, position-parameter offset, and attitude-parameter offset between the picture of the position to be located and the closest picture.
  • When the confidence calculated by the phase correlation method is below a preset value, the closest picture is discarded, and the picture closest to the position to be located (excluding the discarded pictures) with a confidence above the preset value is retrieved again.
  • When such a picture is found, the position parameters of the picture to be located are obtained from the position of the retrieved picture plus the offsets from the phase correlation method, and the positioning of the device is then updated, that is, positioning succeeds. After successful positioning, the next retrieval starts from this position.
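The retrieval-with-fallback loop of positioning method 200 can be sketched as follows. `match_fn` stands in for the phase correlation step, and the function and field names are illustrative assumptions rather than the patent's implementation.

```python
import math

def locate(map_entries, query_pose, match_fn, conf_threshold=10.0):
    """Find the stored picture closest to the dead-reckoned pose, discard
    candidates whose match confidence is below the threshold, and return
    the corrected position. `map_entries` is a list of dicts with 'x' and
    'y'; `match_fn(entry, query_pose)` returns (conf, dx, dy)."""
    candidates = sorted(
        map_entries,
        key=lambda e: math.hypot(e["x"] - query_pose[0], e["y"] - query_pose[1]),
    )
    for entry in candidates:       # nearest first; fall back on low confidence
        conf, dx, dy = match_fn(entry, query_pose)
        if conf >= conf_threshold:
            return entry["x"] + dx, entry["y"] + dy
    return None                    # positioning failed
```

The returned position would then seed the next retrieval, as the text notes that after successful positioning the next search starts from this position.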
  • Figure 12 is a block diagram of a computer program product 900 arranged in accordance with at least some embodiments of the present invention.
  • The signal bearing medium 902 can be implemented as or include a computer readable medium 906, a computer recordable medium 908, a computer communication medium 910, or a combination thereof, storing programming instructions 904 that can configure a processing unit to perform all or some of the previously described processes.
  • The instructions may include, for example, one or more executable instructions that cause one or more processors to: establish or acquire a coordinate system of the site; scan the site to acquire pictures of calibration points, pictures of positions to be located, and position and attitude parameters corresponding to the pictures; and correct the position and attitude parameters of the pictures of the positions to be located based on the pictures of the calibration points, the pictures of the positions to be located, the position parameters, and the attitude parameters.
  • Computer programs can be implemented as one or more programs running on one or more processors (e.g., one or more programs running on one or more microprocessors), implemented as firmware, or implemented as almost any combination thereof, and in accordance with the present disclosure, designing the circuitry and/or writing the code for the software and/or firmware would be within the skill of one skilled in the art. For example, if the user determines that speed and accuracy are paramount, the user may select a mainly hardware and/or firmware medium; if flexibility is paramount, the user may select a mainly software implementation; or, yet again alternatively, the user may select some combination of hardware, software, and/or firmware.
  • signal bearing media include, but are not limited to, the following: recordable type media, such as floppy disks, hard drives, compact discs (CDs), digital video discs (DVDs), digital tapes, computer memories, and the like; and transmission type media, such as Digital and/or analog communication media (eg, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • A typical data processing system generally includes one or more of the following: a system unit housing; a video display device; memory, such as volatile and non-volatile memory; processors, such as microprocessors and digital signal processors; computational entities, such as operating systems, drivers, graphical user interfaces, and application programs; one or more interactive devices, such as a trackpad or touchscreen; and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity, and control motors for moving and/or adjusting components and/or quantities).
  • a typical data processing system may be implemented using any suitable commercially available components, such as those commonly found in data computing/communication and/or network computing/communication systems.


Abstract

A method for mapping a site, comprising: establishing or acquiring a coordinate system of the site (S101); scanning the site to acquire pictures of calibration points, pictures of positions to be located, and position parameters and attitude parameters corresponding to the pictures (S102); and correcting the position and attitude parameters of the pictures of the positions to be located based on the pictures of the calibration points, the pictures of the positions to be located, the position parameters and the attitude parameters (S103).

Description

Mapping method, image acquisition and processing system, and positioning method

TECHNICAL FIELD

The present invention generally relates to the field of intelligent warehousing, and in particular to a mapping method, an image acquisition and processing system, and a positioning method usable for intelligent warehousing.
BACKGROUND

In existing intelligent warehouses, the positioning site often needs to be measured to establish a physical coordinate system. The physical coordinate system uses common distance units as its units of measurement, such as meters, decimeters and centimeters, and allows description in integer, decimal and fractional form, for example 1 meter, 1 decimeter, 1 centimeter, 0.55 meter, 0.2 decimeter, 1.4 centimeters, half a meter, and so on. The directions of the coordinate system are generally parallel to the building walls, or parallel to the north-south and east-west directions.

The automated guided vehicles (AGVs) that transport goods in an intelligent warehouse frequently need to be positioned precisely. Existing positioning methods, however, usually do not reach the precision the work requires, especially when the position and attitude parameters of the AGV must be determined accurately, which greatly hinders the operators' operation and control.

Therefore, there is an urgent need in the prior art for a method and an apparatus capable of more precise mapping and positioning.

The content of this Background section is merely technology known to the inventors and does not necessarily represent prior art in the field.
SUMMARY OF THE INVENTION

In view of one or more of the problems in the prior art, the present invention provides a method for mapping a site, comprising: establishing or acquiring a coordinate system of the site; scanning the site to acquire pictures of calibration points, pictures of positions to be located, and position parameters and attitude parameters corresponding to the pictures; and correcting the position and attitude parameters of the pictures of the positions to be located based on the pictures of the calibration points, the pictures of the positions to be located, the position parameters and the attitude parameters.

According to one aspect of the invention, the position parameters include an abscissa and an ordinate, preferably a vertical coordinate, and the attitude parameters include a heading angle, preferably a pitch angle and a roll angle.

According to one aspect of the invention, the correcting step includes: constructing a set of connection points, each connection point including one picture, the position and attitude parameters corresponding to that picture, and whether the picture corresponds to a calibration point; and correcting the position and attitude parameters of the pictures of the positions to be located based on the set of connection points.

According to one aspect of the invention, the correcting step includes: from the set of connection points, taking two connection points whose distance does not exceed a predetermined value as a connection, establishing a set of connections; for the two connection points included in each connection in the set, calculating the connection confidence between them and filtering out those connections whose confidence is above a predetermined threshold, as a mapping connection set; and correcting the position and attitude parameters of the pictures of the positions to be located based on the mapping connection set.

According to one aspect of the invention, the correcting step further includes: performing a gradient descent method on the mapping connection set, wherein in the initialization step of the gradient descent method the position and attitude parameters of the pictures of non-calibration connection points are taken as the initial iteration parameters of the gradient descent method.

According to one aspect of the invention, the correcting step further includes: performing the gradient descent method until the iteration change rate is below a predetermined threshold.

According to one aspect of the invention, multiple picture acquisitions are performed for some or all of the calibration points, and the position and attitude parameters corresponding to each acquisition are obtained.

According to one aspect of the invention, the method further includes: storing the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the pictures of the calibration points, and the corrected position and attitude parameters of the pictures of the positions to be located into a database or file, establishing a map library.

According to one aspect of the invention, the coordinate system is a physical coordinate system.

According to one aspect of the invention, the predetermined value is half the length or width of the picture.
The present invention also provides an automated guided vehicle for image acquisition, comprising: a base; a camera mounted on the base and configured to capture pictures of the area under the base; and a measurement assembly mounted on the base and configured to measure or calculate the position and attitude parameters of the automated guided vehicle corresponding to the pictures.

According to one aspect of the invention, the automated guided vehicle further includes a light-emitting device mounted on the base and configured to illuminate the area under the base for the camera to capture pictures.

According to one aspect of the invention, the automated guided vehicle further includes a control device mounted on the base, the camera and the measurement assembly both being coupled to the control device, the control device being configured to control the vehicle to travel to the calibration points and the positions to be located to capture pictures of the calibration points and of the positions to be located.

According to one aspect of the invention, the automated guided vehicle further includes a processing device coupled to the camera and the measurement assembly, which corrects the position and attitude parameters of the pictures of the positions to be located based on the pictures, the position parameters and the attitude parameters.

According to one aspect of the invention, the processing device corrects the position and attitude parameters of the pictures of the positions to be located by: constructing a set of connection points, each connection point including one picture, the position and attitude parameters corresponding to that picture, and whether the picture corresponds to a calibration point; from the set of connection points, taking two connection points whose distance does not exceed a predetermined value as a connection, establishing a set of connections; for the two connection points included in each connection in the set, calculating the connection confidence between them and filtering out those connections whose confidence is above a predetermined threshold, as a mapping connection set; and performing the gradient descent method on the mapping connection set until the iteration change rate is below a predetermined threshold, wherein in the initialization step of the gradient descent method the position and attitude parameters of the pictures of non-calibration connection points are taken as the initial iteration parameters of the gradient descent method.

According to one aspect of the invention, the automated guided vehicle further includes a hood mounted on the base for softening the light emitted by the light-emitting device, the light-emitting device preferably being mounted around the hood.

According to one aspect of the invention, the measurement assembly is an inertial navigation measurement assembly.

According to one aspect of the invention, the position parameters include an abscissa and an ordinate, preferably a vertical coordinate, and the attitude parameters include a heading angle, preferably a pitch angle and a roll angle.

According to one aspect of the invention, the measurement assembly includes a laser SLAM measurement device and/or a visual SLAM measurement device.

According to one aspect of the invention, the processing device is configured to store the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the pictures of the calibration points, and the corrected position and attitude parameters of the pictures of the positions to be located into a database or file, establishing a map library.
The present invention also provides an image acquisition and processing system, comprising: an automated guided vehicle as described above; and a processing device coupled to the camera and the measurement assembly, which corrects the position and attitude parameters of the pictures based on the pictures, the position parameters and the attitude parameters.

According to one aspect of the invention, the processing device is configured to perform the mapping method described above.

The present invention also provides a mapping and positioning system for an automated guided vehicle, comprising: a camera arranged to capture images under the automated guided vehicle; a light-emitting device configured to illuminate the area under the automated guided vehicle; an inertial navigation measurement assembly configured to measure the position and attitude parameters of the automated guided vehicle; and a processing device, the camera and the inertial navigation measurement assembly both being coupled to the processing device, which is configured to correct the position and attitude parameters of the pictures based on the images, the position parameters and the attitude parameters.

According to one aspect of the invention, the processing device is configured to perform the mapping method according to any of claims 1-10.

The present invention also provides an apparatus for mapping a site, comprising: means configured to establish or acquire a coordinate system of the site; means configured to scan the site and acquire pictures of calibration points, pictures of a plurality of positions to be located, and position and attitude parameters corresponding to the pictures; and means configured to correct, based on the pictures, the position parameters and the attitude parameters, the position and attitude parameters of the pictures of the positions to be located.

The present invention also provides a positioning method, comprising: loading or obtaining a map obtained by any of the methods described above; acquiring or obtaining a picture of a position to be located and the position and attitude parameters corresponding to that picture; and retrieving, from the map, the picture closest to the picture of the position to be located.

According to one aspect of the invention, the positioning method further includes: using a phase correlation method to calculate the confidence, position-parameter offset and attitude-parameter offset between the picture of the position to be located and the closest picture.

According to one aspect of the invention, when the confidence calculated by the phase correlation method is below a preset value, the closest picture is discarded, and the picture closest to the position to be located with a confidence above the preset value is retrieved again.
BRIEF DESCRIPTION OF THE DRAWINGS

The drawings provide a further understanding of the invention and form part of the specification; together with the embodiments of the invention they serve to explain the invention and do not limit it. In the drawings:

Figure 1 is a flowchart of a mapping method according to an embodiment of the invention;

Figure 2 is a schematic diagram of physical coordinates according to an embodiment of the invention;

Figure 3 is a schematic diagram of logical coordinates according to an embodiment of the invention;

Figure 4 is a schematic diagram of a connection point according to an embodiment of the invention;

Figure 5 is a schematic diagram of a calibration point according to an embodiment of the invention;

Figure 6 is a flowchart of a method for correcting the position and attitude parameters of pictures of positions to be located according to an embodiment of the invention;

Figure 7 is an example of picture overlap calculated by the phase correlation method according to an embodiment of the invention;

Figure 8 is a schematic diagram of a connection according to an embodiment of the invention;

Figure 9 is a screenshot of the map after the mapping between the physical and logical coordinate systems is completed;

Figure 10 is a schematic diagram of an automated guided vehicle for image acquisition according to an embodiment of the invention;

Figure 11 is a flowchart of a positioning method according to an embodiment of the invention; and

Figure 12 is a block diagram of a computer program product according to an embodiment of the invention.
DETAILED DESCRIPTION

In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments can be modified in various ways without departing from the spirit or scope of the invention. The drawings and the description are therefore to be regarded as illustrative rather than restrictive.

In the description of the invention, it should be understood that orientation or positional terms such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" indicate orientations or positional relationships based on the drawings, are used only to facilitate and simplify the description of the invention, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be understood as limiting the invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of those features. In the description of the invention, "a plurality of" means two or more, unless otherwise explicitly and specifically defined.

In the description of the invention, it should be noted that, unless otherwise explicitly specified and defined, the terms "mounted", "connected" and "coupled" are to be understood broadly; for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical, or the parts may communicate with each other; it may be direct or indirect through an intermediate medium, and it may be an internal communication between two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meaning of the above terms in the invention can be understood according to the specific situation.

In the invention, unless otherwise explicitly specified and defined, a first feature being "on" or "under" a second feature may include the first and second features being in direct contact, or being in contact through another feature between them rather than in direct contact. Moreover, a first feature being "on", "above" or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. A first feature being "under", "below" or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.

The following disclosure provides many different embodiments or examples for implementing different structures of the invention. To simplify the disclosure of the invention, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the invention. In addition, the invention may repeat reference numerals and/or letters in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. Furthermore, the invention provides examples of various specific processes and materials, but those of ordinary skill in the art will appreciate the application of other processes and/or the use of other materials.

Preferred embodiments of the invention are described below with reference to the drawings; it should be understood that the preferred embodiments described here are only used to illustrate and explain the invention and are not used to limit it.
First, a mapping method 100 according to a first embodiment of the invention, usable for example to map a site, is described with reference to Figure 1.

In step S101, a coordinate system of the site is established or acquired. The coordinate system may be a physical coordinate system or a logical coordinate system; both are within the scope of the invention. The definition of a coordinate system usually includes the position of the origin, the directions of the X and Y axes, and so on.

For example, the site to be positioned can be measured and a physical coordinate system established. The physical coordinate system uses common distance units as its units of measurement, such as meters, decimeters and centimeters, and allows description in integer, decimal and fractional form, for example 1 meter, 1 decimeter, 1 centimeter, 0.55 meter, 0.2 decimeter, 1.4 centimeters, half a meter, and so on. The directions of the coordinate system are generally parallel to the building walls, or parallel to the north-south and east-west directions. A coordinate system established following these principles is called the physical coordinate system in this system, as shown in Figure 2.

A coordinate system set according to the actual business situation is called the logical coordinate system in this system. By way of example and not limitation, the logical coordinate system may differ from the physical coordinate system in that it is generally described in integers, such as (1,2) or (5,10); its axes do not necessarily coincide with those of the physical coordinate system; and its distance unit is not necessarily a common physical unit, but is defined according to the needs of actual operation. For example, for points A, B and C in Figure 3, the logical coordinates of B are (3,7), those of A are (3,8) and those of C are (4,7); taking the lower-left point as the origin and a spacing of 1.35 meters between logical positions, the physical coordinates of A are (4.05, 9.45). The logical and physical positions may thus coincide exactly, or there may be a conversion relationship between them. Logical positions exist to facilitate planning of the business logic and the mapping calculations. Taking shelf placement as an example, shelf positions are stored as logical coordinates, such as (3,7); using physical positions would lead to descriptions like (4.05, 9.45), which are inconvenient for operators to understand and use. When a physical position is needed, it can be obtained through the conversion relationship, generally by multiplying by a coefficient called the logical position spacing, which may differ in the X and Y directions. For example, if the shelves in the warehouse are 1.3 m x 1.3 m with a spacing of 0.05 m, the logical position spacing can be defined as 1.35 m; if the shelves are 1.2 m x 1.0 m, the logical position spacing can be defined as 1.25 m along the X axis and 1.05 m along the Y axis, so that a device requiring physical positioning can find the corresponding physical shelf position. The above is only a conventional conversion; more complex methods exist, such as coordinate-system rotation and nonlinear conversions, which are not detailed here for reasons of space. The above description of the logical coordinate system is exemplary rather than limiting: the logical coordinate system is a coordinate system set according to the actual business situation, and under the concept of the invention its position parameters are not limited to integers and may include decimals. All of this is within the scope of the invention. If a physical or logical coordinate system of the site has already been established, it can simply be acquired from the corresponding file or database. The physical coordinate system is used as an example below.
In step S102, the site is scanned to acquire pictures of calibration points (for the definition of calibration points, see below), pictures of positions to be located (preferably pictures of a plurality of positions to be located), and the position and attitude parameters corresponding to the calibration-point pictures and the pictures of the positions to be located.

For example, an automated guided vehicle carrying the apparatus of the invention (described below) can scan the site and acquire the pictures of the positions to be located, the calibration-point pictures, and the position and attitude parameters corresponding to both kinds of pictures. The positions to be located can be determined according to the actual working conditions, for example positions the automated guided vehicle needs to reach.

With reference to Figure 2 as an example, the position parameters are, for example, the abscissa and ordinate in the physical coordinate system of the picture taken at a calibration point or a position to be located (i.e. the horizontal position, such as the coordinates of the picture center or of one of its corners), or the horizontal and vertical distances relative to some base point. The attitude parameters are, for example, the angle of the acquired picture, such as its angle relative to the horizontal or vertical axis (i.e. the heading angle). According to a preferred embodiment, parameters such as the pitch angle, roll angle and vertical height corresponding to the picture (i.e. those of the automated guided vehicle at the moment of capture) can also be acquired. According to a preferred embodiment, an inertial navigation measurement device carried on the automated guided vehicle of the invention provides these data. The inertial navigation measurement device includes, for example, wheel encoders, an accelerometer (1-3 axes), a gyroscope (1-3 axes), a magnetic flux sensor (1-3 axes), a barometric sensor, and measurement devices capable of feeding back heading angle, pitch angle, roll angle, horizontal position and vertical position. From the data of the wheel encoders, accelerometer, gyroscope, magnetic flux sensor and barometric sensor, the heading angle (i.e. the angle of the picture relative to the horizontal or vertical axis), pitch angle, roll angle, horizontal position and vertical position can be computed. These data are attached to the picture to form a seven-element tuple (picture, heading angle (picture angle), pitch angle, roll angle, horizontal position (x and y coordinates), vertical position, whether it is a calibration point), as shown in Figure 4, called a connection point in this system and used as input for subsequent mapping. Of course, those skilled in the art will understand that a connection point need not contain all of these data; a four-element combination (picture, heading angle, horizontal position, whether it is a calibration point), for example, can achieve the purpose of the invention. Note that, according to a preferred embodiment, as many calibration-point pictures and corresponding position and attitude parameters as possible can be collected, which helps build a more accurate positioning map and achieve more precise positioning; during collection, the same area can be passed through and captured multiple times, which also makes the positioning map more accurate. The scope of the invention is of course not limited to coordinates in the physical coordinate system; coordinates in the logical coordinate system are also possible.

As for calibration points, they represent points whose coordinates have been precisely determined, such as points A, B and C marked in Figure 3; the coordinates of these points have been confirmed, are defined manually, and are a priori.

Figure 5 shows an example of a calibration point, namely calibration point A, whose logical coordinates are (5,8) and physical coordinates are (3.75, 4.10). In the invention, of course, a calibration point is not required to have both logical and physical coordinates. Various means can be used to recognize and confirm calibration points: one is a crosshair in the image with the position marked on it, so that the calibration point and its position coordinates can be recognized after image acquisition; another is encoded information, such as a barcode or QR code, which can be decoded by a program after image acquisition, the decoded content being the position coordinates of the calibration point. According to an embodiment of the invention, since the coordinates of a calibration point are confirmed in advance, in step S102 the position parameters of a calibration-point picture are those of the calibration point itself, rather than those measured by the inertial navigation measurement device.
In step S103, the position and attitude parameters of the pictures of the positions to be located are corrected based on the pictures of the calibration points, the pictures of the positions to be located, the position parameters and the attitude parameters.

For a picture of a position to be located, its position and attitude parameters are obtained by measurement, for example by an inertial navigation measurement device, and contain measurement errors under site conditions, so further correction is needed to improve precision. The calibration-point pictures serve as good references for correcting the position and attitude parameters of the pictures of the positions to be located.

An embodiment of step S103 is described below with reference to Figure 6.

In step S1031, a set of connection points is constructed. As described above, each connection point includes a seven-element tuple (picture, heading angle (picture angle), pitch angle, roll angle, horizontal position, vertical position, whether it is a calibration point), or a four-element combination (picture, heading angle, horizontal position, whether it is a calibration point). These connection points are used to construct the connection point set. Regarding the "whether it is a calibration point" parameter: if a calibration point appears in the picture and its a priori position parameters are obtained normally, the parameter is "calibration point"; otherwise it is "non-calibration point". It can also be represented by a logical 0 or 1.

In step S1032, a connection set is established and output based on the connection point set. The connection point set is input, and pairing is performed according to the horizontal positions (x and y coordinates) included in the connection points. The pairing principle is, for example, that the distance between the annotated positions of the two pictures does not exceed a predetermined value, for example 50%, 30% or 20% of the picture length or width. For instance, if connection point A has horizontal position (0,0) and connection point B has horizontal position (5,0), the distance from A to B is 5; if the picture size is 10 x 10, A and B meet the criterion of not exceeding 50% of the picture size and can form a pair. In this system such a combination is called a connection, and each connection includes two connection points. All connections that can be formed are output, which in this system is called the connection set.

In step S1033, the generated connection set is input. For each connection in the set, the two connection points A and B it contains are extracted. For convenience of description, A is called the reference point and B the adjacent point. With the reference point as origin and the adjacent point as offset, the reference-point picture and the adjacent-point picture are used as input of, for example, the phase correlation method, which yields the connection confidence (conf) (characterizing the similarity between the two), the relative displacement in the x direction (delta_x), the relative displacement in the y direction (delta_y), and the relative rotation angle (theta). In this system the 4-tuple (conf, delta_x, delta_y, theta) is called the cross-correlation result and is stored in the corresponding connection. Connections whose confidence is greater than a certain threshold (for example 10, which can be understood as meaning that the probability of the cross-correlation result occurring randomly is smaller than the probability represented by the 10-sigma position of a normal distribution) are retained, and the filtered new connection set containing cross-correlation results with confidence above the threshold is output, which in this system is called the mapping connection set. The connection confidence above is the output of the phase correlation method and is computed from the sharpness of the peak of the phase correlation values, that is, from the distribution near the peak: assuming the distribution is normal, the confidence can be computed once the peak and the mean are known. The cross-correlation result is computed from the correlation of the two pictures according to the phase correlation method above. The phase correlation method involves the computation of the cross-power spectrum; using the cross-power spectrum function, the cross-correlation level under different displacements can be obtained. Assuming that the cross-correlation level follows a normal distribution, the parameters of that distribution can be computed statistically, and dividing the maximum cross-correlation value by these parameters yields the connection confidence.

According to an embodiment, the mapping connection set contains no connection whose two points are both calibration points.

As shown in Figure 7, the gray area is picture A and the green area is picture B, illustrating the overlapping area of the two pictures; the overlap is computed by phase correlation. For example, the cross-correlation result computed for the two pictures A and B in Figure 7 is: confidence 131.542, x-direction relative displacement 33.4, y-direction relative displacement 10.7, rotation angle 0.3 degrees.

Figure 8 shows a schematic diagram of a connection, including a reference point and an adjacent point.

In step S1034, the gradient descent method is performed on the mapping connection set to correct the position and attitude parameters of the pictures of the positions to be located. According to an embodiment, the horizontal and vertical coordinates and the angle of the calibration-point pictures remain unchanged; the gradient adjustment takes the parameters of the non-calibration pictures as variables, and the calibration pictures can be regarded as constants. Alternatively, the mapping connection set can be defined to contain no connection whose two points are both calibration points, since adjusting such a connection is meaningless: calibration points should not be adjusted in the first place and are not solved for when computing the gradient. The optimization function is, for example, as shown in Equation 1:
Equation 1:
Figure PCTCN2019075741-appb-000001
Equation 2:
Figure PCTCN2019075741-appb-000002
Equation 3:
Figure PCTCN2019075741-appb-000003
Equation 4:
Figure PCTCN2019075741-appb-000004
Equation 5:
Figure PCTCN2019075741-appb-000005
Equation 6:
Figure PCTCN2019075741-appb-000006
Equation 7:
Figure PCTCN2019075741-appb-000007
where N denotes that the mapping connection set contains N connections in total, i denotes the i-th connection in the mapping connection set, A i denotes the reference point of the i-th connection, B i denotes the adjacent point of the i-th connection, and R i denotes the cross-correlation result of the i-th connection,
Figure PCTCN2019075741-appb-000008
denotes the heading angle of the reference point,
Figure PCTCN2019075741-appb-000009
denotes the heading angle of the adjacent point,
Figure PCTCN2019075741-appb-000010
denotes the relative rotation angle in the cross-correlation result. g θ(A i, B i) can be understood as the angle difference between the reference point and the adjacent point under the inertial navigation measurement assembly, and g θ(A i, B i)-u θ(R i) as the difference between that angle difference and the relative rotation angle in the cross-correlation result (the rotation angle in the cross-correlation result is the theta computed by the phase correlation method; this value characterizes how much the adjacent-point picture must rotate to become parallel to the reference-point picture). f θ is the heading-angle weight function, used to express that during heading-angle fitting, different connection-point properties (e.g. calibration points and non-calibration points) have different weights in the map iteration (as an example, the weight of a calibration point is usually large, for example 1000, and that of a non-calibration point small, for example 1). v θ is the weight function of the angle difference in the cross-correlation result, used to express the weight of different connection properties (e.g. a connection between two non-calibration points versus a connection between a calibration point and a non-calibration point) on the cross-correlation angle (for a connection between two non-calibration points the degrees of change should be equal, or equalized, because the two codes have equal status; between a calibration point and a non-calibration point the degrees of change are unequal, that of the non-calibration point being significantly greater, so this is controlled through the weights, which can be assigned according to the actual situation). According to a preferred embodiment, for a connection between two non-calibration points the weight can be taken as 1, adjusting both at the same level; for a connection between a calibration point and a non-calibration point the weight can also be taken as 1, because the calibration point is a constant that does not participate in the gradient computation, so its gradient can be considered unchanged. If fine-tuning of the calibration points is considered, the weight ratio for a connection between a calibration point and a non-calibration point can be as high as 99 to 1.

The remaining formulas are described analogously to the above: they compute the difference between the X-direction difference under the inertial navigation measurement assembly and the X-direction relative displacement in the cross-correlation result, and the difference between the Y-direction difference under the inertial navigation measurement assembly and the Y-direction relative displacement in the cross-correlation result. All of the above weight functions can be adjusted according to business conditions and algorithm adaptation.
Figure PCTCN2019075741-appb-000011
denotes the x coordinate of the reference point,
Figure PCTCN2019075741-appb-000012
denotes the x coordinate of the adjacent point,
Figure PCTCN2019075741-appb-000013
denotes the x-direction relative displacement in the cross-correlation result. g x(A i, B i) can be understood as the x-direction coordinate difference between the reference point and the adjacent point under the inertial navigation measurement assembly, and g x(A i, B i)-u x(R i) as the difference between that coordinate difference and the x-direction relative displacement in the cross-correlation result (the x-direction relative displacement in the cross-correlation result is the delta_x computed by the phase correlation method; this value characterizes how far the adjacent-point picture must translate in the x direction to align with the reference-point picture). f x is the x-axis weight function, used to express that during x-coordinate fitting, different connection-point properties (e.g. calibration points and non-calibration points) have different weights in the map iteration (as an example, the weight of a calibration point is usually large, for example 1000, and that of a non-calibration point small, for example 1); v x is the adjustment weight of the cross-correlation result relative to the relative displacement on the x axis, and can, for example, take the value 1.
Figure PCTCN2019075741-appb-000014
denotes the y coordinate of the reference point,
Figure PCTCN2019075741-appb-000015
denotes the y coordinate of the adjacent point,
Figure PCTCN2019075741-appb-000016
denotes the y-direction relative displacement in the cross-correlation result. g y(A i, B i) can be understood as the y-direction coordinate difference between the reference point and the adjacent point under the inertial navigation measurement assembly, and g y(A i, B i)-u y(R i) as the difference between that coordinate difference and the y-direction relative displacement in the cross-correlation result (the y-direction relative displacement in the cross-correlation result is the delta_y computed by the phase correlation method; this value characterizes how far the adjacent-point picture must translate in the y direction to align with the reference-point picture). f y is the y-axis weight function, used to express that during y-coordinate fitting, different connection-point properties (e.g. calibration points and non-calibration points) have different weights in the map iteration (as an example, the weight of a calibration point is usually large, for example 1000, and that of a non-calibration point small, for example 1); v y is the adjustment weight of the cross-correlation result relative to the relative displacement on the y axis, and can, for example, take the value 1.
λ 1, λ 2 and λ 3 denote the weights of the changes of theta, x and y in the final fitting result; some scenes are sensitive to changes of theta, in which case λ 1 can be increased. According to a preferred embodiment, λ 1, λ 2 and λ 3 are all 1.

The independent variables of Equation 1 are
Figure PCTCN2019075741-appb-000017
By differentiating Equation 1 with respect to each independent variable, the descent direction of each variable is obtained, that is, a set of gradients used for gradient descent.

In the initialization step of the gradient descent method, the position and attitude parameters annotated by inertial navigation are taken as the initial positions of the pictures. The inputs of the gradient descent method are the previous iteration set, the gradient and the step size, where the gradient is obtained by differentiating Equation 1, the initial iteration set is assigned, for example, from the position and attitude parameters annotated by inertial navigation, and the step size is fixed or variable.

After the gradient and the initial iteration set are determined, a descent of the step length is taken in the gradient direction to optimize Equation 1. The step-size algorithm can be customized as needed; this system preferably performs gradient descent with a fixed step size. This is repeated until the iteration change rate is smaller than a set threshold; this system sets the threshold, for example, to 0.1%. The change rate is, for example, the difference between the previously computed value and the value computed in this iteration, divided by the previous value. Finally, the physical coordinates and attitude parameters of the base point (for example, the center point) of each picture are obtained as the corrected position and attitude parameters of the positions to be located.
Note that during the gradient descent on the mapping connection set, the position and attitude parameters of calibration-point pictures are not changed.

The gradient descent described above uses the x coordinate, the y coordinate and the heading angle of each picture. According to a preferred embodiment, the vertical coordinate, pitch angle and roll angle corresponding to the pictures can also be included, which is very helpful especially when the site is uneven. All of this is within the scope of the invention.

According to a preferred embodiment, multiple picture acquisitions are performed for some or all of the calibration points, and the position and attitude parameters corresponding to each acquisition are obtained. Capturing the calibration-point pictures multiple times makes the iteration result more accurate and increases the number of connections.

According to a preferred embodiment, the method further includes: storing the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the calibration-point pictures, and the corrected position and attitude parameters of the pictures of the positions to be located into a database or file, establishing the map. According to a preferred embodiment, the connection set and/or the mapping connection set are also stored into the database or file as part of the map. Figure 9 shows an illustration of a map established according to the invention.

Preferably, after the iterated map is additionally manually verified and fine-tuned, the stable mapping between the physical coordinate system and the logical coordinate system is completed for subsequent positioning.
An automated guided vehicle 10 for image acquisition according to another embodiment of the invention is described below with reference to Figure 10, which shows the internal components of the automated guided vehicle 10, its housing and other parts being omitted for clarity. The automated guided vehicle 10 includes: a base 6; a light-emitting device 5-2 mounted on the base and configured to illuminate the area under the base; a camera 5-3 mounted on the base and configured to capture pictures of the area under the base, for example pictures of the area illuminated by the light-emitting device; and a measurement assembly 3 mounted on the base and configured to measure or calculate the position and attitude parameters of the automated guided vehicle corresponding to the pictures.

The driving wheel 1 is mounted on the base 6 and includes a motor, a reducer and an encoder: the motor provides the driving force, the reducer amplifies it, and the encoder obtains the rotation angle of the motor, from which the horizontal position of the automated guided vehicle or the driving wheel can be obtained. The driving wheel 2 cooperates with the driving wheel 1 to complete motion control. The measurement assembly 3 is, for example, an inertial navigation measurement device that can provide one or several of instantaneous speed, instantaneous angle and instantaneous position, such as the abscissa, ordinate, vertical coordinate, heading angle, pitch angle and roll angle. According to an embodiment of the invention, the encoder of the driving wheel may also be part of the measurement assembly 3. The control device 4 is mounted on the base 6 and coupled to the measurement assembly 3 and the camera 5-3. The control device 4 is configured to control the vehicle to travel to the calibration points and the positions to be located to capture their pictures, and to synchronize the camera 5-3 and the measurement assembly 3, so that while the camera captures a picture the measurement assembly 3 measures the position and attitude parameters of the vehicle, that is, the position and attitude parameters corresponding to the picture are obtained.

The camera 5-3 is, for example, a downward-looking camera and, together with the light-emitting device 5-2 and the hood 5-1, forms the image capturing device 5, where the camera 5-3 acquires images of the area under the automated guided vehicle and the light-emitting device 5-2 is mounted on the base to illuminate the shooting area of the downward-looking camera. The hood 5-1 is mounted on the base to soften the light of the light-emitting device and prevent reflections. The light-emitting device is preferably mounted around the hood.

According to a preferred embodiment, the automated guided vehicle 10 further includes a processing device (not shown) coupled to the camera 5-3 and the measurement assembly 3 to receive the pictures captured by the camera and the position and attitude parameters measured by the measurement assembly, and to correct, based on the pictures and the position and attitude parameters, the position and attitude parameters of the pictures of the positions to be located. Those skilled in the art will understand that the processing device may be integrated into the automated guided vehicle 10, or physically separate from the automated guided vehicle, communicating with the other components by wire or wirelessly. All of this is within the scope of the invention.
According to a preferred embodiment, the processing device corrects the position and attitude parameters of the pictures of the positions to be located by the following method:

constructing a set of connection points, each connection point including one picture, the position and attitude parameters corresponding to that picture, and whether the picture corresponds to a calibration point;

from the set of connection points, taking two connection points whose distance does not exceed a predetermined value as a connection, establishing a set of connections;

for the two connection points included in each connection in the set, calculating the connection confidence between them and filtering out those connections whose confidence is above a predetermined threshold, as the mapping connection set;

performing the gradient descent method on the mapping connection set until the iteration change rate falls below a predetermined threshold, wherein in the initialization step of the gradient descent method the position and attitude parameters of the pictures of non-calibration connection points are taken as the initial iteration parameters of the gradient descent method. The specific calculation process is shown in Equations 1-7.

According to a preferred embodiment, the measurement assembly is an inertial navigation measurement assembly, the position parameters include an abscissa and an ordinate, preferably a vertical coordinate, and the attitude parameters include a heading angle, preferably a pitch angle and a roll angle.

According to a preferred embodiment, the measurement assembly includes a laser SLAM measurement device and/or a visual SLAM measurement device.

According to a preferred embodiment, the processing device is configured to store the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position and attitude parameters of the calibration-point pictures, and the corrected position and attitude parameters of the pictures of the positions to be located into a database or file, establishing the map.
The present invention also provides an image acquisition and processing system, comprising: an automated guided vehicle as described above; and a processing device in communication with the camera and the measurement assembly and configured to correct the position and attitude parameters of the pictures based on the pictures, the position parameters and the attitude parameters. The processing device is, for example, not arranged on the automated guided vehicle.

The processing device is, for example, configured to perform the mapping method 100 described above.

The present invention also provides a mapping and positioning system for an automated guided vehicle, comprising: a camera arranged to capture images under the automated guided vehicle; a light-emitting device configured to illuminate the area under the automated guided vehicle; an inertial navigation measurement assembly configured to measure the position and attitude parameters of the automated guided vehicle; and a processing device, the camera and the inertial navigation measurement assembly both being coupled to the processing device, which corrects the position and attitude parameters of the pictures based on the images, the position parameters and the attitude parameters.

The processing device is, for example, configured to perform the mapping method 100 described above.

The present invention also provides an apparatus for mapping a site, comprising: means configured to establish or acquire a coordinate system of the site; means configured to scan the site and acquire pictures of calibration points, pictures of a plurality of positions to be located, and the position and attitude parameters corresponding to the pictures; and means configured to correct, based on the pictures, the position parameters and the attitude parameters, the position and attitude parameters of the pictures of the positions to be located.
Based on the map established by method 100, the present invention also provides a positioning method 200, described below with reference to Figure 11.

As shown in Figure 11, in step S201 a map obtained by the method 100 of the invention is loaded or obtained, for example by loading or reading a map file or database.

In step S202, a picture of the position to be located and the position and attitude parameters corresponding to that picture are acquired or obtained. For example, during operation of the AGV, the position and attitude parameters corresponding to the picture are measured while the picture is captured.

In step S203, the picture closest to the picture of the position to be located is retrieved from the map.

According to a preferred embodiment, the positioning method 200 further includes: using the phase correlation method to calculate the confidence, position-parameter offset and attitude-parameter offset between the picture of the position to be located and the closest picture.

According to a preferred embodiment, when the confidence calculated by the phase correlation method is below a preset value, the closest picture is discarded, and the picture closest to the position to be located (excluding the discarded pictures) with a confidence above the preset value is retrieved again. When such a picture is found, the position parameters of the picture to be located are obtained from the position of the retrieved picture plus the offsets from the phase correlation method, and the positioning of the device is then updated, that is, positioning succeeds. After successful positioning, the next retrieval starts from this position.
Fig. 12 is a block diagram of a computer program product 900 arranged in accordance with at least some embodiments of the invention. The signal-bearing medium 902 may be implemented as, or may include, a computer-readable medium 906, a computer-recordable medium 908, a computer communication medium 910, or a combination thereof, storing programming instructions 904 that may configure a processing unit to perform all or some of the processes previously described. These instructions may include, for example, one or more executable instructions for causing one or more processors to: establish or acquire a coordinate system of the site; scan the site and acquire pictures of calibration points, pictures of positions to be located, and the position parameters and posture parameters corresponding to those pictures; and correct the position parameters and posture parameters of the pictures of the positions to be located on the basis of the pictures of the calibration points, the pictures of the positions to be located, the position parameters and the posture parameters.
While the foregoing detailed description has set forth various examples of devices and/or methods through the use of block diagrams, flowcharts and/or examples, such block diagrams, flowcharts and/or examples contain one or more functions and/or operations, and those skilled in the art will understand that each function and/or operation within such block diagrams, flowcharts or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one example, several portions of the subject matter described herein may be implemented via application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the examples disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure. For example, if a user determines that speed and accuracy are paramount, the user may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the user may opt for a mainly software implementation; or, yet again alternatively, the user may opt for some combination of hardware, software and/or firmware.
In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that the illustrative examples of the subject matter described herein apply regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of signal-bearing media include, but are not limited to, the following: recordable-type media such as floppy disks, hard disk drives, compact discs (CDs), digital video discs (DVDs), digital tape, computer memory, etc.; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).
Those skilled in the art will recognize that it is common within the art to describe devices and/or methods in the manner set forth herein and thereafter to use engineering practices to integrate such described devices and/or methods into data processing systems. That is, at least a portion of the devices and/or methods described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those skilled in the art will recognize that a typical data processing system generally includes one or more of: a system unit housing; a video display device; memory such as volatile and non-volatile memory; processors such as microprocessors and digital signal processors; computational entities such as operating systems, drivers, graphical user interfaces and application programs; one or more interaction devices such as a touch pad or touch screen; and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented using any suitable commercially available components, such as those commonly found in data computing/communication and/or network computing/communication systems.
Finally, it should be noted that the above are merely preferred embodiments of the invention and are not intended to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the invention shall fall within the scope of protection of the invention.

Claims (29)

  1. A method of mapping a site, comprising:
    establishing or acquiring a coordinate system of the site;
    scanning the site and acquiring pictures of calibration points, pictures of positions to be located, and position parameters and posture parameters corresponding to the pictures;
    correcting the position parameters and posture parameters of the pictures of the positions to be located on the basis of the pictures of the calibration points, the pictures of the positions to be located, the position parameters and the posture parameters.
  2. The method of claim 1, wherein the position parameters comprise an abscissa and an ordinate, and preferably a vertical coordinate, and the posture parameters comprise a heading angle, and preferably a pitch angle and a roll angle.
  3. The method of any one of claims 1-2, wherein the correcting step comprises:
    constructing a set of connection points, each connection point comprising a picture, the position parameter and posture parameter corresponding to that picture, and a flag indicating whether the picture corresponds to a calibration point;
    correcting, on the basis of the set of connection points, the position parameters and posture parameters of the pictures of the positions to be located.
  4. The method of claim 3, wherein the correcting step comprises:
    from the set of connection points, taking any two connection points whose distance does not exceed a predetermined value as a connection, thereby building a set of connections;
    for the two connection points included in each connection in the set of connections, computing the connection confidence between the two connection points, and filtering out those connections whose connection confidence is above a predetermined threshold, as a mapping connection set;
    correcting, on the basis of the mapping connection set, the position parameters and posture parameters of the pictures of the positions to be located.
  5. The method of claim 4, wherein the correcting step further comprises:
    performing gradient descent on the mapping connection set, wherein, in the initialization step of the gradient descent, the position parameters and posture parameters of the pictures of the connection points that are not calibration points are used as the initial iteration parameters of the gradient descent.
  6. The method of claim 5, wherein the correcting step further comprises: performing the gradient descent until the iteration change rate falls below a predetermined threshold.
  7. The method of any one of claims 1-6, wherein pictures are captured multiple times for some or all of the calibration points, and the position parameter and posture parameter corresponding to each capture are acquired.
  8. The method of any one of claims 1-7, further comprising: storing the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position parameters and posture parameters of the pictures of the calibration points, and the corrected position parameters and posture parameters of the pictures of the positions to be located into a database or a file, thereby building a map, and preferably storing the set of connections and/or the mapping connection set into the database or file.
  9. The method of any one of claims 1-8, wherein the coordinate system is a physical coordinate system.
  10. The method of claim 4, wherein the predetermined value is half the length or width of the picture.
  11. An automated guided vehicle for image acquisition, comprising:
    a base;
    a camera mounted on the base and configured to capture pictures of the area below the base;
    a measurement assembly mounted on the base and configured to measure or calculate position parameters and posture parameters of the automated guided vehicle corresponding to the pictures.
  12. The automated guided vehicle of claim 11, further comprising a light-emitting device mounted on the base and configured to illuminate the area below the base for the camera to capture pictures.
  13. The automated guided vehicle of claim 11 or 12, further comprising a control device mounted on the base, the camera and the measurement assembly both being coupled to the control device, the control device being configured to control the vehicle to travel to calibration points and positions to be located so as to capture pictures of the calibration points and pictures of the positions to be located.
  14. The automated guided vehicle of claim 13, further comprising a processing device coupled to the camera and the measurement assembly, which corrects the position parameters and posture parameters of the pictures of the positions to be located on the basis of the pictures and of the position parameters and posture parameters.
  15. The automated guided vehicle of claim 14, wherein the processing device is configured to correct the position parameters and posture parameters of the pictures of the positions to be located by the following method:
    constructing a set of connection points, each connection point comprising a picture, the position parameter and posture parameter corresponding to that picture, and a flag indicating whether the picture corresponds to a calibration point;
    from the set of connection points, taking any two connection points whose distance does not exceed a predetermined value as a connection, thereby building a set of connections;
    for the two connection points included in each connection in the set of connections, computing the connection confidence between the two connection points, and filtering out those connections whose connection confidence is above a predetermined threshold, as a mapping connection set;
    performing gradient descent on the mapping connection set until the iteration change rate falls below a predetermined threshold, wherein, in the initialization step of the gradient descent, the position parameters and posture parameters of the pictures of the connection points that are not calibration points are used as the initial iteration parameters of the gradient descent.
  16. The automated guided vehicle of any one of claims 11-15, further comprising a light shield mounted on the base for softening the light emitted by the light-emitting device, the light-emitting device preferably being mounted around the light shield.
  17. The automated guided vehicle of any one of claims 11-16, wherein the measurement assembly is an inertial navigation measurement assembly.
  18. The automated guided vehicle of any one of claims 11-17, wherein the position parameters comprise an abscissa and an ordinate, and preferably a vertical coordinate, and the posture parameters comprise a heading angle, and preferably a pitch angle and a roll angle.
  19. The automated guided vehicle of any one of claims 11-18, wherein the measurement assembly comprises a laser SLAM measurement device and/or a visual SLAM measurement device.
  20. The automated guided vehicle of claim 14 or 15, wherein the processing device is configured to store the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position parameters and posture parameters of the pictures of the calibration points, and the corrected position parameters and posture parameters of the pictures of the positions to be located into a database or a file, thereby building a map, and preferably to store the set of connections and/or the mapping connection set into the database or file.
  21. An image acquisition and processing system, comprising:
    the automated guided vehicle of claim 11; and
    a processing device coupled to the camera and the measurement assembly, which corrects the position parameters and posture parameters of the pictures on the basis of the pictures and of the position parameters and posture parameters.
  22. The image acquisition and processing system of claim 21, wherein the processing device is configured to perform the mapping method of any one of claims 1-10.
  23. A mapping and positioning system for an automated guided vehicle, comprising:
    a camera arranged to capture images of the area below the automated guided vehicle;
    a light-emitting device configured to illuminate the area below the automated guided vehicle;
    an inertial navigation measurement assembly configured to measure the position parameters and posture parameters of the automated guided vehicle; and
    a processing device to which the camera and the inertial navigation measurement assembly are both coupled, the processing device being configured to correct the position parameters and posture parameters of the pictures on the basis of the images and of the position parameters and posture parameters.
  24. The mapping and positioning system of claim 23, wherein the processing device is configured to perform the mapping method of any one of claims 1-10.
  25. An apparatus for mapping a site, comprising:
    means configured to establish or acquire a coordinate system of the site;
    means configured to scan the site and acquire pictures of calibration points, pictures of a plurality of positions to be located, and position parameters and posture parameters corresponding to the pictures;
    means configured to correct the position parameters and posture parameters of the pictures of the positions to be located on the basis of the pictures, the position parameters and the posture parameters.
  26. A positioning method, comprising:
    loading or obtaining a map obtained by the method of any one of claims 1-10;
    capturing or obtaining a picture of a position to be located and the position parameter and posture parameter corresponding to that picture;
    retrieving, from the map, the picture closest to the picture of the position to be located.
  27. The positioning method of claim 26, further comprising: computing, by the phase correlation method, the confidence, the position parameter offset and the posture parameter offset between the picture of the position to be located and the closest picture.
  28. The positioning method of claim 26, wherein, when the confidence computed by the phase correlation method is below a preset value, the closest picture is discarded, and the search is repeated for the picture that is closest to the picture of the position to be located and whose confidence is above the preset value.
  29. A computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the mapping method of any one of claims 1-10.
PCT/CN2019/075741 2018-05-31 2019-02-21 Mapping method, image acquisition and processing system, and positioning method WO2019154435A1 (zh)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
JP2019531677A | 2018-05-31 | 2019-02-21 | Mapping method, image acquisition and processing system, and positioning method

Applications Claiming Priority (8)

Application Number | Priority Date | Filing Date | Title
CN201820865300.X | 2018-05-31
CN201820865300 | 2018-05-31
CN201810551792 | 2018-05-31
CN201810551792.X | 2018-05-31
CN201811475564.5 | 2018-12-04
CN201822023605.9U | 2018-12-04 | CN211668521U (zh): Automated guided vehicle for image acquisition, and image acquisition and processing system
CN201822023605.9 | 2018-12-04
CN201811475564.5A | 2018-12-04 | CN110006420B (zh): Mapping method, image acquisition and processing system, and positioning method

Publications (1)

Publication Number
WO2019154435A1 (zh)

Family
ID=67548786

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/CN2019/075741 | Mapping method, image acquisition and processing system, and positioning method | 2018-05-31 | 2019-02-21

Citations (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN104835173A * | 2015-05-21 | 2015-08-12 | 东南大学 | A machine-vision-based positioning method
CN105437251A * | 2016-01-04 | 2016-03-30 | 杭州亚美利嘉科技有限公司 | Method and device for locating the position of a robot
CN106643801A * | 2016-12-27 | 2017-05-10 | 纳恩博(北京)科技有限公司 | Detection method for positioning accuracy and electronic device
CN107607110A * | 2017-07-29 | 2018-01-19 | 刘儿兀 | Positioning method and system based on image and inertial navigation technology
CN107702714A * | 2017-07-31 | 2018-02-16 | 广州维绅科技有限公司 | Positioning method, device and system


Legal Events

Date | Code | Description
| ENP | Entry into the national phase. Ref document number: 2019531677; Country of ref document: JP; Kind code of ref document: A
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 19751744; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase. Ref country code: DE
| 32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/04/2021)
| 122 | EP: PCT application non-entry in European phase. Ref document number: 19751744; Country of ref document: EP; Kind code of ref document: A1