WO2019154435A1 - Mapping method, image acquisition and processing system, and positioning method - Google Patents
Mapping method, image acquisition and processing system, and positioning method
- Publication number
- WO2019154435A1 (PCT/CN2019/075741)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
Definitions
- The invention generally relates to the field of intelligent warehousing, and in particular to a mapping method, an image acquisition and processing system, and a positioning method that can be used for intelligent warehousing.
- The physical coordinate system is measured in common distance units, such as meters, decimeters, and centimeters, and may be described in integer, decimal, and fractional forms, for example 1 meter, 1 decimeter, 1 centimeter, 0.55 meter, 0.2 decimeter, 1.4 centimeters, one-half meter, etc. The coordinate system direction is generally parallel to the building walls, or parallel to the north-south and east-west directions.
- Automated guided vehicles (AGVs) that transport goods in smart warehouses often require precise positioning.
- The accuracy of existing positioning methods usually does not meet work requirements, especially when the position and attitude parameters of the AGV must be determined precisely; this is not conducive to the operator's operation and control.
- The present invention provides a method for mapping a site, including: establishing or acquiring a coordinate system of the site; scanning the site to obtain a picture of a calibration point, a picture of a position to be located, and the position and posture parameters corresponding to each picture; and correcting the position and posture parameters of the picture of the position to be located based on the picture of the calibration point, the picture of the position to be located, and the position and posture parameters.
- The positional parameter comprises an abscissa and an ordinate, and preferably a vertical coordinate; the attitude parameter comprises a heading angle, and preferably a pitch angle and a roll angle.
- The step of correcting comprises: constructing a set of connection points, each connection point comprising a picture, the position and attitude parameters corresponding to that picture, and whether the picture corresponds to a calibration point; and, based on the set of connection points, correcting the position and posture parameters of the picture of the position to be located.
- The step of correcting includes: obtaining, from the set of connection points, pairs of connection points whose distance does not exceed a predetermined value, each pair forming a connection, to establish a connection set; for the two connection points included in each connection of the set, calculating a connection confidence between them, and filtering out those connections whose connection confidence is higher than a predetermined threshold to form a construction connection set; and, based on the construction connection set, correcting the position and posture parameters of the picture of the position to be located.
- The correcting step further comprises: performing a gradient descent method on the construction connection set, wherein, when the initialization step of the gradient descent method is performed, the position and attitude parameters of the pictures of the uncalibrated connection points are used as initial iteration parameters of the gradient descent method.
- The correcting step further comprises performing the gradient descent method until the iteration change rate is below a predetermined threshold.
- The method further comprises: storing the coordinate system, the picture of the calibration point, the picture of the position to be located, the position and posture parameters of the picture of the calibration point, and the corrected position and posture parameters of the picture of the position to be located in a database or a file to establish a map.
- the coordinate system is a physical coordinate system.
- the predetermined value is half of the length or width of the picture.
- The present invention also provides an automated guided vehicle for image acquisition, comprising: a base; a camera mounted on the base and configured to collect a picture of the area under the base; and a measuring assembly mounted on the base and configured to measure or calculate the position and attitude parameters of the automated guided vehicle corresponding to the picture.
- the automated guided vehicle further includes a lighting device mounted on the base and configured to illuminate an area under the base for the camera to capture a picture.
- The automated guided vehicle further includes a control device mounted on the base, the camera and the measuring assembly being coupled to the control device, the control device being configured to control the vehicle to travel to the calibration point and the position to be located, to capture a picture of the calibration point and a picture of the position to be located.
- The automated guided vehicle further includes a processing device coupled to the camera and the measuring assembly, which corrects the position and posture parameters of the picture of the position to be located based on the pictures and the position and attitude parameters.
- The processing device corrects the position and posture parameters of the picture of the position to be located by: constructing a set of connection points, each connection point including a picture, the position and posture parameters corresponding to that picture, and whether the picture corresponds to a calibration point; obtaining, from the set of connection points, pairs of connection points whose distance does not exceed a predetermined value, each pair forming a connection, to establish a connection set; for the two connection points included in each connection of the set, calculating a connection confidence between them, and filtering out those connections whose connection confidence is above a predetermined threshold to form a construction connection set; and performing a gradient descent method on the construction connection set until the iteration change rate is lower than a predetermined threshold, wherein, when the initialization step of the gradient descent method is performed, the position and attitude parameters of the pictures of the uncalibrated connection points are used as initial iteration parameters of the gradient descent method.
- The automated guided vehicle further includes a hood mounted on the base for softening the light emitted by the illuminating device; the illuminating device is preferably mounted around the hood.
- the measuring component is an inertial navigation measuring component.
- The positional parameter comprises an abscissa and an ordinate, and preferably a vertical coordinate; the attitude parameter comprises a heading angle, and preferably a pitch angle and a roll angle.
- the measuring component comprises a laser SLAM measuring device and/or a visual SLAM measuring device.
- The processing device is configured to store the coordinate system, the picture of the calibration point, the picture of the position to be located, the position and posture parameters of the picture of the calibration point, and the corrected position and posture parameters of the picture of the position to be located in a database or a file to establish a map library.
- The present invention also provides an image acquisition and processing system comprising: an automated guided vehicle as described above; and a processing device coupled to the camera and the measuring assembly, which corrects the position and attitude parameters of the pictures based on the pictures and the position and attitude parameters.
- the processing device is configured to perform the mapping method described above.
- The present invention also provides a mapping and positioning system for an automated guided vehicle, comprising: a camera configured to capture an image under the automated guided vehicle; a lighting device configured to illuminate the area below the automated guided vehicle; an inertial navigation measuring assembly configured to measure the position and posture parameters of the automated guided vehicle; and a processing device, to which the camera and the inertial navigation measuring assembly are coupled, configured to correct the position and posture parameters of the picture based on the image, the position parameters, and the posture parameters.
- the processing device is configured to perform the mapping method according to any one of claims 1-10.
- The present invention also provides an apparatus for mapping a site, comprising: means configured to establish or acquire a coordinate system of the site; means configured to scan the site and obtain a picture of a calibration point, pictures of a plurality of positions to be located, and the position and posture parameters corresponding to the pictures; and means for correcting the position and posture parameters of the pictures of the positions to be located based on the pictures and the position and attitude parameters.
- The present invention also provides a positioning method, comprising: loading or obtaining a map obtained by the method described in any one of the above; collecting or obtaining a picture of a position to be located and the position and posture parameters corresponding to the picture; and retrieving from the map the picture closest to the picture of the position to be located.
- the positioning method further comprises: calculating a confidence, a position parameter offset, and a pose parameter offset between the picture of the position to be located and the picture of the closest distance using a phase correlation method.
- If the confidence is lower than a preset value, the closest picture is discarded, and the map is re-retrieved for a picture that is close to the picture of the position to be located and whose confidence is higher than the preset value.
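The retrieval-with-fallback logic above can be sketched as follows. This is a minimal illustration assuming a hypothetical map-entry layout and a stubbed confidence function (in practice the confidence would come from the phase correlation between the query picture and the candidate picture); none of the names below are from the patent.

```python
import math

# Hypothetical map entry: (picture_id, x, y, heading). Names are illustrative.
def retrieve_closest(map_entries, x, y, confidence_fn, min_conf):
    """Return the closest map picture whose match confidence passes min_conf.

    Entries are tried in order of increasing distance to the measured
    position (x, y); low-confidence matches are discarded, as described
    in the positioning method above.
    """
    ranked = sorted(map_entries, key=lambda e: math.hypot(e[1] - x, e[2] - y))
    for entry in ranked:
        if confidence_fn(entry) >= min_conf:
            return entry
    return None

# Toy usage: "p2" is nearest to the measured position but fails the
# confidence check, so the next closest entry "p1" is returned.
entries = [("p1", 0.0, 0.0, 0), ("p2", 1.0, 0.0, 0), ("p3", 5.0, 5.0, 0)]
best = retrieve_closest(entries, 0.9, 0.1,
                        lambda e: 0.0 if e[0] == "p2" else 1.0, 0.5)
print(best[0])  # p1
```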
- FIG. 1 is a flow chart of a mapping method in accordance with one embodiment of the present invention.
- FIG. 2 is a schematic diagram of physical coordinates in accordance with one embodiment of the present invention.
- FIG. 3 is a schematic diagram of logical coordinates in accordance with one embodiment of the present invention.
- FIG. 4 is a schematic view of a connection point in accordance with one embodiment of the present invention.
- FIG. 5 is a schematic illustration of a calibration point in accordance with one embodiment of the present invention.
- FIG. 6 is a flowchart of a method for correcting the position and posture parameters of a picture of a position to be located according to an embodiment of the present invention.
- FIG. 8 is a schematic illustration of a connection in accordance with one embodiment of the present invention.
- FIG. 9 shows a screenshot of the map after the physical coordinate system and the logical coordinate system have been mapped.
- FIG. 10 is a schematic illustration of an automated guided vehicle for image acquisition in accordance with one embodiment of the present invention.
- FIG. 11 is a flow chart of a positioning method in accordance with one embodiment of the present invention.
- FIG. 12 is a block diagram of a computer program product in accordance with one embodiment of the present invention.
- first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
- features defining “first” or “second” may include one or more of the described features, either explicitly or implicitly.
- The meaning of “plurality” is two or more unless otherwise specifically defined.
- Connection or integral connection: it can be a mechanical connection, an electrical connection, or mutual communication; it can be a direct connection or an indirect connection through an intermediate medium, and can be the internal connection of two elements or the interaction relationship between two elements.
- Unless otherwise explicitly defined, the first feature being "on" or "under" the second feature may include the first and second features being in direct contact, or being in contact through additional features between them rather than directly.
- The first feature being "above", "over", or "on top of" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature.
- The first feature being "below", "under", or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
- A mapping method 100 in accordance with a first embodiment of the present invention will now be described; it can be used, for example, for mapping a venue.
- a coordinate system of the venue is established or acquired.
- the coordinate system may be a physical coordinate system or a logical coordinate system, which is within the scope of the present invention.
- the definition of the coordinate system usually includes the position of the origin, the direction of the XY coordinate axis, and so on.
- the site to be located can be measured, and the physical coordinate system is established.
- The physical coordinate system is measured in common distance units, such as meters, decimeters, and centimeters, and may be described in integer, decimal, and fractional forms, such as 1 meter, 1 decimeter, 1 centimeter, 0.55 meter, 0.2 decimeter, 1.4 centimeters, one-half meter, etc.
- The coordinate system direction is generally parallel to the building walls, or parallel to the north-south and east-west directions; a coordinate system established in accordance with the above principles is called the physical coordinate system in this system, as shown in Figure 2.
- the coordinate system set according to the actual situation of the business is called the logical coordinate system in this system.
- The logical coordinate system may differ from the physical coordinate system in that, for example, the logical coordinate system is generally described by integers, such as (1, 2) or (5, 10); the direction of the coordinate system does not necessarily coincide with the physical coordinate system; and the distance unit of the logical coordinate system is not necessarily a common physical unit but is defined by actual operational needs.
- For example, for points A, B, and C in Figure 3, the logical coordinates of point B are (3, 7), the logical coordinates of point A are (3, 8), the logical coordinates of point C are (4, 7), and the point at the lower left corner is the origin.
- the logical position and the physical position may be completely identical, or there may be a certain conversion relationship between the two.
- The reason for having a logical position is to facilitate the planning of business logic or the calculation of the map. For example, when placing shelves, the position of a shelf is saved as a logical coordinate, such as (3, 7); if the physical position were used, a description like (4.05, 9.45) would appear, which is not conducive to the operator's understanding and operation. If the physical position is required, it can be obtained through the conversion relationship; generally, the conversion is a multiplication by a coefficient.
- The spacing between logical positions is called the logical position spacing, and it can differ in the X direction and the Y direction.
- For example, if a shelf in the warehouse is 1.3 meters * 1.3 meters and the shelf spacing is 0.05 meters, the logical position spacing can be defined as 1.35 meters. If the shelf is 1.2 meters * 1.0 meters, the logical position spacing can be defined as 1.25 meters in the X-axis direction and 1.05 meters in the Y-axis direction, so that a device that needs physical positioning can find the shelf at the corresponding physical position.
- The above conversions are only conventional conversion methods; there are more complicated ones, such as coordinate system rotation conversion and non-linear conversion, which for reasons of space are not detailed here.
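The simple coefficient conversion described above can be sketched as follows. The spacings reuse the hypothetical shelf example (1.25 m in X, 1.05 m in Y), and the function names are illustrative, not from the patent.

```python
# Hypothetical linear conversion between logical and physical coordinates,
# assuming independent X/Y logical position spacings as in the shelf example.
SPACING_X = 1.25  # meters per logical unit along X
SPACING_Y = 1.05  # meters per logical unit along Y

def logical_to_physical(lx, ly):
    # Multiply by the spacing coefficient, as described above.
    return (lx * SPACING_X, ly * SPACING_Y)

def physical_to_logical(px, py):
    # Round to the nearest grid cell; real deployments may instead need the
    # rotation or non-linear conversions mentioned above.
    return (round(px / SPACING_X), round(py / SPACING_Y))

print(logical_to_physical(3, 7))      # approximately (3.75, 7.35)
print(physical_to_logical(3.8, 7.3))  # (3, 7)
```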
- the above description of the logical coordinate system is merely exemplary and not limiting.
- the logical coordinate system refers to the coordinate system set according to the actual situation of the business.
- the positional parameters in the logical coordinate system are not limited to integers, and may also have decimals. These are all within the scope of the invention. If the physical coordinate system or logical coordinate system of the site has been established in advance, it can be obtained from the corresponding file or database.
- the physical coordinate system is taken as an example for explanation below.
- In step S102, the site is scanned to acquire a picture of the calibration point (for the definition of the calibration point, see below), a picture of the position to be located (preferably pictures of a plurality of positions to be located), and the position and attitude parameters corresponding to the calibration point picture and the to-be-located position picture.
- an automatic guided cart equipped with the apparatus of the present invention can be used to scan the site to obtain a picture of the location to be located, a calibration point picture, and position and attitude parameters corresponding to the above two pictures.
- the position to be positioned here can be determined according to the actual working conditions, for example, the position that the automatic guided vehicle needs to reach.
- The position parameter is, for example, the abscissa and ordinate of a picture at a certain calibration point or position to be located in the physical coordinate system (i.e., the horizontal position, such as the coordinates of the picture center or of a certain corner of the picture), or the horizontal and longitudinal distances relative to a certain base point.
- The posture parameter is, for example, the angle of the acquired picture, such as the angle relative to the horizontal or vertical axis (i.e., the heading angle).
- Parameters such as the pitch angle, roll angle, and vertical height corresponding to the picture (that is, the pitch angle, roll angle, vertical height, and the like of the automated guided vehicle when the photo was taken) may also be acquired.
- the above-described data can be provided by using an inertial navigation measuring device mounted on the automatic guided cart of the present invention.
- The inertial navigation measuring device includes, for example, a wheel encoder, an accelerometer (1 to 3 axes), a gyroscope (1 to 3 axes), a magnetic flux sensor (1 to 3 axes), and a barometric pressure sensor, i.e., measuring equipment that feeds back the heading angle, pitch angle, roll angle, horizontal position, and vertical position. Using the data obtained from the wheel encoder, accelerometer, gyroscope, magnetic flux sensor, and air pressure sensor, the heading angle (i.e., the angle of the picture relative to the horizontal or vertical axis), pitch angle, roll angle, horizontal position, and vertical position can be obtained by calculation.
- The above data are superimposed onto the picture to form a seven-tuple data combination (picture, heading angle (i.e., picture angle), pitch angle, roll angle, horizontal position (i.e., x-axis abscissa and y-axis ordinate), vertical position, whether it is a calibration point), as shown in Figure 4. This combination is referred to in this system as a connection point and serves as the input for subsequent mapping.
- The connection point does not need to have all the data; for example, a combination of four data items (picture, heading angle, horizontal position, whether it is a calibration point) can achieve the object of the present invention.
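The seven-tuple and reduced four-tuple forms of a connection point might be modeled as below; the field names are illustrative assumptions, since the patent specifies only the data combination.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the seven-tuple connection point described above. The horizontal
# position is split into x and y here; optional fields are left unset in the
# reduced four-tuple form (picture, heading, horizontal position,
# whether it is a calibration point).
@dataclass
class ConnectionPoint:
    picture: bytes            # raw image data
    heading: float            # heading angle (picture angle), degrees
    pitch: Optional[float]    # omitted in the four-tuple form
    roll: Optional[float]     # omitted in the four-tuple form
    x: float                  # horizontal position, abscissa
    y: float                  # horizontal position, ordinate
    z: Optional[float]        # vertical position, omitted in the four-tuple form
    is_calibration: bool      # whether this point is a calibration point

# A reduced four-tuple style connection point at logical position (3, 7).
p = ConnectionPoint(b"...", 12.5, None, None, 3.0, 7.0, None, True)
print(p.is_calibration)  # True
```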
- As for the calibration point, it represents a point whose coordinates have been precisely determined. Points A, B, and C shown in Figure 3 are such points: their coordinates have been confirmed, are artificially defined, and are known a priori.
- An example of a calibration point is shown in Figure 5, where calibration point A is shown; its logical coordinates are (5, 8) and its physical coordinates are (3.75, 4.10).
- the calibration point is not limited to having both logical coordinates and physical coordinates.
- A variety of means can be employed to identify and validate calibration points. For example, a cross-line marker with the position marked on it can be used, so that the calibration point and its position coordinates can be recognized after the image is collected. Alternatively, encoded information such as a barcode or a two-dimensional code can be used, which a program can decode after image acquisition; the decoded content is the position coordinate of the calibration point.
- The position parameter of the calibration point picture is the position parameter of the calibration point itself, rather than a value measured by the inertial navigation measuring device.
- In step S103, the position and posture parameters of the picture of the position to be located are corrected based on the picture of the calibration point, the picture of the position to be located, and the position and posture parameters.
- Because the position and attitude parameters are obtained by measurement, for example by the inertial navigation measuring device, there are measurement errors under field working conditions, and further correction is needed to improve accuracy.
- the picture of the calibration point can be used as a good benchmark to correct the position and posture parameters of the picture to be located.
- One embodiment of step S103 is described below.
- Each connection point includes a seven-tuple data combination (picture, heading angle (i.e., picture angle), pitch angle, roll angle, horizontal position, vertical position, whether it is a calibration point), or a quadruple data combination (picture, heading angle, horizontal position, whether it is a calibration point).
- These connection points are used to construct a set of connection points.
- As for the parameter "whether it is a calibration point": if a calibration point appears in the picture and the a priori position parameter of the calibration point is obtained normally, the parameter is "is a calibration point"; otherwise it is "uncalibrated point". It can also be represented by a logical 0 or 1.
- a connection set is established and output based on the set of connection points.
- The pairing principle is, for example, that the distance between the positions of the two pictures does not exceed a predetermined value, for example 50%, 30%, or 20% of the picture length or width.
- For example, if the horizontal position of connection point A is (0, 0) and the horizontal position of connection point B is (5, 0), the distance from A to B is 5. If the picture size is 10*10, then A and B satisfy the standard of not exceeding 50% of the picture size and may constitute a pair.
- Such a combination is called a connection, and each connection includes two connection points; the set of all connections that can be formed is called the connection set in this system.
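The pairing rule above (two connection points form a connection when their distance does not exceed a fraction of the picture size) can be sketched as:

```python
import math
from itertools import combinations

# Sketch of the pairing step. Points are (x, y, is_calibration) tuples here;
# the representation is illustrative, not from the patent.
def build_connection_set(points, picture_size, fraction=0.5):
    """Pair every two points whose distance is within fraction * picture_size."""
    limit = picture_size * fraction
    connections = []
    for a, b in combinations(points, 2):
        if math.hypot(a[0] - b[0], a[1] - b[1]) <= limit:
            connections.append((a, b))
    return connections

# Matches the example above: A at (0, 0), B at (5, 0), picture size 10*10;
# the distance 5 does not exceed 50% of 10, so A and B form a connection,
# while the far-away third point pairs with nothing.
pts = [(0, 0, False), (5, 0, False), (20, 0, True)]
conns = build_connection_set(pts, picture_size=10)
print(len(conns))  # 1
```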
- Connection point A is referred to as the reference point, and connection point B is called the adjacent point.
- Taking the reference point as the origin and the adjacent point as the offset, the reference point picture and the adjacent point picture are used as input to, for example, the phase correlation method, which yields the connection confidence (conf, characterizing the similarity between the two pictures), the x-direction relative displacement (delta_x), the y-direction relative displacement (delta_y), and the relative rotation angle (theta).
- The connection confidence is an output of the phase correlation method. It is calculated from the sharpness of the phase-correlation peak, or from the distribution near the peak: the cross-correlation result is obtained by computing the correlation between the two pictures according to the phase correlation method above, which involves the cross-power spectrum, and the cross-correlation function yields the cross-correlation level under different displacements. Assuming the cross-correlation level obeys a normal distribution, the relevant parameters of the distribution can be estimated by statistical methods, and the connection confidence can be calculated from the ratio of the maximum cross-correlation value to these parameters.
- The construction connection set does not contain connections in which both points are calibration points.
- In the figure, the gray area is picture A and the green area is picture B; the figure illustrates the overlapping area of the two pictures, and the coincidence is calculated by phase correlation.
- The cross-correlation results calculated for pictures A and B in Figure 7 are: confidence 131.542, relative displacement 33.4 in the x direction, relative displacement 10.7 in the y direction, and a rotation of 0.3 degrees.
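A minimal translation-only phase-correlation sketch is shown below (the patent's method also recovers a relative rotation angle, which is omitted here). The confidence measure used, peak value over mean magnitude, is a rough stand-in for the peak-sharpness/normal-distribution calculation described above.

```python
import numpy as np

# Phase correlation for pure translation, assuming grayscale images of equal
# size. The peak of the inverse FFT of the normalized cross-power spectrum
# gives the relative displacement.
def phase_correlate(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret displacements above half the size as negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    conf = corr.max() / (np.abs(corr).mean() + 1e-12)  # crude peak sharpness
    return conf, dx, dy

# Synthetic check: shift an image by (dy=3, dx=5) and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
conf, dx, dy = phase_correlate(shifted, img)
print(dx, dy)  # 5 3
```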
- Figure 8 shows a schematic of a connection, including the reference point and the adjacent point.
- In step S1034, a gradient descent method is performed on the construction connection set, and the position and posture parameters of the pictures of the positions to be located are corrected.
- The horizontal and vertical coordinates and the angle of a calibration point picture remain unchanged; the gradient adjustment applies only to the parameters of non-calibration point pictures, so a calibration picture can be regarded as a constant.
- The construction connection set can be defined as excluding connections in which both points are calibration points, because adjusting such a connection is meaningless: calibration points should not be adjusted, and no gradient is solved for them.
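A toy one-dimensional version of this correction step, with calibration points held constant and excluded from the gradient, might look like the following; it is a sketch under simplifying assumptions (x only, displacements given directly instead of coming from phase correlation), not the patent's full objective.

```python
# Gradient descent on non-calibration point positions so that inter-point
# offsets match the measured cross-correlation displacements.
def correct_positions(points, connections, lr=0.1, steps=500, tol=1e-9):
    """points: dict name -> [x, is_calibration]; connections: (a, b, delta_x)."""
    prev = None
    for _ in range(steps):
        grads = {k: 0.0 for k in points}
        loss = 0.0
        for a, b, delta in connections:
            r = (points[b][0] - points[a][0]) - delta   # residual
            loss += r * r
            if not points[a][1]:            # calibration points stay fixed:
                grads[a] -= 2 * r           # no gradient is accumulated
            if not points[b][1]:
                grads[b] += 2 * r
        for k, g in grads.items():
            points[k][0] -= lr * g
        if prev is not None and abs(prev - loss) < tol:  # iteration change rate
            break
        prev = loss
    return points

# A is a calibration point at x=0; B and C start with measurement error and
# should settle at 5 and 10 given the desired offsets of 5 between neighbors.
pts = {"A": [0.0, True], "B": [4.0, False], "C": [11.0, False]}
conns = [("A", "B", 5.0), ("B", "C", 5.0)]
out = correct_positions(pts, conns)
print(round(out["B"][0], 2), round(out["C"][0], 2))  # 5.0 10.0
```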
- The optimization function is, for example, as shown in Equation 1:
- N indicates that the construction connection set contains a total of N connections
- i represents the i-th connection in the construction connection set
- A_i represents the reference point of the i-th connection
- B_i represents the adjacent point of the i-th connection
- R_i represents the cross-correlation result of the i-th connection
- g_θ(A_i, B_i) can be understood as the angular difference between the reference point and the adjacent point under the inertial navigation measurement component
- g_θ(A_i, B_i) − u_θ(R_i) can be understood as the difference between the angular difference under the inertial navigation measurement component and the relative rotation angle in the cross-correlation result (the rotation angle in the cross-correlation result is the theta calculated by the phase correlation method; this value represents the angle by which the adjacent point picture needs to rotate to align with the reference point picture)
- As for the weight of the cross-correlation result angle term: for a connection between two uncalibrated points, the degree of adjustment on the two sides should be equal; between a calibration point and a non-calibration point, however, the degrees of change are not equal (the non-calibration point changes significantly more than the calibration point), so this is controlled through the weight, which can be assigned according to the actual situation.
- The weight can be taken as 1 and adjusted at the same level; for a connection between a calibration point and a non-calibration point, the weight can also be 1, because the calibration point is constant and does not participate in the gradient calculation, so its gradient can be considered constant. If the calibration point is to be fine-tuned, the weight ratio of the connection between the calibration point and the non-calibration point can be set as high as 99 to 1.
- g_x(A_i, B_i) can be understood as the x-direction coordinate difference between the reference point and the adjacent point under the inertial navigation measurement component
- g_x(A_i, B_i) - u_x(R_i) can be understood as the difference between the x-direction coordinate difference under the inertial navigation measurement component and the x-direction relative displacement in the cross-correlation result (the x-direction relative displacement in the cross-correlation result is the delta_x calculated by the phase correlation method; this value characterizes how far the adjacent-point picture needs to translate in the x direction to align with the reference-point picture),
- f_x is the x-axis weight function, used to express that different connection-point properties (e.g. calibration point versus non-calibration point) carry different weights in the map iteration during the x-coordinate fitting (as an example, the weight of a calibration point is usually relatively large, for example 1000, and the weight of a non-calibration point is relatively small, for example 1);
- v_x is the adjustment weight of the relative displacement on the x-axis in the cross-correlation result, and can take, for example, a value of 1.
- g_y(A_i, B_i) can be understood as the y-direction coordinate difference between the reference point and the adjacent point under the inertial navigation measurement component
- g_y(A_i, B_i) - u_y(R_i) can be understood as the difference between the y-direction coordinate difference under the inertial navigation measurement component and the y-direction relative displacement in the cross-correlation result (the y-direction relative displacement in the cross-correlation result is the delta_y calculated by the phase correlation method).
- f_y is the y-axis weight function, used to express that different connection-point properties (e.g. calibration point versus non-calibration point) carry different weights in the map iteration during the y-coordinate fitting (as an example, the weight of a calibration point is usually relatively large, for example 1000, and the weight of a non-calibration point is relatively small, for example 1);
- v_y is the adjustment weight of the relative displacement on the y-axis in the cross-correlation result, and can take, for example, a value of 1.
- λ_1, λ_2 and λ_3 represent the weights of the changes of theta, x and y in the final fitting result; in scenes that are sensitive to changes of theta, λ_1 can be increased accordingly.
- preferably, λ_1, λ_2 and λ_3 are all 1.
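Equation 1 itself is not reproduced in this text. From the variable definitions above, a plausible reconstruction is the following weighted least-squares cost; the exact grouping of the weight factors f, v and λ in the original equation is an assumption:

```latex
\min \; E \;=\; \sum_{i=1}^{N} \Big[
  \lambda_1\, f_\theta\, v_\theta \big(g_\theta(A_i,B_i) - u_\theta(R_i)\big)^2
+ \lambda_2\, f_x\, v_x \big(g_x(A_i,B_i) - u_x(R_i)\big)^2
+ \lambda_3\, f_y\, v_y \big(g_y(A_i,B_i) - u_y(R_i)\big)^2 \Big]
```

where the minimization runs only over the poses of the non-calibration-point pictures, the calibration-point poses being held constant.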
- by taking the partial derivative of Equation 1 with respect to each of its independent variables (the position and posture parameters of the non-calibration-point pictures), the gradient direction of each independent variable, i.e. a set of gradients for gradient descent, is obtained.
- when the initialization step of the gradient descent method is performed, the position parameters and posture parameters annotated by inertial navigation are taken as the initial positions of the pictures.
- the inputs of the gradient descent method are the parameter set of the previous iteration, the gradient, and the step size, where the gradient is obtained by differentiating Equation 1.
- the initial iteration set is assigned, for example, from the position parameters and posture parameters annotated by inertial navigation.
- the step size may be fixed or variable.
- in each iteration, the parameters are moved along the gradient direction by the step length, so as to optimize Equation 1.
- the step-size schedule can be customized as needed; the system preferably uses a fixed step size for gradient descent.
- the iteration is repeated until the iterative rate of change falls below a set threshold; the system sets, for example, a threshold of 0.1%.
- the rate of change is, for example, the relative difference between the value calculated in the previous iteration and the value calculated in this iteration.
- the physical coordinates and posture parameters of the base point (for example, the center point) of each picture are thereby obtained as the corrected position parameters and posture parameters of the positions to be located.
- the execution of the gradient descent method described above uses the x-axis coordinate, the y-axis coordinate and the heading angle of each picture; the vertical coordinate, pitch angle and roll angle corresponding to the picture may also be included, especially in the case of uneven terrain. These are all within the scope of the invention.
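As an illustration only, the fixed-step gradient descent described above can be sketched as follows. The cost here is a simplified, unweighted version of Equation 1 (all f, v and λ weights taken as 1), and all names are hypothetical, not part of the original disclosure:

```python
import numpy as np

def descend(params, calib_mask, connections, step=0.01, tol=1e-3, max_iters=10000):
    """Fixed-step gradient descent over picture poses (x, y, theta).

    params      : (n, 3) array of initial poses (the inertial-navigation values)
    calib_mask  : boolean array, True where a picture is a calibration point
                  (held constant; its gradient is forced to zero)
    connections : list of (i, j, offset), where offset is the relative
                  (dx, dy, dtheta) reported by the cross-correlation step
    Iterates until the relative change of the cost falls below tol
    (0.1% in the text) or max_iters is reached.
    """
    params = np.asarray(params, dtype=float).copy()
    prev_cost = None
    for _ in range(max_iters):
        grad = np.zeros_like(params)
        cost = 0.0
        for i, j, offset in connections:
            # residual: inertial-navigation difference minus measured offset
            r = (params[j] - params[i]) - np.asarray(offset, dtype=float)
            cost += float(r @ r)
            grad[j] += 2.0 * r
            grad[i] -= 2.0 * r
        grad[calib_mask] = 0.0            # calibration points stay fixed
        params -= step * grad             # fixed step along the gradient
        if prev_cost is not None and abs(prev_cost - cost) <= tol * max(prev_cost, 1e-12):
            break
        prev_cost = cost
    return params
```

With one calibration point pinned at the origin and one connection, the non-calibration pose converges to the measured relative offset, as the text describes.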
- for some or all of the calibration points, multiple picture acquisitions are performed, and the position parameter and posture parameter corresponding to each picture acquisition are obtained.
- the method further includes: storing the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position parameters and posture parameters of the pictures of the calibration points, and the corrected position parameters and posture parameters of the pictures of the positions to be located in a database or in a file, thereby establishing a map.
- the set of connections and/or the mapping connection set may be stored in the database or file at the same time, as part of the map.
- Figure 9 shows an illustration of a map established in accordance with the present invention.
- a stable mapping between the physical coordinate system and the logical coordinate system is completed for subsequent positioning.
- the automated guided vehicle 10 includes: a base 6; a light-emitting device 5-2 mounted on the base and configured to illuminate the area under the base; a camera 5-3 mounted on the base and configured to capture pictures of the area under the base, for example of the area illuminated by the light-emitting device; and a measurement assembly 3 mounted on the base and configured to measure or calculate the position parameters and posture parameters of the automated guided vehicle corresponding to the pictures.
- the driving wheel 1 is mounted on the base 6 and includes a motor, a speed reducer and an encoder, where the motor provides the driving force, the speed reducer amplifies the driving force, and the encoder obtains the rotation angle of the motor, from which the position of the automated guided vehicle or of the driving wheel is obtained.
- the driving wheel 2 cooperates with the driving wheel 1 to complete the motion control.
- the measurement assembly 3 is, for example, an inertial navigation measurement device, which can provide one or more of instantaneous speed, instantaneous angle and instantaneous position, e.g. abscissa, ordinate, vertical coordinate, heading angle, pitch angle and roll angle.
- the encoder of the driving wheel may also be part of the measuring component 3.
- the control device 4 is mounted on the base 6 and coupled to the measurement assembly 3 and the camera 5-3.
- the control device 4 is configured to control the vehicle to travel to marker points and to positions to be located, in order to capture pictures of the marker points and pictures of the positions to be located, and to synchronize the camera 5-3 and the measurement assembly 3.
- while the camera captures a picture, the measurement assembly 3 measures the position parameter and posture parameter of the vehicle, i.e. the position parameter and posture parameter corresponding to that picture are obtained.
- the camera 5-3 is, for example, a downward-looking camera; together with the light-emitting device 5-2 and the hood 5-1 it forms an image capturing device 5, where the camera 5-3 captures images of the area under the vehicle and the light-emitting device 5-2 is mounted on the base to illuminate the shooting area of the downward-looking camera.
- a hood 5-1 is mounted on the base to soften the light of the light-emitting device and to prevent reflections.
- the light-emitting device is preferably mounted around the hood.
- the automated guided vehicle 10 further comprises a processing device (not shown) coupled to the camera 5-3 and the measurement assembly 3, which receives the pictures captured by the camera and the position parameters and posture parameters measured by the measurement assembly, and corrects the position parameters and posture parameters of the pictures of the positions to be located based on the pictures, the position parameters and the posture parameters.
- the processing device may be integrated into the automated guided vehicle 10, or be physically separate from the automated guided vehicle and communicate with the other components by wire or wirelessly. These are all within the scope of the invention.
- the processing device corrects the position parameter and the posture parameter of the picture of the position to be located by the following method:
- constructing a set of connection points, each connection point including a picture, the position parameter and posture parameter corresponding to that picture, and whether the picture corresponds to a calibration point;
- from the set of connection points, taking two connection points whose distance does not exceed a predetermined value as a connection, and establishing a set of connections;
- for the two connection points included in each connection in the set of connections, calculating the connection confidence between the two connection points, and filtering out the connections whose connection confidence is above a predetermined threshold as a mapping connection set;
- performing a gradient descent method on the mapping connection set until the iterative rate of change falls below a predetermined threshold, where, when the initialization step of the gradient descent method is performed, the position parameters and posture parameters of the pictures of non-calibration-point connection points are used as the initial iteration parameters of the gradient descent method.
- the specific calculation process is as shown in Equations 1-7.
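A minimal sketch of the connection-building step, under stated assumptions: all names are hypothetical, the distance test uses plain Euclidean distance between picture base points, and connections between two calibration points are excluded as described earlier (in practice the predetermined value could be, e.g., half the picture length or width):

```python
import math

def build_connection_set(points, max_dist):
    """Pair connection points whose distance does not exceed max_dist.

    Each point is a dict with 'x', 'y' and an 'is_calib' flag.
    Connections between two calibration points are skipped, since
    both endpoints are fixed and there is nothing to adjust.
    """
    connections = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            a, b = points[i], points[j]
            if a["is_calib"] and b["is_calib"]:
                continue  # both endpoints fixed: no gradient to solve
            if math.hypot(a["x"] - b["x"], a["y"] - b["y"]) <= max_dist:
                connections.append((i, j))
    return connections
```

The confidence filtering described above would then be applied to these candidate pairs to obtain the mapping connection set.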
- the measurement assembly is an inertial navigation measurement assembly; the position parameters comprise an abscissa and an ordinate, preferably also a vertical coordinate, and the posture parameters comprise a heading angle, preferably also a pitch angle and a roll angle.
- the measuring component comprises a laser SLAM measuring device and/or a visual SLAM measuring device.
- the processing device is configured to store the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position parameters and posture parameters of the pictures of the calibration points, and the corrected position parameters and posture parameters of the pictures of the positions to be located in a database or in a file, to establish a map.
- the present invention also provides an image acquisition and processing system comprising: an automated guided vehicle as described above; and a processing device in communication with the camera and the measurement assembly and configured to correct the position parameters and posture parameters of the pictures based on the pictures, the position parameters and the posture parameters.
- the processing device is, for example, not provided on the automated guided vehicle itself.
- the processing device is configured, for example, to perform the mapping method 100 described above.
- the present invention also provides a mapping and positioning system for an automated guided vehicle, comprising: a camera arranged to capture images under the automated guided vehicle; a lighting device configured to illuminate the area below the automated guided vehicle; an inertial navigation measurement assembly configured to measure the position parameters and posture parameters of the automated guided vehicle; and a processing device, to which the camera and the inertial navigation measurement assembly are coupled, and which corrects the position parameters and posture parameters of the pictures based on the images, the position parameters and the posture parameters.
- processing device is configured, for example, to perform the mapping method 100 as described above.
- the present invention also provides an apparatus for mapping a site, comprising: means configured to establish or acquire a coordinate system of the site; means configured to scan the site and obtain pictures of calibration points, pictures of a plurality of positions to be located, and the position parameters and posture parameters corresponding to the pictures; and means configured to correct the position parameters and posture parameters of the pictures of the positions to be located based on the pictures, the position parameters and the posture parameters.
- based on the map established by method 100, the present invention also provides a positioning method 200.
- a positioning method 200 in accordance with the present invention is described below with reference to FIG.
- step S201: a map obtained by the method 100 of the present invention is loaded or obtained, for example by loading or reading a map file or a database.
- step S202 a picture of the location to be located and a position parameter and a posture parameter corresponding to the picture are acquired or obtained. For example, during the operation of the AGV, the position parameter and the attitude parameter corresponding to the picture are measured while the picture is being acquired.
- step S203: in the map, the picture closest to the picture of the position to be located is retrieved.
- the positioning method 200 further includes: using a phase correlation method to calculate the confidence, the position parameter offset and the posture parameter offset between the picture of the position to be located and the closest picture.
- when the confidence calculated by the phase correlation method is below a preset value, the closest picture is discarded, and the picture closest to the picture of the position to be located (excluding the discarded pictures) whose confidence is above the preset value is retrieved anew.
- the position of the picture to be located can then be obtained from the position of the retrieved picture and the offsets given by the phase correlation method, and the positioning position of the device is updated, i.e. the positioning succeeds. After a successful positioning, the next retrieval starts from this positioning position.
- Figure 12 is a block diagram of a computer program product 900 arranged in accordance with at least some embodiments of the present invention.
- the signal bearing medium 902 can be implemented as, or include, a computer readable medium 906, a computer recordable medium 908, a computer communication medium 910, or a combination thereof, which stores programming instructions 904 that can configure a processing unit to perform all or some of the previously described processes.
- the instructions may include, for example, one or more executable instructions for causing one or more processors to: establish or acquire a coordinate system of the venue; scan the venue, obtain a picture of the calibration point, a location to be located a picture, and a position parameter and a posture parameter corresponding to the picture; correcting the picture of the to-be-positioned position based on the picture of the calibration point, the picture of the position to be located, the position parameter, and the attitude parameter The position and attitude parameters.
- the computer programs may be implemented as one or more programs running on one or more processors (e.g. as one or more programs running on one or more microprocessors), as firmware, or as almost any combination thereof; in accordance with the present disclosure, designing the circuits and/or writing the code for the software and/or firmware is within the skill of a person skilled in the art. For example, if the user determines that speed and accuracy are paramount, the user may select a mainly hardware and/or firmware medium; if flexibility is paramount, the user may select a mainly software implementation; or, alternatively, the user may select some combination of hardware, software and/or firmware.
- signal bearing media include, but are not limited to, the following: recordable type media, such as floppy disks, hard drives, compact discs (CDs), digital video discs (DVDs), digital tapes, computer memories, and the like; and transmission type media, such as Digital and/or analog communication media (eg, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
- a typical data processing system generally includes one or more of: a system unit housing; a video display device; memory, such as volatile and non-volatile memory; processors, such as microprocessors and digital signal processors; computing entities, such as operating systems, drivers, graphical user interfaces and applications; one or more interactive devices, such as a trackpad or touch screen; and/or a control system including a feedback loop and control motors (e.g. feedback for sensing position and/or rate, and control motors for moving and/or adjusting components and/or quantities).
- a typical data processing system may be implemented using any suitable commercially available components, such as those commonly found in data computing/communication and/or network computing/communication systems.
Claims (29)
- A method for mapping a site, comprising: establishing or acquiring a coordinate system of the site; scanning the site to acquire pictures of calibration points, pictures of positions to be located, and position parameters and posture parameters corresponding to the pictures; and correcting the position parameters and posture parameters of the pictures of the positions to be located based on the pictures of the calibration points, the pictures of the positions to be located, the position parameters and the posture parameters.
- The method of claim 1, wherein the position parameters comprise an abscissa and an ordinate, and preferably a vertical coordinate, and the posture parameters comprise a heading angle, and preferably a pitch angle and a roll angle.
- The method of any one of claims 1-2, wherein the correcting step comprises: constructing a set of connection points, each connection point comprising a picture, the position parameter and posture parameter corresponding to that picture, and whether the picture corresponds to a calibration point; and correcting the position parameter and posture parameter of the picture of the position to be located based on the set of connection points.
- The method of claim 3, wherein the correcting step comprises: from the set of connection points, taking two connection points whose distance does not exceed a predetermined value as a connection, to establish a set of connections; for the two connection points included in each connection in the set of connections, calculating the connection confidence between the two connection points, and filtering out the connections whose connection confidence is above a predetermined threshold as a mapping connection set; and correcting the position parameter and posture parameter of the picture of the position to be located based on the mapping connection set.
- The method of claim 4, wherein the correcting step further comprises: performing a gradient descent method on the mapping connection set, wherein, when the initialization step of the gradient descent method is performed, the position parameters and posture parameters of the pictures of non-calibration-point connection points are used as the initial iteration parameters of the gradient descent method.
- The method of claim 5, wherein the correcting step further comprises: performing the gradient descent method until the iterative rate of change is below a predetermined threshold.
- The method of any one of claims 1-6, wherein multiple picture acquisitions are performed on some or all of the calibration points, and the position parameter and posture parameter corresponding to each picture acquisition are obtained.
- The method of any one of claims 1-7, further comprising: storing the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position parameters and posture parameters of the pictures of the calibration points, and the corrected position parameters and posture parameters of the pictures of the positions to be located in a database or a file to establish a map, and preferably storing the set of connections and/or the mapping connection set in the database or file.
- The method of any one of claims 1-8, wherein the coordinate system is a physical coordinate system.
- The method of claim 4, wherein the predetermined value is half the length or width of the picture.
- An automated guided vehicle for image acquisition, comprising: a base; a camera mounted on the base and configured to capture pictures of the area below the base; and a measurement assembly mounted on the base and configured to measure or calculate position parameters and posture parameters of the automated guided vehicle corresponding to the pictures.
- The automated guided vehicle of claim 11, further comprising a light-emitting device mounted on the base and configured to illuminate the area below the base for the camera to capture pictures.
- The automated guided vehicle of claim 11 or 12, further comprising a control device mounted on the base, the camera and the measurement assembly both being coupled to the control device, the control device being configured to control the vehicle to travel to marker points and to positions to be located so as to capture pictures of the marker points and pictures of the positions to be located.
- The automated guided vehicle of claim 13, further comprising a processing device coupled to the camera and the measurement assembly, the processing device correcting the position parameters and posture parameters of the pictures of the positions to be located based on the pictures, the position parameters and the posture parameters.
- The automated guided vehicle of claim 14, wherein the processing device is configured to correct the position parameter and posture parameter of the picture of the position to be located by: constructing a set of connection points, each connection point comprising a picture, the position parameter and posture parameter corresponding to that picture, and whether the picture corresponds to a calibration point; from the set of connection points, taking two connection points whose distance does not exceed a predetermined value as a connection, to establish a set of connections; for the two connection points included in each connection in the set of connections, calculating the connection confidence between the two connection points, and filtering out the connections whose connection confidence is above a predetermined threshold as a mapping connection set; and performing a gradient descent method on the mapping connection set until the iterative rate of change is below a predetermined threshold, wherein, when the initialization step of the gradient descent method is performed, the position parameters and posture parameters of the pictures of non-calibration-point connection points are used as the initial iteration parameters of the gradient descent method.
- The automated guided vehicle of any one of claims 11-15, further comprising a hood mounted on the base for softening the light emitted by the light-emitting device, the light-emitting device preferably being mounted around the hood.
- The automated guided vehicle of any one of claims 11-16, wherein the measurement assembly is an inertial navigation measurement assembly.
- The automated guided vehicle of any one of claims 11-17, wherein the position parameters comprise an abscissa and an ordinate, and preferably a vertical coordinate, and the posture parameters comprise a heading angle, and preferably a pitch angle and a roll angle.
- The automated guided vehicle of any one of claims 11-18, wherein the measurement assembly comprises a laser SLAM measurement device and/or a visual SLAM measurement device.
- The automated guided vehicle of claim 14 or 15, wherein the processing device is configured to store the coordinate system, the pictures of the calibration points, the pictures of the positions to be located, the position parameters and posture parameters of the pictures of the calibration points, and the corrected position parameters and posture parameters of the pictures of the positions to be located in a database or a file to establish a map, and preferably to store the set of connections and/or the mapping connection set in the database or file.
- An image acquisition and processing system, comprising: the automated guided vehicle of claim 11; and a processing device coupled to the camera and the measurement assembly, the processing device correcting the position parameters and posture parameters of the pictures based on the pictures, the position parameters and the posture parameters.
- The image acquisition and processing system of claim 20, wherein the processing device is configured to perform the mapping method of any one of claims 1-10.
- A mapping and positioning system for an automated guided vehicle, comprising: a camera arranged to capture images below the automated guided vehicle; a light-emitting device configured to illuminate the area below the automated guided vehicle; an inertial navigation measurement assembly configured to measure position parameters and posture parameters of the automated guided vehicle; and a processing device, the camera and the inertial navigation measurement assembly both being coupled to the processing device, the control device being configured to correct the position parameters and posture parameters of the pictures based on the images, the position parameters and the posture parameters.
- The mapping and positioning system of claim 22, wherein the processing device is configured to perform the mapping method of any one of claims 1-10.
- An apparatus for mapping a site, comprising: means configured to establish or acquire a coordinate system of the site; means configured to scan the site and acquire pictures of calibration points, pictures of a plurality of positions to be located, and position parameters and posture parameters corresponding to the pictures; and means configured to correct the position parameters and posture parameters of the pictures of the positions to be located based on the pictures, the position parameters and the posture parameters.
- A positioning method, comprising: loading or obtaining a map obtained by the method of any one of claims 1-10; capturing or obtaining a picture of a position to be located and the position parameter and posture parameter corresponding to the picture; and retrieving, from the map, the picture closest to the picture of the position to be located.
- The positioning method of claim 26, further comprising: using a phase correlation method to calculate the confidence, position parameter offset and posture parameter offset between the picture of the position to be located and the closest picture.
- The positioning method of claim 26, wherein, when the confidence calculated by the phase correlation method is below a preset value, the closest picture is discarded, and the picture closest to the picture of the position to be located and having a confidence above the preset value is retrieved anew.
- A computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the mapping method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019531677A JP6977921B2 (ja) | 2018-05-31 | 2019-02-21 | マッピング方法、画像収集処理システム及び測位方法 |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201820865300.X | 2018-05-31 | ||
CN201820865300 | 2018-05-31 | ||
CN201810551792 | 2018-05-31 | ||
CN201810551792.X | 2018-05-31 | ||
CN201811475564.5 | 2018-12-04 | ||
CN201822023605.9U CN211668521U (zh) | 2018-05-31 | 2018-12-04 | 用于图像采集的自动引导车、以及图像采集和处理*** |
CN201822023605.9 | 2018-12-04 | ||
CN201811475564.5A CN110006420B (zh) | 2018-05-31 | 2018-12-04 | 建图方法、图像采集和处理***和定位方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019154435A1 true WO2019154435A1 (zh) | 2019-08-15 |
Family
ID=67548786
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/075741 WO2019154435A1 (zh) | 2018-05-31 | 2019-02-21 | 建图方法、图像采集和处理***和定位方法 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019154435A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835173A (zh) * | 2015-05-21 | 2015-08-12 | 东南大学 | 一种基于机器视觉的定位方法 |
CN105437251A (zh) * | 2016-01-04 | 2016-03-30 | 杭州亚美利嘉科技有限公司 | 一种定位机器人位置的方法及装置 |
CN106643801A (zh) * | 2016-12-27 | 2017-05-10 | 纳恩博(北京)科技有限公司 | 一种定位准确度的检测方法及电子设备 |
CN107607110A (zh) * | 2017-07-29 | 2018-01-19 | 刘儿兀 | 一种基于图像和惯导技术的定位方法及*** |
CN107702714A (zh) * | 2017-07-31 | 2018-02-16 | 广州维绅科技有限公司 | 定位方法、装置及*** |
- 2019-02-21: WO PCT/CN2019/075741 patent/WO2019154435A1 (zh) | active | Application Filing
Legal Events
- ENP (Entry into the national phase): Ref document number: 2019531677; Country of ref document: JP; Kind code of ref document: A
- 121 (Ep: the epo has been informed by wipo that ep was designated in this application): Ref document number: 19751744; Country of ref document: EP; Kind code of ref document: A1
- NENP (Non-entry into the national phase): Ref country code: DE
- 32PN (Ep: public notification in the ep bulletin as address of the addressee cannot be established): Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/04/2021)
- 122 (Ep: pct application non-entry in european phase): Ref document number: 19751744; Country of ref document: EP; Kind code of ref document: A1