WO2019209169A1 - Precise positioning system


Info

Publication number
WO2019209169A1
WO2019209169A1 (PCT/SG2018/050205)
Authority
WO
WIPO (PCT)
Prior art keywords
landmark
sub
pose
camera
positioning system
Prior art date
Application number
PCT/SG2018/050205
Other languages
French (fr)
Inventor
Qinghua Xia
Original Assignee
Unitech Mechatronics Pte Ltd
Priority date
Filing date
Publication date
Application filed by Unitech Mechatronics Pte Ltd filed Critical Unitech Mechatronics Pte Ltd
Priority to PCT/SG2018/050205 priority Critical patent/WO2019209169A1/en
Priority to CN201880092763.XA priority patent/CN112074706A/en
Publication of WO2019209169A1 publication Critical patent/WO2019209169A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A precise positioning system that can achieve position accuracy of less than a few millimetres consists of a camera, an MCU, an imaging panel whose central area is made of optical filtering material and whose remaining area is made of diffusion material, an inertial measurement unit, and an altimeter. The camera captures either a passive or a projected landmark image and obtains its pose with respect to the landmark frame. If passive landmarks and landmark projectors are fixed at certain locations, their global poses are known. With known relationships between the positioning system and camera frames, and between the landmark and world frames, the global pose of the positioning system with respect to the world frame can be obtained. An altimeter can be used to get altitude information. With the altitude information and the global pose in the XY plane, the system's global pose in three-dimensional space can be determined.

Description

PRECISE POSITIONING SYSTEM
TECHNICAL FIELD
The present disclosure describes an imaging panel, a camera, an inertial measurement unit, an altimeter, and an MCU that together form a precise positioning system for obtaining its pose with respect to passive or projected landmarks.
BACKGROUND ART
A precise positioning system is essential for navigation of a mobile robot such as an unmanned aerial vehicle (UAV) or an unmanned ground vehicle (UGV). For applications that require mobile manipulation, positioning and orientation accuracy is crucial.
Positioning systems employing ultrasonic sensors, infrared sensors, laser range finders, wireless beacons, and vision exist in the market.
A problem associated with ultrasonic or infrared sensor based positioning systems is that they provide only position information, not orientation information. To navigate, a mobile robot therefore needs an additional sensor to obtain its orientation.
A problem associated with laser range finder based positioning systems is that the calculated position accuracy may drop in dynamic environments; in some scenarios, such a system cannot obtain its position at all.
There is a vision based positioning system that uses a camera to obtain its pose from landmarks on the floor. However, floor landmarks can be damaged easily, and some places do not allow landmarks on the floor. In addition, such a system cannot be used on a UAV for localization.
Position accuracy of vision based systems using the visual simultaneous localization and mapping (vSLAM) approach is affected by varying lighting conditions and dynamic environments; in some scenarios, such a system cannot obtain its position at all.
Wireless based positioning systems suffer from uncertainties of non-line-of-sight conditions and radio multi-path issues, which affect position accuracy.
As examples of landmark based systems, patent WO 2004/015369 A2 discloses a tracking, autocalibration, and map-building system with artificial landmarks on the ceiling as one of the positioning methods. Patent CN 102419178 A discloses a mobile robot positioning system and method based on irradiating infrared landmarks on the ceiling. Patent CN 102135429 A discloses an indoor positioning system based on passive landmarks on the ceiling. Patent WO 2008/013355 A1 discloses a system and method for calculating location using a combination of an odometer and irradiating infrared landmarks.
These systems employ cameras to capture images of either passive or irradiating artificial landmarks. The further the distance between a camera and a landmark, the lower the position accuracy. In addition, these landmarks do not contain sub-landmarks with different sizes and patterns to facilitate image recognition from different distances. Furthermore, the effective image recognition distance of a passive or irradiating artificial landmark is shorter than that of a projected landmark.
It is therefore the objective of the invention to provide a positioning system that can be used on both UAVs and UGVs, with deterministic and precise position accuracy.
SUMMARY
According to the invention, a precise positioning system that can achieve position accuracy of less than a few millimetres consists of an imaging panel, a camera, an inertial measurement unit, an altimeter, and an MCU.
According to the first aspect of the present invention, either a passive or projected landmark consists of sub-landmarks combining big, medium, and small 2D codes, with groups of solid and hollow circles and squares around the 2D codes.
According to the second aspect of the present invention, an imaging panel is positioned within the focal range of the camera. The central area of the imaging panel is made of light filtering material used to filter out the unwanted light spectrum, so the camera can see passive landmarks directly through the filter and obtain its pose with respect to the landmarks. The rest of the imaging panel is made of diffusion material used to capture the landmark image projected onto it.
According to the third aspect of the present invention, the imaging panel, camera, and MCU can be mounted on a mobile robot to obtain its global pose information while navigating.
According to the fourth aspect of the present invention, an altimeter on the mobile robot can be used to get its altitude information, while the inertial measurement unit, together with the mobile robot’s odometer, can be employed to estimate its location when no information can be obtained from either landmarks or projected landmarks.
According to the fifth aspect of the present invention, either passive or projected landmarks can be put on a pallet to facilitate precise alignment between the pallet and a mobile forklift for manipulation purpose.
According to the sixth aspect of the present invention, directional RFID tags can be put on a pallet or cabinet so that a mobile forklift can determine the pallet's or cabinet's rough pose, and then perform a precise manipulation task with the help of either passive or projected landmarks on it.
According to the seventh aspect of the present invention, landmarks or landmark projectors can be put on a person's jacket for a mobile robot to follow.
According to the eighth aspect of the present invention, a UAV is equipped with an imaging panel, a camera, an inertial measurement unit, light detection and ranging sensors, an infrared projector, an altimeter, and an MCU for navigation around a building.
According to the ninth aspect of the present invention, building luminaires can be used as reference positions for UAV navigation. Landmarks and landmark projectors can be mounted next to luminaires for UAV localization. A projector projects a landmark downwards for a camera on the UAV to capture.
According to the tenth aspect of the present invention, an altimeter on the UAV is used to get the UAV's altitude information, while the inertial measurement unit, together with light detection and ranging sensors, can be employed to estimate its pose when position and orientation information is not available from luminaires, landmarks or projected landmarks.
According to the eleventh aspect of the present invention, an infrared projector onboard the UAV is used to project light toward a luminaire to trigger its motion sensor, and the working condition of the luminaire is judged based on the brightness level variation.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 shows passive landmark, landmark projector, imaging panel and camera
FIG. 2 shows imaging panel and camera
FIG. 3 shows image processing procedure of the QR code image projected onto panel
FIG. 4 shows pose of projected QR code with respect to camera frame
FIG. 5 shows a scenario when landmark arrays are projected onto the floor
FIG. 6 shows construction of a UAV mounted with the positioning system
FIG. 7 shows UAV fleet for material delivery
FIG. 8 shows a mobile forklift for material handling
FIG. 9 shows a type of landmark construction
FIG. 10 shows a type of sub-landmark construction with big and small QR codes
FIG. 11 shows a type of sub-landmark construction with medium and small QR codes
FIG. 12 shows a type of sub-landmark construction with solid and hollow circles
FIG. 13 shows a type of sub-landmark construction with solid and hollow squares
FIG. 14 shows coordinate frames of components in the positioning system
FIG. 15 shows concept of QR code projection
FIG. 16 shows relationship between the ceiling and projected landmark in X direction
FIG. 17 shows relationship between the ceiling and projected landmark in Y direction
FIG. 18 shows construction of a pallet with landmarks
FIG. 19 illustrates pallet lifting using an autonomous mobile forklift
FIG. 20 shows a pallet with directional RFID tags
FIG. 21 shows a cabinet with directional RFID tags
FIG. 22 shows a robot following a person carrying a passive or projected landmark
FIG. 23 shows UAV navigating in a building with landmarks as reference
FIG. 24 shows UAV navigating with projected landmark
FIG. 25 shows UAV projecting infrared light to trigger motion sensor of a luminaire
DETAILED DESCRIPTION
As shown in FIG. 1, projector 100 projects a landmark onto imaging panel 102, which is located within the focal range of camera 101. Imaging panel 102 consists of optical filtering material at its central area and diffusion material over the rest of its area. Camera 101 can capture either the passive landmark image 103 through light filter 104, or the landmark image projected onto the diffusion area of imaging panel 102.
A landmark may be in the form of a 2D code, or any other image pattern that can be recognized and processed by an MCU.
As shown in FIG. 2, one or more landmark images may be projected onto panel 102, and camera 101 captures the images and transmits them to the MCU for processing.
FIG. 3 illustrates the MCU's image processing procedure for the projected QR code on the imaging panel. The captured image on the panel is first converted into a black and white image, and then the three squares at the edges of the image are identified. Based on the three identified squares, the local coordinate frame $O_q$ is assigned to the QR code, and the coordinates of the four corners D1, D2, D3, D4 can be obtained. The content of the QR code, which is unique and represents its relative location with respect to landmark frame $O_m$, can also be obtained.
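As a rough illustration of this procedure, the sketch below uses OpenCV's built-in QR detector to binarize the panel image and recover the four corners and the code content. It is a minimal approximation of the described MCU pipeline, not the patent's implementation; the function and variable names are assumptions.

```python
import cv2

def locate_projected_qr(panel_image):
    """panel_image: BGR image of the imaging panel (numpy array)."""
    gray = cv2.cvtColor(panel_image, cv2.COLOR_BGR2GRAY)
    # Convert to a black-and-white image first, as described.
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # detectAndDecode locates the finder squares internally and returns the
    # decoded payload plus the four corner coordinates D1..D4 in pixels.
    content, corners, _ = cv2.QRCodeDetector().detectAndDecode(bw)
    if corners is None:
        return None, None  # no complete QR code visible on the panel
    return content, corners.reshape(4, 2)  # content encodes location in O_m
```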
As shown in FIG. 4, based on the QR code coordinates obtained as illustrated in FIG. 3, the local pose of the image with respect to camera frame $O_c$ can be obtained. Once the landmarks are fixed in place, their global poses are known. With this information, the camera's global pose in the XY plane can be obtained.
FIG. 5 shows a scenario in which an array of landmarks is projected onto the floor for positioning.
As shown in FIG. 6, imaging panel 102 and camera 101 can be mounted on a UAV. When the UAV flies within the array of projected landmarks shown in FIG. 5, one or more landmark images will be projected onto the panel. Following the procedure illustrated in FIGs 3 and 4, the global pose of the UAV can be obtained for navigation purposes. An altimeter on the UAV can be used to get its height information with respect to the ground. With the height information and the global pose in the XY plane, the UAV's global pose in three-dimensional space can be determined. The inertial measurement unit, together with the mobile robot's odometer (possibly in the form of a visual odometer), can be employed to estimate its pose during periods when no information is available from the landmarks.
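As a small sketch of this fusion step, the planar pose recovered from the landmarks can be combined with the altimeter reading into a single 4x4 pose matrix; the function and variable names below are assumptions for illustration, not the patent's notation.

```python
import numpy as np

def pose_3d(x_w, y_w, yaw, altitude):
    """Planar pose from landmarks plus altimeter reading -> 4x4 world pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]    # heading in the XY plane
    T[:3, 3] = [x_w, y_w, altitude]  # altimeter supplies the Z component
    return T
```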
Based on the positioning system, a UAV fleet can be deployed indoors for speedy point-to-point material delivery, as illustrated in FIG. 7.
As shown in FIG. 8, the imaging panel, camera, and MCU can also be put on a UGV such as an autonomous forklift for localization, navigation, and precise manipulation.
FIG. 9 shows a type of landmark construction combining big, medium, and small QR codes, plus solid and hollow squares and circles.
FIG. 10 shows a type of sub-landmark that forms part of the landmark shown in FIG. 9. A big QR code is nested with a smaller QR code inside, and is surrounded by four small QR codes near the four corners of the big QR code. In the horizontal and vertical directions, solid or hollow circles and squares are placed between the four small QR codes.
FIG. 11 shows a type of sub-landmark that forms part of the landmark shown in FIG. 9. Four medium QR codes arranged in a two-row by two-column array are also surrounded by four small QR codes near the four corners of the array. In the horizontal and vertical directions, solid or hollow circles and squares are placed between the four small QR codes.
FIG. 12 shows a type of sub-landmark with eight solid or hollow circles, with a solid circle representing "1" and a hollow circle representing "0". The combination of solid and hollow circles represents the position of the sub-landmark in the X direction of the landmark frame. For example, counting from left to right, seven hollow circles followed by one solid circle represent binary "00000001", indicating its unique position.
FIG. 13 shows another type of sub-landmark with eight solid or hollow squares, with a solid square representing "1" and a hollow square representing "0". The combination of solid and hollow squares represents the location of the sub-landmark in the Y direction of the landmark frame. For example, counting from bottom to top, six hollow squares followed by two solid squares represent binary "00000011", indicating its unique position.
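A toy decoder for these circle and square groups might look as follows; reading the marks in the stated order and thresholding their dark-pixel fill is an assumption of this sketch, not a detail given in the text.

```python
def decode_group(fill_ratios, threshold=0.5):
    """fill_ratios: fraction of dark pixels inside each detected mark, ordered
    left-to-right for X groups or bottom-to-top for Y groups."""
    bits = ''.join('1' if r > threshold else '0' for r in fill_ratios)
    return int(bits, 2)  # e.g. "00000001" -> 1, "00000011" -> 3

# Example: seven hollow circles followed by one solid circle -> position 1
assert decode_group([0.1] * 7 + [0.9]) == 1
```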
FIG. 14 shows the assignment of the world, robot, camera, panel, and landmark coordinate frames $O_w$, $O_r$, $O_c$, $O_p$ and $O_m$. All the coordinate systems are right-handed: pointing the index finger of the right hand along the positive x-axis and curling the palm toward the positive y-axis gives the direction of the z-axis.
The world frame $O_w$ is fixed at a location; the robot frame $O_r$ is attached to the mobile robot; the origin of the camera frame $O_c$ is placed at the centre of the camera's focal lens and is attached to the mobile robot; the origin of the panel frame $O_p$ is placed at the top centre of the imaging panel and is also attached to the mobile robot; the landmark frame $O_m$ can be on the ceiling.
The pose of an object with respect to a reference frame O can be represented by a homogeneous transformation matrix

$$T = \begin{bmatrix} R_{3\times 3} & p_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}$$

where the upper-left 3x3 submatrix R of the 4x4 matrix T represents the relative orientation of the object with respect to the reference frame O, and the upper-right 3x1 vector p represents the object's position with respect to the same frame.
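In code, this representation and its closed-form inverse (used when solving the frame chain below) can be sketched as follows, assuming numpy arrays for R (3x3) and p (3-vector).

```python
import numpy as np

def make_T(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and position p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def inv_T(T):
    """Closed-form inverse: R^T in the rotation block, -R^T p in the position."""
    R, p = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ p)
```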
The homogeneous transformation matrices of the landmark frame with respect to the world frame, each sub-landmark pose with respect to the landmark frame, the robot frame with respect to the world frame, the camera frame with respect to the robot frame, the panel frame with respect to the camera frame, and each sub-landmark pose with respect to the panel frame are denoted as $T_{w,m}$, $T_{m,q}$, $T_{w,r}$, $T_{r,c}$, $T_{c,p}$, and $T_{p,q}$ respectively. With this arrangement, the following equation holds:

$$T_{w,m}\,T_{m,q} = T_{w,r}\,T_{r,c}\,T_{c,p}\,T_{p,q}$$
The objective of the positioning system is to obtain the pose of a mobile robot with respect to the world coordinate frame, $T_{w,r}$, which from the equation above is

$$T_{w,r} = T_{w,m}\,T_{m,q}\,T_{p,q}^{-1}\,T_{c,p}^{-1}\,T_{r,c}^{-1}$$
Once fixed on the ceiling, the pose of each landmark with respect to the world frame and the pose of each sub-landmark with respect to the landmark frame are known, and are represented as $T_{w,m}$ and $T_{m,q}$.
Since the camera is attached to the mobile robot, its pose with respect to the robot frame is known and represented as $T_{r,c}$.
FIG. 15 illustrates the relationship between the QR code landmark on the ceiling and its corresponding projected landmark on the imaging panel. The projected QR code image is captured by the camera and then processed by the MCU to get its four corners' coordinates expressed in pixels. The four corners are denoted D1, D2, D3, and D4. For example, if D1's and D2's positions in the image are expressed as $[x_{c,D1}\ \ y_{c,D1}\ \ 0]^T$ and $[x_{c,D2}\ \ y_{c,D2}\ \ 0]^T$, then their positions on the imaging panel with respect to the camera frame can be obtained as

$$P_{c,D1} = \begin{bmatrix} -x_{c,D1}\,d_{c,p}/f_c & -y_{c,D1}\,d_{c,p}/f_c & d_{c,p} \end{bmatrix}^T$$

$$P_{c,D2} = \begin{bmatrix} -x_{c,D2}\,d_{c,p}/f_c & -y_{c,D2}\,d_{c,p}/f_c & d_{c,p} \end{bmatrix}^T$$

where the focal length of the camera is denoted $f_c$, and the distance between the panel top surface and the focal lens is denoted $d_{c,p}$. It is assumed that the orientations of the panel and robot frames are the same.
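A sketch of this back-projection, using the document's symbols $f_c$ and $d_{c,p}$ as function parameters; the sign convention follows the reconstructed equations above and should be treated as an assumption.

```python
import numpy as np

def corner_on_panel(x_pix, y_pix, f_c, d_cp):
    """Pixel corner (x_pix, y_pix) -> position P_c,Di in the camera frame."""
    scale = d_cp / f_c  # metric displacement per pixel unit at the panel plane
    return np.array([-x_pix * scale, -y_pix * scale, d_cp])
```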
At this moment, the robot's orientation with respect to the world frame can be expressed as the rotation matrix

$$R_{w,r} = R_z(\psi)\,R_y(\theta)\,R_x(\phi)$$

composed from the roll ($\phi$), pitch ($\theta$), and yaw ($\psi$) angles of the robot, which are measured by the onboard gyroscope.
It is assumed that the orientations of the camera and robot frames are the same, and that the origin of the camera frame is directly above the origin of the robot frame. To get the pose of the ceiling QR code with respect to the panel frame, $T_{p,q}$, first multiply the inverse of $R_{w,r}$ by $P_{c,D1}$ to transform the position vector of D1 to the frame $O_{p'}$, which has the same orientation as the ceiling landmark frame $O_m$; expressed in the camera frame,

$$P_{c,D1(O_{p'})} = R_{w,r}^{-1}\,P_{c,D1}$$
FIG. 16 and FIG. 17 show the relationship between a ceiling QR code in landmark frame $O_m$ and its projected QR code in frame $O_{p'}$. Based on $P_{c,D1(O_{p'})}$, the position of the ceiling QR code corner d1 expressed in frame $O_{p'}$ can be obtained by similar triangles as

$$x_{q,d1} = x_{m,l} + \left(x_{q,D1} - x_{m,l}\right)\frac{d_{q,l}}{d_{q,l} + D_{p,q}}, \qquad y_{q,d1} = y_{m,l} + \left(y_{q,D1} - y_{m,l}\right)\frac{d_{q,l}}{d_{q,l} + D_{p,q}}$$

Similarly, for corner d2,

$$x_{q,d2} = x_{m,l} + \left(x_{q,D2} - x_{m,l}\right)\frac{d_{q,l}}{d_{q,l} + D_{p,q}}, \qquad y_{q,d2} = y_{m,l} + \left(y_{q,D2} - y_{m,l}\right)\frac{d_{q,l}}{d_{q,l} + D_{p,q}}$$

where $(x_{q,d1}, y_{q,d1})$ and $(x_{q,d2}, y_{q,d2})$ are the coordinates of ceiling QR code corners d1 and d2, $(x_{m,l}, y_{m,l})$ is the coordinate of the projector light source in the landmark frame, and $(x_{q,D1}, y_{q,D1})$ is the coordinate of the projected QR code corner expressed in frame $O_{p'}$; $d_{q,l}$ is the vertical distance between the light source and the landmark plane, and $D_{p,q}$ is the vertical distance between landmark frame $O_m$ and $O_{p'}$.
Based on $P_{p',D1}$ and $P_{p',D2}$, the angle $\theta_{(d1,d2)}$ of the ceiling QR code with respect to the X axis of frame $O_{p'}$ can be obtained as

$$\theta_{(d1,d2)} = \operatorname{atan2}\!\left(y_{q,d2} - y_{q,d1},\; x_{q,d2} - x_{q,d1}\right)$$

thus the homogeneous transformation matrix of the ceiling QR code with respect to frame $O_{p'}$ can be expressed as

$$T_{p',q} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & x_{q,d1} \\ \sin\theta & \cos\theta & 0 & y_{q,d1} \\ 0 & 0 & 1 & D_{p,q} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
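The corner-to-angle step can be sketched as follows; the matrix layout mirrors the reconstruction above and is therefore an assumption rather than the patent's verbatim formula.

```python
import numpy as np

def T_pprime_q(d1, d2, D_pq):
    """d1, d2: (x, y) corners in frame O_p'; D_pq: vertical offset to O_m."""
    theta = np.arctan2(d2[1] - d1[1], d2[0] - d1[0])
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0, d1[0]],
                     [s,   c,  0.0, d1[1]],
                     [0.0, 0.0, 1.0, D_pq],
                     [0.0, 0.0, 0.0, 1.0]])
```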
And the homogeneous transformation matrix of the ceiling QR code with respect to frame $O_p$ can be expressed as

$$T_{p,q} = T_{p,p'}\,T_{p',q}$$

where $T_{p,p'}$ relates the panel frame $O_p$ to the frame $O_{p'}$ and is obtained from the robot's measured orientation $R_{w,r}$.
Following a similar derivation sequence, the poses of the sub-landmarks shown in FIG. 10 to FIG. 13 with respect to frame $O_p$ can be obtained.
With one or more obtained poses of sub-landmarks, the pose of the robot with respect to the world frame can be obtained as

$$T_{w,r} = T_{w,m}\,T_{m,q}\,T_{p,q}^{-1}\,T_{c,p}^{-1}\,T_{r,c}^{-1}$$
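Putting the chain together, a sketch of the final solve for $T_{w,r}$ follows; all inputs are assumed to be 4x4 numpy pose matrices obtained as described, and the helper names are illustrative.

```python
import numpy as np

def inv_T(T):
    """Closed-form inverse of a 4x4 homogeneous transform."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

def robot_pose_in_world(T_wm, T_mq, T_pq, T_cp, T_rc):
    # T_wm @ T_mq = T_wr @ T_rc @ T_cp @ T_pq  =>  solve for T_wr
    return T_wm @ T_mq @ inv_T(T_pq) @ inv_T(T_cp) @ inv_T(T_rc)
```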
If a sub-landmark shown in FIG. 12, with eight solid or hollow circles, is projected onto the imaging panel, its pose $T_{p,q}$ can also be obtained following the derivation procedure described above, and the pose of the robot with respect to the world frame $T_{w,r}$ can be obtained. If only part of the sub-landmark's eight solid or hollow circles is projected onto the imaging panel, the exact pose of the sub-landmark $T_{p,q}$ cannot be obtained, but its orientation can still be obtained; thus the orientation of the robot with respect to the world frame can be obtained.
Similarly, for a sub-landmark shown in FIG. 13, either the pose or the orientation of the robot with respect to the world frame $T_{w,r}$ can be obtained.
FIG. 18 shows the construction of a pallet with either passive or projected landmarks placed at the legs of the pallet. As shown in FIG. 19, the poses of the left, middle, and right landmarks on the pallet with respect to a mobile forklift frame can be obtained as $T_{r,ql}$, $T_{r,qm}$ and $T_{r,qr}$. Together with the distance d1 between the left edge of the mobile forklift and the left landmark, and the distance d2 between the right edge of the mobile forklift and the right landmark, the forklift can use these parameters to align itself with the pallet and perform a precise pallet lifting task.
FIG. 20 shows a pallet mounted with directional RFID tags, with one RFID tag restricting the tag reading zone to the horizontal plane only, and three RFID tags restricting the tag reading zones to the left, middle, and right vertical planes only.
An autonomous forklift equipped with an RFID reader will know the pallet's rough pose based on the readings from the RFID tags on the pallet. Combining this with either passive or projected landmarks on the pallet, the forklift is able to identify the pallet's rough pose first and then perform a precise pallet lifting task.
FIG. 21 shows a cabinet mounted with directional RFID tags, with one RFID tag restricting the tag reading zone to the horizontal plane only, and three RFID tags restricting the tag reading zones to the left, middle, and right vertical planes only. An autonomous forklift equipped with an RFID reader will know the cabinet's rough pose based on the readings from the RFID tags on the cabinet.
Combining this with either passive or projected landmarks on the cabinet or the pallet below it, the forklift is able to identify the cabinet's rough pose first and then perform a precise cabinet lifting task.
FIG. 22 shows a concept in which a person wears either passive or projected landmarks, and a robot with the positioning system calculates its pose with respect to the landmarks and follows the person in front of it.
FIG. 23 illustrates a UAV navigating in a building with luminaires, passive landmarks, and landmark projectors as reference locations.
FIG. 24 shows a scenario in which a UAV obtains its pose with respect to the projector's landmark based on the projected landmark on the UAV's imaging panel, and navigates in the building to carry out inspection and surveillance tasks.
FIG. 25 illustrates a scenario in which a UAV carrying an infrared projector projects infrared light to trigger the motion sensor of a luminaire. The onboard camera can be used to detect whether the luminaire's brightness level changes after triggering. If the brightness level is adjusted up, the luminaire is working well; if not, the faulty luminaire needs to be changed. The UAV can detect and record the working condition to facilitate building lighting maintenance.
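A simple way to implement this brightness check is to compare the luminaire's mean intensity in frames captured before and after triggering; the region of interest and gain threshold below are illustrative assumptions, not values from the patent.

```python
import cv2

def luminaire_responds(frame_before, frame_after, roi, min_gain=1.2):
    """roi: (x, y, w, h) window around the luminaire in both frames."""
    x, y, w, h = roi
    before = cv2.cvtColor(frame_before[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY).mean()
    after = cv2.cvtColor(frame_after[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY).mean()
    return after > before * min_gain  # brightness adjusted up => working
```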
With luminaires, passive landmarks, and landmark projectors as position references, the UAV can also carry out building surveillance work using the onboard inertial measurement unit, light detection and ranging sensors, and an altimeter.

Claims

1. A precise positioning system comprising:
an imaging panel with its central area made of optical filtering material to remove unwanted light spectrum, and the rest of the area made of diffusion material for projected artificial landmark image formation;
a camera to either capture passive artificial landmark image through the optical filter of the imaging panel, or to capture the projected artificial landmark image formed on the imaging panel;
an inertial measurement unit to measure the system’s pose;
an altimeter to measure the system’s altitude;
and an MCU to obtain the system’s global pose with respect to world frame.
2. An artificial landmark, either in passive form or projected with a landmark projector,
comprising:
sub-landmarks combining big, medium, and small 2D codes, and groups of solid and hollow circles and squares around the 2D codes;
a small 2D code can be either nested inside or put outside of a big 2D code;
at horizontal and vertical directions, groups of solid or hollow circles and squares are placed around 2D codes;
solid or hollow circles are arranged in groups, with a solid circle representing "1" and a hollow circle representing "0", and the combination of solid and hollow circles within a group represents the location of the sub-landmark in the horizontal direction; for example, counting from left to right, seven hollow circles followed by one solid circle represent binary "00000001", indicating the horizontal location of this sub-landmark;
solid or hollow squares are arranged in groups, with a solid square representing "1" and a hollow square representing "0", and the combination of solid and hollow squares within a group represents the location of the sub-landmark in the vertical direction; for example, counting from bottom to top, six hollow squares followed by two solid squares represent binary "00000011", indicating the vertical location of this sub-landmark.
3. For the system of claims 1 and 2, if a 2D code type sub-landmark of a passive landmark at a fixed location is captured by the camera through the imaging panel’s light filter:
the sub-landmark’s pose with respect to the camera frame can be obtained, and with known relationships between positioning system and camera frames, sub-landmark and landmark frames, landmark and world frames, the global pose of the positioning system with respect to world frame will be obtained;
with known sub-landmark dimension, obtained sub-landmark image dimension in pixel, and camera’s focal length, the distance between the positioning system and the passive landmark will be obtained.
4. For the system of claims 1 and 2, if the circle type sub-landmark of a passive landmark at a fixed location is captured by the camera through the imaging panel’s light filter:
the sub-landmark’s pose with respect to the camera frame can be obtained, and with known relationships between positioning system and camera frames, sub-landmark and landmark frames, landmark and world frames, the global pose of the positioning system with respect to world frame will be obtained;
the combination of solid and hollow circles within a group represents the position of the sub-landmark in the X direction of the landmark frame; for example, counting from left to right, seven hollow circles followed by one solid circle in the group represent binary "00000001", indicating its unique position;
with known sub-landmark dimension, obtained sub-landmark image dimension in pixel, and camera’s focal length, the distance between the positioning system and the passive landmark will be obtained;
if only part of the sub-landmark’s solid or hollow circles within a group is captured, although exact pose of the sub-landmark is unable to be obtained, its orientation can still be obtained, thus the global orientation of the positioning system with respect to world frame can be obtained.
5. For the system of claims 1 and 2, if the square type sub-landmark of a passive landmark at a fixed location is captured by the camera through the imaging panel’s light filter:
the sub-landmark’s pose with respect to the camera frame can be obtained, and with known relationships between positioning system and camera frames, sub-landmark and landmark frames, landmark and world frames, the global pose of the positioning system with respect to world frame will be obtained;
the combination of solid and hollow squares within a group represents the position of the sub-landmark in the Y direction of the landmark frame; for example, counting from bottom to top, six hollow squares followed by two solid squares represent binary "00000011", indicating its unique location;
with known sub-landmark dimension, obtained sub-landmark image dimension in pixel, and camera’s focal length, the distance between the positioning system and the passive landmark will be obtained;
if only part of the sub-landmark’s solid or hollow squares within a group is captured, although exact pose of the sub-landmark is unable to be obtained, its orientation can still be obtained, thus the global orientation of the positioning system with respect to world frame can be obtained.
6. For the system of claims 1 and 2, if a 2D code type sub-landmark is projected onto the
diffusion area of the imaging panel and captured by the camera:
the projected sub-landmark’s pose with respect to the camera frame can be obtained, and with known relationships between the positioning system and camera frames, the projected sub-landmark and landmark frames, and the landmark and world frames, the global pose of the positioning system with respect to the world frame will be obtained;
with known sub-landmark dimension, obtained projected sub-landmark image dimension in pixel, distance between the imaging panel and camera lens, and camera’s focal length, the distance between the positioning system and the landmark will be obtained.
7. For the system of claims 1 and 2, if the circle type sub-landmark is projected onto the
diffusion area of the imaging panel and captured by the camera:
the projected sub-landmark’s pose with respect to the camera frame can be obtained, and with known relationships between the positioning system and camera frames, the projected sub-landmark and landmark frames, and the landmark and world frames, the global pose of the positioning system with respect to the world frame will be obtained;
the combination of solid and hollow circles within a group represents the position of the sub-landmark in the X direction of the landmark frame; for example, counting from left to right, seven hollow circles followed by one solid circle represent binary "00000001", indicating its unique position;
with known sub-landmark dimension, obtained projected sub-landmark image dimension in pixels, the distance between the imaging panel and the camera lens, and the camera’s focal length, the distance between the positioning system and the landmark will be obtained;
if only part of the sub-landmark’s solid or hollow circles within a group is captured, although exact pose of the sub-landmark is unable to be obtained, its orientation can still be obtained, thus the global orientation of the positioning system with respect to world frame can be obtained.
8. For the system of claims 1 and 2, if the square type sub-landmark is projected onto the
diffusion area of the imaging panel and captured by the camera:
the projected sub-landmark’s pose with respect to the camera frame can be obtained, and with known relationships between the positioning system and camera frames, the projected sub-landmark and landmark frames, and the landmark and world frames, the global pose of the positioning system with respect to the world frame will be obtained; the combination of solid and hollow squares within a group represents the position of the sub-landmark in the Y direction of the landmark frame; for example, counting from bottom to top, six hollow squares followed by two solid squares represent binary "00000011", indicating its unique location;
with known sub-landmark dimension, obtained projected sub-landmark image dimension in pixel, distance between the imaging panel and camera lens, and camera’s focal length, the distance between the positioning system and the landmark will be obtained;
if only part of the sub-landmark’s solid or hollow squares within a group is captured, although exact pose of the sub-landmark is unable to be obtained, its orientation can still be obtained, thus the global orientation of the positioning system with respect to world frame can be obtained.
9. For the landmark of claim 2, the combination of sub-landmarks with big, medium and small 2D codes, and groups of solid and hollow circles and squares, will increase the chance of obtaining a landmark’s pose under various scenarios;
for example, at a certain distance, only part of a big 2D code is captured and cannot be processed by an MCU, but a complete image of a medium or small 2D code can still be captured to obtain its pose;
at another distance, a small 2D code image is not sharp, but a big or medium 2D code image is clear enough to be used to obtain its pose;
under certain scenarios, none of the 2D code images is sharp enough to be used to obtain its pose, but the solid or hollow circle or square image is still clear enough to obtain its pose, or at least its orientation if only part of the solid or hollow circle or square image is captured; with known sub-landmark dimension, obtained projected sub-landmark image dimension in pixels, the distance between the imaging panel and the camera lens, and the camera’s focal length, the distance between the positioning system and the landmark will be obtained;
if one or more poses of sub-landmarks can be obtained at the same time, combining the pose information will yield more reliable pose information with respect to the world frame.
10. For the system of claims 1-9, an altimeter on a mobile robot can be used to get its altitude information, while the inertial measurement unit, together with the mobile robot’s odometer, can be employed to estimate the system’s location with dead reckoning or other algorithms when no information can be obtained from either landmarks or projected landmarks.
11. For the system of claims 1-10, passive landmarks or landmark projectors are put on a pallet or cabinet, and the positioning system on an autonomous forklift calculates its own pose and distance with respect to the pallet or cabinet based on the captured passive or projected landmark image on the imaging panel, and then performs a precise pallet or cabinet lifting task.
12. For the system of claims 1-11, directional RFID tags, passive landmarks or landmark
projectors are put on a pallet or cabinet:
one RFID tag restricting the tag reading zone to the horizontal plane only, and three RFID tags restricting the tag reading zones to the left, middle, and right vertical planes only, so that an autonomous forklift equipped with an RFID reader will know the pallet’s or cabinet’s rough pose and distance based on the readings from the RFID tags;
once the forklift knows its rough pose and distance with respect to the pallet or cabinet based on the directional RFID tags, it then calculates a more precise pose and distance with respect to it based on the captured passive or projected landmark image on the imaging panel, and then performs a precise pallet or cabinet lifting task.
13. For the system of claims 1-10, passive landmarks or landmark projectors are mounted on top of a shelf, a mobile robot obtains its pose and distance with respect to the shelf, and then performs precise material loading and unloading tasks.
14. For the system of claims 1-10, a passive landmark or landmark projector is mounted on a person or a mobile device, and a mobile robot behind captures either the passive or projected landmark image, obtains its pose and distance with respect to the person or mobile device, and then performs a following task.
15. For the system of claims 1-10, passive landmarks or landmark projectors are mounted on the ceiling of a building, and a UAV with the positioning system obtains its pose with respect to the world frame and navigates in the building to carry out inspection and surveillance tasks.
16. For the system of claims 1-9 and 15, an altimeter on the UAV is used to get its altitude information, while the inertial measurement unit and visual odometer, together with light detection and ranging sensors, can be employed to estimate its pose when reference position and orientation information is not available from luminaires, landmarks or projected landmarks.
17. For the system of claims 1-9, 15 and 16, a UAV carrying an infrared projector projects infrared light to trigger the motion sensor of a luminaire, and the onboard camera can be used to detect whether the luminaire’s brightness level changes after triggering; if the brightness level is adjusted up, the luminaire is working; if not, the faulty luminaire needs to be changed, which helps facilitate building lighting maintenance work.
18. For the system of claims 1 and 2, passive landmarks and directional RFID tags are put on the floor at known fixed locations; the RFID reader on a mobile robot reads the RFID tags to get its rough global pose, the camera on the mobile robot captures the passive landmark image to get a more precise global pose, and the robot then navigates from one place to another.
PCT/SG2018/050205 2018-04-28 2018-04-28 Precise positioning system WO2019209169A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/SG2018/050205 WO2019209169A1 (en) 2018-04-28 2018-04-28 Precise positioning system
CN201880092763.XA CN112074706A (en) 2018-04-28 2018-04-28 Accurate positioning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SG2018/050205 WO2019209169A1 (en) 2018-04-28 2018-04-28 Precise positioning system

Publications (1)

Publication Number Publication Date
WO2019209169A1 2019-10-31

Family

ID=68294489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2018/050205 WO2019209169A1 (en) 2018-04-28 2018-04-28 Precise positioning system

Country Status (2)

Country Link
CN (1) CN112074706A (en)
WO (1) WO2019209169A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112875578A (en) * 2020-12-28 2021-06-01 深圳市易艾得尔智慧科技有限公司 Unmanned forklift control system
JP2022007511A (en) * 2020-06-26 2022-01-13 株式会社豊田自動織機 Recognition device, recognition method, and marker
JP7466813B1 (en) 2023-04-07 2024-04-12 三菱電機株式会社 Automatic connection mechanism, autonomous vehicle, and automatic connection method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004015369A2 (en) * 2002-08-09 2004-02-19 Intersense, Inc. Motion tracking system and method
JP2004328496A (en) * 2003-04-25 2004-11-18 Toshiba Corp Image processing method
CN102135429A (en) * 2010-12-29 2011-07-27 东南大学 Robot indoor positioning and navigating method based on vision
CN102419178A (en) * 2011-09-05 2012-04-18 中国科学院自动化研究所 Mobile robot positioning system and method based on infrared road sign
US20150153639A1 (en) * 2012-03-02 2015-06-04 Mitsubishi Paper Mills Limited Transmission type screen
CN105184343A (en) * 2015-08-06 2015-12-23 吴永 Composite bar code
US20160078335A1 (en) * 2014-09-15 2016-03-17 Ebay Inc. Combining a qr code and an image
CN107450540A (en) * 2017-08-04 2017-12-08 山东大学 Indoor mobile robot navigation system and method based on infrared road sign

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005124431A1 (en) * 2004-06-18 2005-12-29 Pioneer Corporation Information display device and navigation device
CN201548685U (en) * 2009-11-26 2010-08-11 山东大学 Assisting navigation device for ceiling projector
CN104641315B (en) * 2012-07-19 2017-06-30 优泰机电有限公司 3D tactile sensing apparatus
CN104766309A (en) * 2015-03-19 2015-07-08 江苏国典艺术品保真科技有限公司 Plane feature point navigation and positioning method and device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004015369A2 (en) * 2002-08-09 2004-02-19 Intersense, Inc. Motion tracking system and method
JP2004328496A (en) * 2003-04-25 2004-11-18 Toshiba Corp Image processing method
CN102135429A (en) * 2010-12-29 2011-07-27 东南大学 Robot indoor positioning and navigating method based on vision
CN102419178A (en) * 2011-09-05 2012-04-18 中国科学院自动化研究所 Mobile robot positioning system and method based on infrared road sign
US20150153639A1 (en) * 2012-03-02 2015-06-04 Mitsubishi Paper Mills Limited Transmission type screen
US20160078335A1 (en) * 2014-09-15 2016-03-17 Ebay Inc. Combining a qr code and an image
CN105184343A (en) * 2015-08-06 2015-12-23 吴永 Composite bar code
CN107450540A (en) * 2017-08-04 2017-12-08 山东大学 Indoor mobile robot navigation system and method based on infrared road sign

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
COLIOS C. I. ET AL.: "A framework for visual landmark identification based on projective and point-permutation invariant vectors", ROBOTICS AND AUTONOMOUS SYSTEMS, vol. 35, no. 1, 30 April 2001 (2001-04-30), pages 37 - 51, XP004231364, [retrieved on 20180626], DOI: 10.1016/S0921-8890(00)00129-9 *
TIAN H.: "QR Code and Its Applications on Robot Self-localization", A THESIS IN PATTERN RECOGNITION AND INTELLIGENCE SYSTEM, 30 June 2014 (2014-06-30), pages 1 - 72 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022007511A (en) * 2020-06-26 2022-01-13 株式会社豊田自動織機 Recognition device, recognition method, and marker
JP7351265B2 (en) 2020-06-26 2023-09-27 株式会社豊田自動織機 Recognition device and recognition method
CN112875578A (en) * 2020-12-28 2021-06-01 深圳市易艾得尔智慧科技有限公司 Unmanned forklift control system
CN112875578B (en) * 2020-12-28 2024-05-07 深圳鹏鲲智科技术有限公司 Unmanned forklift control system
JP7466813B1 (en) 2023-04-07 2024-04-12 三菱電機株式会社 Automatic connection mechanism, autonomous vehicle, and automatic connection method

Also Published As

Publication number Publication date
CN112074706A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
US10930015B2 (en) Method and system for calibrating multiple cameras
CN108571971B (en) AGV visual positioning system and method
CN109242890B (en) Laser speckle system and method for aircraft
TWI827649B (en) Apparatuses, systems and methods for vslam scale estimation
US11448762B2 (en) Range finder for determining at least one geometric information
CN102419178B (en) Mobile robot positioning system and method based on infrared road sign
CN107687855B (en) Robot positioning method and device and robot
US11614743B2 (en) System and method for navigating a sensor-equipped mobile platform through an environment to a destination
CN110009682B (en) Target identification and positioning method based on monocular vision
Khazetdinov et al. Embedded ArUco: a novel approach for high precision UAV landing
EP3113147B1 (en) Self-location calculating device and self-location calculating method
JP2014013146A5 (en)
WO2019209169A1 (en) Precise positioning system
CN114415736B (en) Multi-stage visual accurate landing method and device for unmanned aerial vehicle
CN106370160A (en) Robot indoor positioning system and method
CN107436422A (en) A kind of robot localization method based on infrared lamp solid array
CN113390426A (en) Positioning method, positioning device, self-moving equipment and storage medium
CN106403926B (en) Positioning method and system
JP5874252B2 (en) Method and apparatus for measuring relative position with object
JP2010078466A (en) Method and system for automatic marker registration
CN100582653C (en) System and method for determining position posture adopting multi- bundle light
Mutka et al. A low cost vision based localization system using fiducial markers
Cucchiara et al. Efficient Stereo Vision for Obstacle Detection and AGV Navigation.
Karakaya et al. A hybrid indoor localization system based on infra-red imaging and odometry
CN116710975A (en) Method for providing navigation data for controlling a robot, method and device for manufacturing at least one predefined point-symmetrical area

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916399

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18916399

Country of ref document: EP

Kind code of ref document: A1