CN212044822U - Concave robot head with laser radar - Google Patents

Concave robot head with laser radar

Info

Publication number
CN212044822U
CN212044822U (application CN202020145422.9U)
Authority
CN
China
Prior art keywords
steering rod
robot
laser radar
supports
base
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202020145422.9U
Other languages
Chinese (zh)
Inventor
史超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guoxin Taifu Technology Co., Ltd.
Original Assignee
Shenzhen Guoxin Taifu Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guoxin Taifu Technology Co., Ltd.
Priority to CN202020145422.9U
Application granted
Publication of CN212044822U

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The utility model discloses a concave robot head with a laser radar, comprising a head body, an embedded processor and a sensor group. The head body comprises a base, a neck connecting piece, two supports, a connecting seat and a rotating shaft; the neck connecting piece is rotatably connected to the bottom of the base; the two supports are respectively arranged on the left and right sides of the top of the base; the connecting seat is arranged between the two supports, and the rotating shaft is rotatably connected to the connecting seat. The sensor group comprises a laser radar and an auxiliary camera: the laser radar is arranged on the steering rod, and the auxiliary camera is arranged on the front side of the head body. The utility model acquires geometric data within a 360° range around the robot through the laser radar and acquires specific environmental data in front of the robot through the auxiliary camera; an operator can use a user interface to view the scene from any vantage point in the resulting three-dimensional map, and can also precisely control the robot's actions according to the specific environment in front of it.

Description

Concave robot head with laser radar
Technical Field
The utility model relates to the technical field of robots, and in particular to a concave robot head equipped with a laser radar.
Background
A laser radar (lidar) is a radar system that detects characteristic quantities of a target, such as its position and velocity, by emitting a laser beam. Its working principle is to emit a detection signal (a laser beam) toward the target and compare the received signal (the target echo) reflected from the target with the emitted signal; after appropriate processing, relevant information about the target can be obtained, such as distance, azimuth, height, speed, attitude and even shape, so that targets such as aircraft and missiles can be detected, tracked and identified.
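As a minimal, purely illustrative example of the time-of-flight relationship described above (the numbers are the editor's, not the patent's), the distance to a target follows directly from the round-trip time of the laser pulse:

```python
# Illustrative time-of-flight range calculation (editor's example, not from the patent).
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_round_trip_s: float) -> float:
    """Distance to the target, given the laser pulse's round-trip time in seconds."""
    return C * t_round_trip_s / 2.0

if __name__ == "__main__":
    # A pulse returning after 200 nanoseconds corresponds to roughly 30 m.
    print(f"range = {range_from_round_trip(200e-9):.2f} m")
```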
A binocular camera locates an object using two cameras. For a characteristic point on the object, the two cameras, fixed at different positions, each capture an image of the object, and the coordinates of the point on the two image planes are obtained. As long as the precise relative position of the two cameras is known, the coordinates of the feature point in the coordinate system of one of the cameras can be obtained geometrically, i.e. the position of the feature point is determined.
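The sketch below illustrates that geometric method for the common rectified (parallel-camera) case, recovering depth from disparity; it is an editor-supplied example, and the focal length, baseline and pixel coordinates are hypothetical:

```python
import numpy as np

def triangulate_rectified(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Recover the 3D position of a feature point seen by two rectified cameras.

    u_left/u_right: horizontal pixel coordinates of the feature in each image.
    v: shared vertical pixel coordinate (rectified images).
    f_px: focal length in pixels; baseline_m: camera separation in metres;
    cx, cy: principal point. Returns (X, Y, Z) in the left camera frame.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    Z = f_px * baseline_m / disparity
    X = (u_left - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])

# Hypothetical numbers: 700 px focal length, 12 cm baseline, 640x480 images.
print(triangulate_rectified(u_left=380.0, u_right=352.0, v=250.0,
                            f_px=700.0, baseline_m=0.12, cx=320.0, cy=240.0))
```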
In the prior art, most robots achieve large-range positioning through laser radars, but the accuracy of laser radar positioning is not high. There are also robots that achieve small-range positioning with binocular cameras, but binocular positioning is computationally expensive and cannot be used directly for accurate positioning over a large range. Moreover, because only a single type of sensor is usually installed on a robot head, the information obtained is limited and cannot support the environment acquisition and feedback requirements of the increasingly complex actions of robots.
Summary of the Utility Model
In view of the above problems, an object of the utility model is to provide a concave robot head with a laser radar, which acquires geometric data within a 360° range around the robot through the laser radar, establishes a three-dimensional map of the robot's surroundings using the geometric data, and acquires specific environmental data in front of the robot through an auxiliary camera. The operator can use a user interface to view the scene from any vantage point in the three-dimensional map, and can also precisely control the robot's actions according to the specific environment in front of it.
In order to achieve the above object, the utility model adopts the following technical solution:
a concave robot head of a laser radar comprises a head body, an embedded processor and a sensor group arranged on the head body;
the head body comprises a base, a neck connecting piece, two supports and a steering rod; the neck connecting piece is rotatably connected to the bottom of the base; the two supports are respectively arranged on the left side and the right side of the top of the base; the steering rod is horizontally arranged between the two supports, and two ends of the steering rod are respectively and rotatably connected to the two supports;
the sensor group comprises a laser radar and an auxiliary camera, the laser radar is arranged on the steering rod, and the auxiliary camera is arranged on the front side of the head body;
the embedded processor is disposed within the head body and connects the lidar and the auxiliary camera.
The concave robot head of the laser radar is characterized in that the steering rod comprises a left steering rod and a right steering rod, one end of the left steering rod and one end of the right steering rod are coaxially and rotatably connected, and the other end of the left steering rod and the other end of the right steering rod are respectively and rotatably connected with the two supports; and the left steering rod and the right steering rod are respectively provided with one laser radar.
The concave robot head of the laser radar is characterized in that one end of the left steering rod and one end of the right steering rod are connected through a shaft coupling piece; the other end of the left steering rod and the other end of the right steering rod are respectively inserted into the support at the same side, and belt pulleys are respectively arranged on the end parts of the left steering rod and the right steering rod, which are positioned in the support.
In the above concave robot head with a laser radar, the auxiliary camera is a dual-stereo camera comprising two wide-angle lenses, two telephoto lenses and a plurality of fill-light LED lamps, and the two telephoto lenses point obliquely downward.
In the above concave robot head with a laser radar, the sensor group further comprises a dual-fisheye camera; the dual-fisheye camera has two lenses, one facing forward and the other facing rearward.
In the above concave robot head with a laser radar, the sensor group further comprises a GPS locator, and the GPS locator is arranged at the center of the top of the base.
The concave robot head of the laser radar, wherein the embedded processor is connected with the Ethernet in a wired or wireless mode.
Owing to the adoption of the above technology, the utility model has the following positive effects compared with the prior art:
1. The utility model acquires geometric data within a 360° range around the robot through the laser radar, establishes a three-dimensional map of the robot's surroundings using the geometric data, and acquires specific environmental data in front of the robot through the auxiliary camera. The operator can use a user interface to view the scene from any vantage point in the three-dimensional map, and can also precisely control the robot's actions according to the specific environment in front of it.
2. By providing a dual-fisheye camera, the utility model acquires color data and texture data around the robot and uses them to color the modeled three-dimensional map of the robot's surroundings, generating a colored three-dimensional map that makes it easier for a remote operator to understand the environment around the robot.
3. By providing a GPS locator, the utility model acquires the geocentric coordinate data of the robot and compares it with the robot's position in the three-dimensional map; when the robot's position in the established three-dimensional map deviates and the map is therefore built incorrectly, the three-dimensional map is updated.
Drawings
Fig. 1 is a schematic structural diagram of a concave robot head of a laser radar of the present invention;
fig. 2 is a rear view of a concave robot head of a laser radar of the present invention;
fig. 3 is a schematic view of a connection mode between a steering rod and a support of a concave robot head of the laser radar of the present invention.
In the drawings:
1. a head body; 11. a base; 12. a neck connecting piece; 13. a support; 14. a steering rod; 141. a shaft coupling member; 142. a left steering rod; 143. a right steering rod; 144. a belt pulley; 15. heat dissipation holes; 3. a sensor group; 31. a laser radar; 32. an auxiliary camera; 321. a wide-angle lens; 322. a telephoto lens; 323. a fill-light LED lamp; 33. a dual-fisheye camera; 34. a GPS locator.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments, but the present invention is not limited thereto.
Fig. 1 is a schematic structural diagram of a concave robot head of a laser radar of the present invention; fig. 2 is a rear view of a concave robot head of a laser radar of the present invention; fig. 3 is a schematic view of a connection mode between a steering rod and a support of a concave robot head of the laser radar of the present invention. Referring to fig. 1 to 3, a preferred embodiment of a concave robot head of a lidar is shown, which is characterized by comprising a head body 1, an embedded processor (not shown in the figure) and a sensor group 3 arranged on the head body 1.
The head body 1 includes a base 11, a neck connecting piece 12, two supports 13, and a steering rod 14. The neck connecting piece 12 is rotatably connected to the bottom of the base 11, and the head body 1 is connected to the main body of the robot through the neck connecting piece 12, so that the head body 1 can rotate about a vertical axis relative to the main body of the robot. The two supports 13 are respectively arranged on the left and right sides of the top of the base 11; the supports 13 and the base 11 form an integral structure and together constitute the basic outline frame of the head body 1. The steering rod 14 is horizontally disposed between the two supports 13, and its two ends are respectively rotatably connected to the two supports 13.
The sensor group 3 includes a laser radar 31 and an auxiliary camera 32. The laser radar 31 is mounted on the steering rod 14 and rotates in the vertical plane together with the steering rod 14. In this embodiment, two laser radars 31 are provided on the left and right of the steering rod 14, and the steering rod 14 can be driven by a driving motor according to control instructions, thereby driving the two laser radars 31 to rotate together, so that they can acquire complete geometric data within a 360° range around the robot and provide data support for the real-time 3D modeling required by the subsequent control system. The auxiliary camera 32 is arranged on the front side of the head body 1; its accuracy is higher than that of the laser radar 31, so it is used to acquire specific environmental data in front of the head body 1, and by rotating the head body 1 horizontally relative to the main body of the robot, the auxiliary camera 32 can be pointed in any direction that requires attention. In addition, because the laser radars 31 are arranged between the two supports 13, they are protected from being struck while the robot moves, which improves the safety and rationality of the robot.
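As a hedged sketch of how scans from a laser radar mounted on the pitching steering rod could be accumulated into a single head-frame point cloud (the sweep range, mounting offset and scan contents below are the editor's assumptions, not values from the utility model):

```python
import numpy as np

def rod_pitch_rotation(pitch_rad: float) -> np.ndarray:
    """Rotation about the steering rod's horizontal (y) axis."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def scan_to_head_frame(scan_xyz: np.ndarray, pitch_rad: float,
                       lidar_offset: np.ndarray) -> np.ndarray:
    """Transform one lidar scan (N x 3 points in the lidar frame) into the head
    frame, given the current steering-rod pitch and the lidar mounting offset."""
    return scan_xyz @ rod_pitch_rotation(pitch_rad).T + lidar_offset

cloud = []
lidar_offset = np.array([0.0, 0.10, 0.25])         # hypothetical mounting offset (m)
for pitch_deg in range(-90, 91, 5):                # assumed sweep of the steering rod
    scan = np.random.uniform(-5.0, 5.0, (100, 3))  # stand-in for a real lidar scan
    cloud.append(scan_to_head_frame(scan, np.radians(pitch_deg), lidar_offset))
cloud = np.vstack(cloud)
print(cloud.shape)  # all sweeps merged into one 360-degree head-frame point cloud
```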
Also, with the auxiliary camera 32, a visual odometry system may be integrated to provide pose estimates over time. The system generates a solution based on an incremental structure-from-motion estimate, while refining the result using keyframe selection and sparse local bundle adjustment. The auxiliary camera 32 can thus determine the robot's path of movement and changes in head pose, update the position and head pose of the robot in real time within the established three-dimensional map, and update the three-dimensional map in real time as the robot moves.
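A minimal sketch of such an incremental pose estimate, assuming relative frame-to-frame motions have already been computed from matched image features (the keyframe threshold and all values are hypothetical, and the sparse local bundle adjustment itself is not implemented here):

```python
import numpy as np

def se3(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

class SimpleVisualOdometry:
    """Chain relative motion estimates into a global head pose and keep
    keyframes when the head has moved far enough (thresholds are assumptions)."""

    def __init__(self, keyframe_translation_m: float = 0.25):
        self.pose = np.eye(4)            # head pose in the map frame
        self.keyframes = [np.eye(4)]     # poses selected for local bundle adjustment
        self.kf_thresh = keyframe_translation_m

    def update(self, relative_motion: np.ndarray) -> np.ndarray:
        """relative_motion: 4x4 transform from the previous frame to the
        current frame, e.g. estimated from matched image features."""
        self.pose = self.pose @ relative_motion
        last_kf_t = self.keyframes[-1][:3, 3]
        if np.linalg.norm(self.pose[:3, 3] - last_kf_t) > self.kf_thresh:
            self.keyframes.append(self.pose.copy())  # candidate for refinement
        return self.pose

vo = SimpleVisualOdometry()
step = se3(np.eye(3), np.array([0.1, 0.0, 0.0]))   # hypothetical 10 cm forward step
for _ in range(10):
    vo.update(step)
print(vo.pose[:3, 3], len(vo.keyframes))
```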
An embedded processor is provided within the head body 1 and is connected to the laser radar 31 and the auxiliary camera 32. In this embodiment, the embedded processor contains all the processing units necessary to handle the data from the laser radars 31 and the auxiliary camera 32; that is, the embedded processor includes a quad-core Intel i7-3820QM unit, two customized Xilinx Spartan-6 FPGA units and an ARM Cortex-M4 unit.
The above is merely an example of the preferred embodiments of the present invention, and the embodiments and the protection scope of the present invention are not limited thereby.
Further, in a preferred embodiment, the steering rod 14 includes a left steering rod 142 and a right steering rod 143; one end of the left steering rod 142 and one end of the right steering rod 143 are coaxially and rotatably connected, and the other end of the left steering rod 142 and the other end of the right steering rod 143 are rotatably connected to the two supports 13, respectively. One laser radar 31 is provided on each of the left steering rod 142 and the right steering rod 143. The left steering rod 142 and the right steering rod 143 each drive their own laser radar 31, so that the two laser radars 31 rotate independently of each other.
Further, in a preferred embodiment, one end of the left steering rod 142 and one end of the right steering rod 143 are connected by a shaft coupling member 141. The other end of the left steering rod 142 and the other end of the right steering rod 143 are respectively inserted into the supports 13 on their own sides, and a belt pulley 144 is provided on each of the rod ends located inside the supports 13. The two belt pulleys 144 may be driven by two driving motors, each driving its laser radar 31 to rotate, so that the two laser radars 31 rotate independently.
Further, in a preferred embodiment, the auxiliary camera 32 is a dual-stereo camera comprising two wide-angle lenses 321, two telephoto lenses 322 and a plurality of fill-light LED lamps 323, with the two telephoto lenses 322 directed obliquely downward. By combining the wide-angle lenses 321 and the telephoto lenses 322 and relying on optical zooming, the auxiliary camera 32 achieves a better zoom experience: a wide-angle lens 321 has a wide field of view and covers a large area but cannot resolve distant objects clearly, whereas a telephoto lens 322 has a narrow field of view but sees farther and more clearly. By switching between the lenses and applying a fusion algorithm during shooting, relatively smooth zooming can be achieved. The high-pixel telephoto lens 322 ensures that far less image information is lost during zooming than with the digital zoom of a single camera, greatly improving the zoom performance of the auxiliary camera 32.
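A toy sketch of the lens hand-over idea behind such wide/telephoto fusion (the native telephoto magnification, hand-over interval and blending scheme are the editor's assumptions, not details from the utility model):

```python
def choose_lens(requested_zoom: float, tele_native_zoom: float = 3.0) -> str:
    """Pick the camera stream for a requested zoom factor: below the telephoto
    lens's assumed native magnification the wide-angle image is cropped, at or
    above it the telephoto image takes over."""
    return "wide" if requested_zoom < tele_native_zoom else "tele"

def tele_blend_weight(requested_zoom: float,
                      handover_start: float = 2.5,
                      handover_end: float = 3.0) -> float:
    """Weight of the telephoto image in a fusion blend, ramped across the
    hand-over interval so the zoom feels continuous rather than abrupt."""
    if requested_zoom <= handover_start:
        return 0.0
    if requested_zoom >= handover_end:
        return 1.0
    return (requested_zoom - handover_start) / (handover_end - handover_start)

for zoom in (1.0, 2.0, 2.75, 3.0, 5.0):
    print(zoom, choose_lens(zoom), round(tele_blend_weight(zoom), 2))
```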
In the present embodiment, the two wide-angle lenses 321 are both disposed on the front side of the base 11 and are located at the same level. The two telephoto lenses 322 are likewise disposed on the front side of the base 11 and located at the same level, and the two telephoto lenses 322 are arranged between the two wide-angle lenses 321.
The two wide-angle lenses 321 are connected to one of the customized Xilinx Spartan-6 FPGA units, and the two telephoto lenses 322 are connected to the other customized Xilinx Spartan-6 FPGA unit. The two customized Xilinx Spartan-6 FPGA units are connected to the quad-core Intel i7-3820QM unit through a customized Ethernet PCIe adapter. Both laser radars 31 are connected to the ARM Cortex-M4 unit, and the ARM Cortex-M4 unit is connected to the quad-core Intel i7-3820QM unit through an 8-port managed gigabit Ethernet switch.
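For readability, the data paths listed above can be restated as a small configuration table; the sketch below only re-expresses the connections named in this paragraph, with the key names chosen by the editor:

```python
# Readable restatement of the data paths described above (editor's key names).
SENSOR_TOPOLOGY = {
    "wide_angle_lenses_321": {"count": 2, "attached_to": "Xilinx Spartan-6 FPGA #1"},
    "telephoto_lenses_322":  {"count": 2, "attached_to": "Xilinx Spartan-6 FPGA #2"},
    "fpga_uplink":           "custom Ethernet PCIe adapter -> quad-core Intel i7-3820QM",
    "laser_radars_31":       {"count": 2, "attached_to": "ARM Cortex-M4"},
    "cortex_m4_uplink":      "8-port managed gigabit Ethernet switch -> Intel i7-3820QM",
}

for component, link in SENSOR_TOPOLOGY.items():
    print(f"{component}: {link}")
```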
The surface of the base 11 on which the telephoto lenses 322 are mounted is inclined slightly downward, so that the two telephoto lenses 322 point obliquely downward. When the robot grasps an object or performs other work, the motion of the robot's manipulator can be observed through the two telephoto lenses 322, preventing the manipulator from colliding during operation and improving the safety and stability of its work.
In the present embodiment, four fill-light LEDs 323 are disposed on the front side of the head body 1. Two of the fill-light LEDs 323 are located below the two wide-angle lenses 321, and the other two are located between the two telephoto lenses 322. In dark environments, the fill-light LEDs 323 emit light to provide illumination for the wide-angle lenses 321 and the telephoto lenses 322, so that their images are clearer.
Heat dissipation holes 15 are formed below the four fill-light LEDs 323 and are used to dissipate the heat generated inside the head body 1 during operation.
Further, in a preferred embodiment, the sensor group 3 further includes a dual-fisheye camera 33. The dual-fisheye camera 33 has two lenses, one disposed on the front side of the head body 1 and the other on the rear side of the head body 1. In the present embodiment, the front lens of the dual-fisheye camera 33 is disposed between the two telephoto lenses 322.
A fisheye lens is a lens with a very short focal length and a field of view close to or equal to 180°, whose front element is paraboloid-shaped and protrudes forward. By arranging one lens of the dual-fisheye camera 33 on the front side of the head body 1 and the other on the rear side, the dual-fisheye camera 33 can photograph in all directions over a full 360° range.
The dual-fisheye camera 33 is used to acquire color data and texture data around the robot and, through a fusion algorithm, to color the modeled three-dimensional map of the robot's surroundings, so that an operator remotely observing the environment through the user interface can understand it more easily.
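A hedged sketch of what such a coloring step might look like, assuming an idealized equidistant fisheye projection and voxel centres already expressed in the camera frame (the focal length, cutoff angle and image data are placeholders):

```python
import numpy as np

def colorize_voxels(voxel_centers, image, f_px, cx, cy):
    """Assign each voxel centre a colour by projecting it into a fisheye image
    with the equidistant model r = f * theta (camera at the origin, optical
    axis along +z). A deliberately simplified stand-in for a real fusion step."""
    colors = np.zeros((len(voxel_centers), 3), dtype=np.uint8)
    height, width, _ = image.shape
    for i, (x, y, z) in enumerate(voxel_centers):
        rho = np.hypot(x, y)
        theta = np.arctan2(rho, z)              # angle from the optical axis
        if theta > np.radians(95):              # outside this lens's assumed cone
            continue
        r = f_px * theta                        # equidistant fisheye projection
        u = cx + (r * x / rho if rho > 0 else 0.0)
        v = cy + (r * y / rho if rho > 0 else 0.0)
        if 0 <= int(v) < height and 0 <= int(u) < width:
            colors[i] = image[int(v), int(u)]
    return colors

# Hypothetical inputs: a 640x640 fisheye frame and a few voxel centres in metres.
frame = np.random.randint(0, 255, size=(640, 640, 3), dtype=np.uint8)
voxels = np.array([[0.5, 0.0, 2.0], [-1.0, 0.4, 3.0], [0.0, 0.0, -1.0]])
print(colorize_voxels(voxels, frame, f_px=180.0, cx=320.0, cy=320.0))
```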
Further, in a preferred embodiment, the sensor group 3 further comprises a GPS locator 34, the GPS locator 34 being centrally located on the top of the base 11.
Over time, the position of the robot's voxel model in the three-dimensional map may drift, so that newly generated voxels no longer fuse correctly with the previous voxels and the three-dimensional map is built incorrectly. Moreover, precise positioning is required for the robot to engage with objects (e.g., to pick up an object at a particular location) and to avoid collisions while performing its tasks.
The GPS locator 34 is used to acquire the geocentric coordinate data of the robot and compare it with the robot's position in the three-dimensional map; when the robot's position in the established three-dimensional map deviates and the map is therefore built incorrectly, the three-dimensional map is updated.
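A minimal sketch of such a consistency check, assuming the GPS-derived position has already been converted into the map frame; the 0.5 m tolerance is the editor's placeholder, not a value from the utility model:

```python
import numpy as np

def map_needs_update(gps_xyz, map_xyz, tolerance_m=0.5):
    """Compare the robot position derived from the GPS locator (already
    converted into the map frame) with the robot's position in the
    three-dimensional map, and decide whether the map should be updated."""
    deviation = float(np.linalg.norm(np.asarray(gps_xyz) - np.asarray(map_xyz)))
    return deviation > tolerance_m, deviation

needs_update, deviation = map_needs_update(gps_xyz=(12.3, 4.1, 0.0),
                                           map_xyz=(11.6, 4.0, 0.0))
print(needs_update, round(deviation, 2))  # True 0.71 -> trigger a map update
```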
Further, in a preferred embodiment, the embedded processor is connected to the ethernet network in a wired or wireless manner, so that an operator can observe the posture and position of the robot and control the robot through a user interface at a remote end.
Further, the concave robot head with a laser radar of the utility model samples the environment, and the environment sampling method adopted includes the following steps.
geometric data in a 360 ° range around the robot is acquired by two laser radars 31, and modeling is performed based on the geometric data to construct a three-dimensional map around the robot.
The specific environmental data in front of the robot is acquired by the auxiliary camera 32. The operator can use the user interface to view scenes from any vantage point according to the three-dimensional map, and can also accurately control the actions of the robot according to the specific environment in front of the robot.
In this embodiment, a three-dimensional map of the robot's surroundings is constructed using a set of voxel grids. These grids contain sets of 3D voxels, each voxel carrying occupancy and color information. The grids are created at different ranges and resolutions, as required by the individual tasks, to balance high-resolution world modeling against bandwidth and computational constraints. A coarse grid (0.5 m resolution) covering 30 m around the sensors lets the user survey the robot's situation. A high-resolution model (0.05 m resolution) captures the local environment around the robot. An on-demand region of interest, typically placed by the user at a particular location, provides information on the order of 1 cm and is used when planning grasps of objects in the environment. With the plug-in panel available on the OCU, the robot operator can actively modify the settings of each voxel grid, determine which grids to display at a given time, and view the scene from any vantage point using a 3D user interface.
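A hedged sketch of such a multi-resolution voxel data structure (the sparse-dictionary layout, the API names and the colour handling are the editor's assumptions; only the three resolution tiers come from the text above):

```python
import numpy as np

class VoxelGrid:
    """A sparse occupancy grid with a colour per voxel; one instance per
    range/resolution tier (e.g. 0.5 m coarse, 0.05 m local, ~0.01 m region of
    interest). A dict keyed by integer voxel indices keeps the grid sparse."""

    def __init__(self, resolution_m: float):
        self.resolution = resolution_m
        self.cells = {}                          # (ix, iy, iz) -> (occupied, colour)

    def key(self, point_xyz):
        return tuple(np.floor(np.asarray(point_xyz) / self.resolution).astype(int))

    def mark_occupied(self, point_xyz, color=(255, 255, 255)):
        self.cells[self.key(point_xyz)] = (True, color)

    def is_occupied(self, point_xyz) -> bool:
        return self.cells.get(self.key(point_xyz), (False, None))[0]

# Three tiers as described above; grid extents and update logic are omitted.
coarse, local, roi = VoxelGrid(0.5), VoxelGrid(0.05), VoxelGrid(0.01)
for grid in (coarse, local, roi):
    grid.mark_occupied((1.23, -0.42, 0.87), color=(120, 90, 60))
print(coarse.is_occupied((1.3, -0.3, 0.9)), local.is_occupied((1.3, -0.3, 0.9)))
```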
In addition to providing the situational awareness that lets an operator perceive the environment, the voxel models are also used when planning motions. Planned robot motions are collision-tested against the voxel grid representation, ensuring that the motions generated by the planning routine do not collide with obstacles and do not attempt to move a robot limb through an obstacle.
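A correspondingly minimal, self-contained sketch of the collision test against occupied voxels described in this paragraph (the grid resolution, the obstacle and the sampled trajectory are all hypothetical):

```python
import numpy as np

def voxel_key(point_xyz, resolution_m):
    """Integer voxel index of a point at the given grid resolution."""
    return tuple(np.floor(np.asarray(point_xyz) / resolution_m).astype(int))

def motion_is_collision_free(limb_waypoints, occupied_keys, resolution_m=0.05):
    """Reject a planned motion if any sampled limb position lands inside an
    occupied voxel. The occupied set would come from the lidar-built grid and
    the waypoints from the planning routine; both are stand-ins here."""
    return not any(voxel_key(p, resolution_m) in occupied_keys for p in limb_waypoints)

# Stand-in data: one obstacle voxel and a short hand trajectory that enters it.
occupied = {voxel_key((1.23, -0.42, 0.87), 0.05)}
trajectory = [(1.00, -0.30, 0.80), (1.10, -0.35, 0.85), (1.23, -0.42, 0.87)]
print(motion_is_collision_free(trajectory, occupied))  # False: last waypoint collides
```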
Also, as described above, a visual odometry system integrated with the auxiliary camera 32 provides pose estimates over time, determines the robot's path of movement and changes in head pose, updates the robot's position and head pose in real time within the established three-dimensional map, and updates the three-dimensional map in real time as the robot moves.
Further, in a preferred embodiment, a dual-fisheye camera 33 is provided on the head body 1. Color data and texture data around the robot are acquired by the dual-fisheye camera 33, and modeling is performed based on the color data, texture data and geometric data to generate a colored three-dimensional map of the robot's surroundings, so that the operator can understand the environment around the robot more easily.
Further, in a preferred embodiment, a GPS locator 34 is provided on the head main body 1. The GPS locator 34 is used to acquire the geocentric coordinates of the robot.
As noted above, the position of the robot's voxel model in the three-dimensional map may drift over time, so that newly generated voxels no longer fuse correctly with the previous voxels and the map is built incorrectly, while precise positioning is required for the robot to engage with objects and avoid collisions during its tasks. Whether such drift has occurred can therefore be determined by comparing, in real time, the geocentric coordinates of the robot with the position of the robot's voxel model in the three-dimensional map. If the position of the voxel model has drifted, the model is updated using voxel increments to maintain the high accuracy of the three-dimensional map.
The above is only a preferred embodiment of the present invention, and not intended to limit the scope of the invention, and it should be appreciated by those skilled in the art that various equivalent substitutions and obvious changes made in the specification and drawings should be included within the scope of the present invention.

Claims (7)

1. A concave robot head with a laser radar, characterized by comprising a head body, an embedded processor and a sensor group arranged on the head body;
the head body comprises a base, a neck connecting piece, two supports and a steering rod; the neck connecting piece is rotatably connected to the bottom of the base; the two supports are respectively arranged on the left side and the right side of the top of the base; the steering rod is horizontally arranged between the two supports, and two ends of the steering rod are respectively and rotatably connected to the two supports;
the sensor group comprises a laser radar and an auxiliary camera, the laser radar is arranged on the steering rod, and the auxiliary camera is arranged on the front side of the head body;
the embedded processor is disposed within the base and is connected to the lidar and the auxiliary camera.
2. The lidar concave robot head according to claim 1, wherein the steering rod comprises a left steering rod and a right steering rod, one end of the left steering rod and one end of the right steering rod are coaxially and rotatably connected, and the other end of the left steering rod and the other end of the right steering rod are respectively and rotatably connected with the two supports; and the left steering rod and the right steering rod are respectively provided with one laser radar.
3. The lidar concave robot head according to claim 2, wherein one end of the left steering rod and one end of the right steering rod are connected by a shaft coupling; the other end of the left steering rod and the other end of the right steering rod are respectively inserted into the support on the same side, and belt pulleys are respectively arranged on the end parts of the left steering rod and the right steering rod that are located inside the supports.
4. The lidar concave robot head according to claim 1, wherein the auxiliary camera is a dual-stereo camera comprising two wide-angle lenses, two telephoto lenses and a plurality of fill-light LED lamps, and the two telephoto lenses are directed obliquely downward.
5. The lidar concave robot head according to claim 1, wherein the sensor group further comprises a dual-fisheye camera having two lenses, one lens facing forward and the other lens facing rearward.
6. The lidar concave robot head according to claim 1, wherein the sensor group further comprises a GPS locator centrally disposed on the top of the base.
7. The lidar concave robot head according to claim 1, wherein the embedded processor is connected to an Ethernet network via a wired or wireless connection.
CN202020145422.9U 2020-01-22 2020-01-22 Concave robot head with laser radar Active CN212044822U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202020145422.9U CN212044822U (en) 2020-01-22 2020-01-22 Concave robot head with laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202020145422.9U CN212044822U (en) 2020-01-22 2020-01-22 Concave robot head with laser radar

Publications (1)

Publication Number Publication Date
CN212044822U (en) 2020-12-01

Family

ID=73538363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202020145422.9U Active CN212044822U (en) 2020-01-22 2020-01-22 Laser radar's spill robot head

Country Status (1)

Country Link
CN (1) CN212044822U (en)

Similar Documents

Publication Publication Date Title
CN110728715B (en) Intelligent inspection robot camera angle self-adaptive adjustment method
RU2664257C2 (en) Systems and methods for tracking location of mobile target objects
US11187790B2 (en) Laser scanning system, laser scanning method, movable laser scanning system, and program
CN109079799B (en) Robot perception control system and control method based on bionics
CN110275538A (en) Intelligent cruise vehicle navigation methods and systems
KR20220123726A (en) Non-rigid stereo vision camera system
CN109917420A (en) A kind of automatic travelling device and robot
CN110163963B (en) Mapping device and mapping method based on SLAM
US10893190B2 (en) Tracking image collection for digital capture of environments, and associated systems and methods
CN214520204U (en) Port area intelligent inspection robot based on depth camera and laser radar
CN113085896B (en) Auxiliary automatic driving system and method for modern rail cleaning vehicle
CN112819943B (en) Active vision SLAM system based on panoramic camera
CN113848931B (en) Agricultural machinery automatic driving obstacle recognition method, system, equipment and storage medium
CN115932882A (en) System for providing 3D detection of an environment through an autonomous robotic vehicle
CN115540849A (en) Laser vision and inertial navigation fusion positioning and mapping device and method for aerial work platform
CN212193168U (en) Robot head with laser radars arranged on two sides
CN212044822U (en) Laser radar's spill robot head
CN211517547U (en) Concave robot head with rotary disc
CN111152237B (en) Robot head with laser radars arranged on two sides and environment sampling method thereof
WO2022078437A1 (en) Three-dimensional processing apparatus and method between moving objects
Yuan et al. Visual steering of UAV in unknown environments
Peñalver et al. Multi-view underwater 3D reconstruction using a stripe laser light and an eye-in-hand camera
CN110969652B (en) Shooting method and system based on mechanical arm monocular camera serving as binocular stereoscopic vision
US12014515B2 (en) Estimating a pose of a spatially movable platform
CN113658221A (en) Monocular camera-based AGV pedestrian following method

Legal Events

Date Code Title Description
GR01 Patent grant