CN113658221A - Monocular camera-based AGV pedestrian following method - Google Patents


Info

Publication number
CN113658221A
CN113658221A (application CN202110857535.0A; granted as CN113658221B)
Authority
CN
China
Prior art keywords
pedestrian
agv
mobile robot
angle
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110857535.0A
Other languages
Chinese (zh)
Other versions
CN113658221B (en)
Inventor
刘成菊
袁家遥
陈启军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202110857535.0A priority Critical patent/CN113658221B/en
Publication of CN113658221A publication Critical patent/CN113658221A/en
Application granted granted Critical
Publication of CN113658221B publication Critical patent/CN113658221B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to an AGV pedestrian following method based on a monocular camera, which comprises the following steps: 1) detecting a pedestrian target: obtaining a pedestrian target detection frame from a pedestrian detection model deployed on an upper computer; 2) calibrating the homography matrix with a monocular camera: acquiring a homography matrix H from the three-dimensional world coordinate system to the two-dimensional pixel coordinate system; 3) resolving the pedestrian coordinates: calculating the coordinates of the contact point between the pedestrian and the ground in the three-dimensional world coordinate system, namely the world coordinates (x_w, y_w) of the pedestrian target; 4) calculating the speed and angle of the mobile robot in real time: designing a linear velocity PID controller and an angle PID controller respectively, and calculating the linear velocity and angular velocity of the mobile robot in real time; 5) controlling the chassis movement: the upper computer issues linear and angular velocity information to the lower computer through the ROS system, and the lower computer resolves the velocity command into the expected rotating speed of the driving motors, so that the AGV follows the pedestrian. Compared with the prior art, the method has the advantages of simplicity, high efficiency, real-time accuracy, master-slave distributed communication, small computation amount, and the like.

Description

Monocular camera-based AGV pedestrian following method
Technical Field
The invention relates to the field of robot target detection and tracking control, in particular to an AGV pedestrian following method based on a monocular camera in an indoor environment.
Background
The rapid development of computer-vision technologies and improvements in hardware computing speed have greatly enriched robot functions. Based on ROS, a mobile AGV service robot can simply and efficiently estimate the distance and angle of a target pedestrian using only a monocular camera while keeping a certain safe distance, which has wide application value in future office, inspection, and reception scenarios.
The existing technologies for estimating the distance of a pedestrian target by using a sensor mainly include the following technologies:
Firstly, a distance measurement method based on laser radar: a common laser ranging method is triangulation. The moving distance of the light spot reflected by the measured object on the CCD sensor is measured, and the distance and relative angle between the target and the radar are then estimated from the triangle formed by the incident and reflected light. Laser ranging is simple to operate, fast, and accurate to the millimeter level, and common handheld laser range finders reach up to 200 meters and measure well under poor lighting; however, the method places high demands on sensor cleanliness and environmental humidity, and the hardware cost is too high.
Secondly, a distance measurement method based on a binocular camera: the information collected by a camera sensor is more comprehensive than laser and contains color information. Binocular ranging is similar to human-eye perception: after calibration, rectification, and stereo matching, the image disparity of the same target between the two views is calculated, realizing ranging for pixels in the image, and the mobile robot can brake for obstacles in real time according to changes in the distance information. However, the method is computationally heavy, demands good illumination, and matching is difficult in scenes lacking visual features, which increases ranging error.
Thirdly, a distance measuring method based on a depth camera: the TOF method calculates the time difference between infrared rays emitted by an IR module and reflected light rays received by a receiver, and calculates the target distance according to the product of flight time and light speed. However, the method has the defects that the measurement range is limited by the camera baseline, the requirement on the precision of time measurement is high, the precision of the distance estimated by the depth camera cannot reach the millimeter level, and the distance cannot be accurately measured for black objects, transparent objects and objects in a short distance.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an AGV pedestrian following method based on a monocular camera.
The purpose of the invention can be realized by the following technical scheme:
an AGV pedestrian following method based on a monocular camera is used for achieving AGV pedestrian following in an indoor environment and comprises the following steps:
1) detecting a pedestrian target: real-time pedestrian target detection is realized according to a pedestrian detection model deployed on an upper computer, and a pedestrian target detection frame is obtained;
2) calibrating the homography matrix by using a monocular camera: according to the prior condition that a pedestrian target is located on a ground plane, images are collected through a monocular camera and are calibrated with a plurality of groups of corresponding characteristic point pairs on the ground, and a homography matrix H from a three-dimensional world coordinate system to a two-dimensional pixel coordinate system is obtained;
3) resolving the coordinates of the pedestrians: calculating the coordinates of the contact point between the pedestrian and the ground in the three-dimensional world coordinate system, namely the world coordinates (x_w, y_w) of the pedestrian target, according to the calibrated homography matrix H and the two-dimensional pixel coordinates of the midpoint of the bottom edge of the pedestrian target detection frame;
4) Calculating the speed and the angle of the mobile robot in real time: acquiring distance deviation and angle deviation of a pedestrian relative to the mobile robot according to the world coordinates of the pedestrian target, and designing a linear velocity PID (proportion integration differentiation) controller and an angle PID controller respectively to calculate the linear velocity and the angular velocity of the mobile robot in real time;
5) controlling the chassis to move: the upper computer issues linear velocity and angular velocity information to the lower computer through the ROS system, and the lower computer resolves the velocity instruction into the expected rotating speed of the driving motor according to the AGV kinematics model, so that the AGV follows the pedestrian.
In the step 1), the pedestrian detection model adopts a MobileNet improved SSD single-stage target detector, and the specific structure is as follows:
MobileNet v2 replaces VGG-16 in the original SSD model as the backbone network for feature extraction, while the classifier still uses the SSD classifier.
In the step 2), because the actual degree of freedom of the homography matrix H is only eight, the number of groups of the selected feature points is four.
In the step 3), considering that the three-dimensional world coordinate system takes the intersection point of the camera fixed at the edge of the mobile robot with the ground as the coordinate origin, the radius r_AGV of the mobile robot needs to be accounted for when calculating the world coordinate point of the pedestrian target, and the world coordinates of the pedestrian target after accounting for the robot radius are (x_w + r_AGV, y_w).
The distance PID controller and the angle PID controller are designed by the following steps:
41) carrying out initialization setting, specifically:
411) setting an initial linear velocity cmdv, an initial angular velocity cmdw and an initial rotation angle;
412) obtaining the world coordinate point (x_w + r_AGV, y_w) of the pedestrian target, accounting for the robot radius, at the current moment;
413) calculating the distance between the mobile robot and the pedestrian at the current moment with the safety distance accounted for:

distance = sqrt((x_w + r_AGV)^2 + y_w^2) − d_safe

where d_safe is the safety distance, set to prevent the mobile robot from colliding with the pedestrian while tracking;
42) designing a linear velocity PID controller, specifically:
setting the control parameters of the linear velocity PID controller, taking the distance between the mobile robot and the pedestrian at the current moment (with the safety distance accounted for) as the input of the linear velocity PID controller, setting the expected distance between the mobile robot and the pedestrian to 0, and obtaining the control linear velocity v from the distance error value;
43) designing an angle PID controller, specifically:
setting the control parameters of the angle PID controller, taking the angle between the mobile robot and the pedestrian at the current moment as input, setting the expected angle between the mobile robot and the pedestrian to 0 degrees, and obtaining the control angular velocity w from the angle error value.
The linear velocity PID controller and the angle PID controller both adopt P controllers.
In the step 413), the safety distance d_safe is set to 0.5 m.
In the step 411), the initial rotation angle is set to 10 degrees to reduce unnecessary frequent rotation of the robot.
In the step 5), the upper computer adopts a Jetson Nano visual computing card, the lower computer is a mobile robot industrial personal computer, and the upper computer and the lower computer communicate in a Topic communication mode.
The Jetson Nano visual computing card is used for completing the main-thread tasks, including processing visual perception information, detecting the pedestrian target, calculating the distance, communicating the calculated linear and angular velocity results through topics of the ROS system, and creating a publisher that issues the velocity messages;
the manual control machine of the mobile machine is used for completing sub-thread tasks and comprises a motion control device which is used for controlling the motion according to the received linear velocity and the received angular velocity.
Compared with the prior art, the invention has the following advantages:
the invention relates to a monocular distance estimation method, which comprises the following steps of selecting at least four groups of characteristic point pairs: the coordinate point of the pixel plane and the coordinate position of the pixel plane relative to the robot are solved, so that the homography matrix of the transformation from the pixel plane to the ground is solved, the world coordinate of a pedestrian target in the image is further calculated, the estimation of the robot on the distance and the angle of the pedestrian is realized, and the simple, convenient, efficient and real-time tracking of the pedestrian under the indoor environment is realized.
Secondly, topic and node sharing between the Jetson Nano and the industrial personal computer is achieved through ROS-based master-slave distributed communication; P control then regulates the linear and angular velocities from the distance and angle errors, and finally the Jetson Nano distributes the calculated velocity information to the industrial personal computer of the AGV mobile robot through topic communication, dispersing the computing pressure of the mobile robot and improving the real-time performance of target following.
Drawings
FIG. 1a is a flow chart of the method of the present invention.
FIG. 1b is a schematic diagram of homography matrix calibration.
Fig. 1c is a schematic diagram of pedestrian coordinate point selection.
Fig. 2 is a schematic diagram of a pedestrian detection terminal operating and displaying window.
Fig. 3 is a transformation diagram of pixel coordinates to world coordinates.
Fig. 4 is a schematic diagram of selecting four pairs of feature point pairs.
Fig. 5 is a speed P control block diagram.
FIG. 6 is a bottom level control block diagram.
FIG. 7 is a partial ROS node operating diagram.
Fig. 8 is a pedestrian tracking result diagram.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
The invention provides an AGV pedestrian following method based on a monocular camera, which realizes the following of an indoor mobile robot to a pedestrian, and the flow schematic diagram of the method is shown in figures 1a-1c, and the method specifically comprises the following steps:
s1, detecting a pedestrian target: firstly, improving an SSD single-stage target detector by using MobileNet, and deploying a model on an AGV embedded platform through TensoRT acceleration model reasoning to realize real-time pedestrian target detection with the speed of more than 22 FPS;
s2, calibrating the homography matrix by using a monocular camera: the method only uses a monocular camera to realize distance estimation, and calculates a homography matrix H from the ground to a camera imaging plane by acquiring at least 4 groups of feature point pairs (P (u, v), P (x, y)) corresponding to the image and the ground by utilizing the prior condition that a pedestrian target is positioned on the ground plane;
s3, resolving the pedestrian coordinates: calculating coordinates (x, y) of contact points of a pedestrian and the ground under a camera coordinate system by using the calibrated homography matrix H and coordinates (u, v) of pixels at the bottom edge of the pedestrian target detection frame to obtain world coordinates of the pedestrian target and calculate the distance between the pedestrian and the robot;
s4, controlling the robot to follow the pedestrian by the PID control method: firstly, calculating distance deviation and angle deviation of a pedestrian relative to a robot by using pedestrian coordinates (x, y), designing a distance PID (proportion integration differentiation) controller according to the distance deviation, and calculating the linear velocity v of the mobile robot in real time; and designing an angle PID controller according to the angle deviation, and calculating the angular speed w of the robot.
S5, controlling chassis movement by using an ROS system: the upper computer of the mobile robot issues speed information (v, w) to the lower computer, the lower computer resolves the speed instruction into the expected rotating speed of the driving motor by using the AGV kinematics model, and then the current rotating speed of the motor is fed back by using the motor encoder to carry out closed-loop control, so that the AGV follows the pedestrian.
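The velocity resolution described in step S5 can be sketched as follows. This is a minimal differential-drive sketch; the wheel radius and track width below are illustrative assumptions, not values from the patent:

```python
import math

WHEEL_RADIUS = 0.05  # m, assumed wheel radius
TRACK_WIDTH = 0.30   # m, assumed distance between the two drive wheels

def resolve_velocity(v, w):
    """Resolve chassis linear velocity v (m/s) and angular velocity w (rad/s)
    into left/right wheel rotating speeds (rpm) for a differential-drive AGV."""
    v_left = v - w * TRACK_WIDTH / 2.0   # linear speed at the left wheel, m/s
    v_right = v + w * TRACK_WIDTH / 2.0  # linear speed at the right wheel, m/s
    to_rpm = 60.0 / (2.0 * math.pi * WHEEL_RADIUS)  # m/s -> revolutions/min
    return v_left * to_rpm, v_right * to_rpm
```

For example, pure forward motion (w = 0) commands equal wheel speeds, while a positive w makes the right wheel spin faster so the AGV turns left.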
In step S1, this example uses the pre-trained model ssd_mobilenet_v2 provided officially by NVIDIA and compatible with the Jetson Nano computing card, which took several days to train on the 91-class COCO object dataset. The SSD single-stage detector is selected to detect pedestrian targets, and MobileNet v2 is used to lighten the SSD to meet the real-time and accuracy requirements of mobile devices.
The pedestrian detection model deployed on the Jetson Nano computing card uses MobileNet v2 to replace VGG-16 in the original SSD model as the backbone network for feature extraction, while the classifier still uses the SSD classifier. Because the backbone differs, the output feature map sizes and the number of default boxes necessarily change: the number of default boxes is reduced from 8732 to 2268, the feature map sizes become [19,10,5,3,2,1], and the computation is greatly reduced. Test results on the COCO dataset show that although detection precision drops slightly, the parameter count is reduced about 10-fold and the speed improves about 20-fold, making the model well suited as the target detection network on a mobile robot: accuracy is ensured while the computing power and real-time requirements of the device are met.
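The default-box arithmetic above can be checked directly. The per-layer anchor counts below are an assumed configuration, chosen to be consistent with the feature map sizes and the 2268 total stated in the text:

```python
# Feature map sizes of the MobileNet v2 SSD head, as stated in the text.
feature_map_sizes = [19, 10, 5, 3, 2, 1]
# Assumed number of default boxes per cell for each layer (illustrative;
# chosen so the total matches the 2268 figure quoted above).
anchors_per_cell = [4, 6, 6, 6, 4, 4]

total = sum(s * s * a for s, a in zip(feature_map_sizes, anchors_per_cell))
print(total)  # 2268, versus 8732 default boxes in the original VGG-16 SSD
```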
In the implementation of step S1, note that the port number of the current camera on the computing card can be checked at the terminal with ll /dev/video*, so changes of the Jetson Nano port number can be tracked in real time. The lightweight model based on SSD MobileNet v2 uses the single-stage detection model SSD with the VGG-16 backbone replaced by MobileNet v2. After loading the pedestrian detection model, a loop judges whether the currently detected target label is person; if so, the subsequent coordinate transformation and speed calculation are completed. A terminal is opened in the source-code folder, a command is entered to select the port /dev/video0, and the display resolution is set via --input-width and --input-height. Finally the detected object class, confidence, width, height, area, pixel coordinates of the bottom-edge midpoint, world coordinates, and the distance and angle between the pedestrian target and the robot are displayed in real time; the detection speed reaches 22 FPS, and a single detection cycle takes only 47.59771 ms after CUDA acceleration. Fig. 2 is a schematic diagram of the pedestrian detection terminal's operating and display window.
In the implementation of step S2, the transformation from the two-dimensional pixel coordinate system to the three-dimensional world coordinate system must be completed as shown in fig. 3. The image shown on the display is the result in the two-dimensional pixel coordinate system, with the origin at the upper-left corner, the positive x half-axis to the right, and the positive y half-axis downward. The intersection of the vertical line extending down from the camera on the mobile robot with the grass lawn is taken as the origin of the world coordinate system, with the positive x_w half-axis straight ahead of the robot, the positive y_w half-axis to the left, and the positive z_w half-axis upward. By letting z_w = 0, the problem is simplified to detecting only targets located on the grass plane or their intersection points with the grass plane; in the projections of the same target onto the two coordinate planes there is a one-to-one matrix transformation between pixel points, so the coordinate transformation can be completed through a homography.
The method uses the prior condition that the pedestrian's soles are on the ground to convert the problem into mapping a point on the ground to the camera imaging plane, and takes the midpoint of the bottom edge of the pedestrian target detection frame as the contact point between the pedestrian and the ground when computing that point's coordinates. Since the target can be approximated as a rigid body, the three-dimensional world coordinates of the target relative to the robot could be solved from the two-dimensional pixel coordinates through the rotation matrix R and translation vector t of the rigid-body coordinate transformation. However, solving the changing R and t in real time is difficult and the ranging result is not ideal, so this example simplifies the problem: letting z_w = 0, only targets on the grass plane or their intersection points with it are detected. Since a one-to-one matrix transformation exists between pixel points in the projections of the same target onto the two coordinate planes, the coordinate transformation is completed through a homography matrix. Because the camera position is fixed, the extrinsic rotation matrix is fixed, and the homography matrix H, which combines the extrinsic rotation R and the intrinsic matrix A, is therefore also fixed.
Because a homogeneous coordinate system is used, arbitrary scaling is allowed, so the homography matrix actually has only eight degrees of freedom and at least four corresponding point pairs are needed to solve it. By calling OpenCV's cv2.findHomography(pts_src, pts_dst), pixel coordinates can be displayed in the terminal by clicking the image with the mouse (the upper-left corner of the image is the origin of the pixel coordinate system), and the corresponding coordinate points in the pts_src image and the measured pts_dst world-coordinate matrix are filled in. As shown in fig. 4, four feature points are clicked in the image: the left and right boundary points of the near checkerboard and the left and right boundary points of the far black partition plate, finally obtaining the homography matrix
H (a calibrated 3×3 matrix; its numerical values are given in the original patent figure).
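For illustration, the homography can also be solved by hand via the direct linear transform that underlies cv2.findHomography. This numpy sketch and its point values are hypothetical, not the patent's calibration code:

```python
import numpy as np

def find_homography(pts_src, pts_dst):
    """Solve the 3x3 homography H mapping points pts_src to pts_dst
    (four or more pairs, no three collinear) via the direct linear transform."""
    A = []
    for (u, v), (x, y) in zip(pts_src, pts_dst):
        # Each pair contributes two rows of the constraint A @ h = 0.
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)   # null-space vector, reshaped to 3x3
    return H / H[2, 2]         # normalize so H[2, 2] == 1
```

With exactly four non-degenerate pairs the system has an (up to scale) unique solution, which is why eight degrees of freedom require four point pairs.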
In the implementation of step S3, the homography matrix solved in step two can recover the distance and angle of the pedestrian relative to the robot on the actual ground from the pedestrian's position on the pixel plane. With homogeneous coordinates (u, v, 1) for the image point and (x, y, 1) for the corresponding ground point, the pedestrian's world coordinates are computed as (x, y, 1)^T = H·(u, v, 1)^T. Since the world coordinate system takes the intersection of the camera fixed at the edge of the mobile robot with the artificial grass lawn as the coordinate origin, the robot radius of 18.15 cm must also be accounted for, and the world coordinates of the actual pedestrian target relative to the robot are (x + 0.1815, y). Finally the current distance is calculated as

distance = sqrt((x + 0.1815)^2 + y^2) − d_safe
A safety distance d_safe of 0.5 m is set (and subtracted from the computed distance) to prevent the pedestrian from being hit due to tracking too closely; the current azimuth angle is calculated as angle = arctan(y_w/(x_w + 0.1815)), and the initial rotation threshold is set to θ = 10° = 10π/180 rad to reduce unnecessary frequent rotation of the robot.
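The pixel-to-world mapping, radius offset, and distance/angle computation above can be sketched as follows. H and the pixel point in the usage are placeholders; only the 0.1815 m radius and 0.5 m safety distance come from the text:

```python
import math
import numpy as np

R_AGV = 0.1815  # robot radius in meters (from the text)
D_SAFE = 0.5    # safety distance in meters (from the text)

def pedestrian_distance_angle(H, u, v):
    """Map the bottom-edge midpoint pixel (u, v) of the detection frame to
    ground coordinates via homography H, then return (distance, angle)."""
    w = H @ np.array([u, v, 1.0])
    x, y = w[0] / w[2], w[1] / w[2]  # homogeneous normalization
    x += R_AGV                        # account for the robot radius
    distance = math.hypot(x, y) - D_SAFE
    angle = math.atan2(y, x)          # arctan(y_w / (x_w + r_AGV))
    return distance, angle
```

For instance, with a placeholder identity homography, a pixel at (1, 0) maps to ground point (1.1815, 0) after the radius offset, giving distance 0.6815 m and angle 0.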
In the implementation process of step S4, the speed is controlled by the distance and the angle error using the P controller, and the control block diagram is shown in fig. 5, and specifically includes the following steps:
a) initialization setting:
1) setting the initial linear velocity cmdv to 0.1 m/s and the initial angular velocity cmdw to 0.1 rad/s;
2) the current pedestrian world coordinates (personX, personY) are read from the actual parameters of the pedestrian tracking callback function personCallback(); since the world coordinate system takes the intersection of the camera fixed at the edge of the mobile robot with the artificial grass lawn as the coordinate origin, the robot radius of 18.15 cm must be accounted for, and the world coordinates of the actual pedestrian target relative to the robot are (x_w + 0.1815, y_w);
3) finally, the current distance is calculated as

distance = sqrt((x_w + 0.1815)^2 + y_w^2) − d_safe
The safety distance d_safe is set to 0.5 m to prevent the pedestrian from being hit due to tracking too closely; the current azimuth angle is calculated as angle = arctan(y_w/(x_w + 0.1815)), and the initial rotation threshold is set to θ = 10° = 10π/180 rad to reduce unnecessary frequent rotation of the robot.
b) Designing the linear velocity P controller
The magnitude of the current linear velocity is controlled by the distance error: the larger the distance error, the faster the tracking. With the safety distance accounted for, the ideal distance between the current target and the robot is 0.0 m, so the distance error value is e_distance = distance − 0.0 m. K_p = 0.6 is selected through tuning experience; when distance > 0.0 m, cmdv(t) = e_distance(t)·K_p = distance(t)·0.6, and the linear velocity range is limited to 0.1 m/s to 0.4 m/s.
c) Designing the angular velocity P controller
Under ideal conditions the angle between the pedestrian target and the robot is angle_d = 0°, so the angle error is e_angle = angle − 0°. K_p is selected through tuning experience; when |angle| > θ, cmdw(t) = e_angle(t)·K_p, the angular velocity range is limited to 0.15 rad/s to 0.65 rad/s, and cmdw = sign(cmdw)·min(max(|cmdw|, 0.15), 0.65).
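The two P controllers with their clamping can be sketched compactly. The angular gain below is an assumed value, since the tuned K_p of the angle loop is not legible in the text:

```python
import math

KP_V = 0.6                # linear-velocity gain (from the text)
KP_W = 1.0                # angular-velocity gain (assumed; not given in the text)
THETA = math.radians(10)  # rotation dead-band of 10 degrees (from the text)

def control(distance, angle):
    """P control: map distance/angle errors to cmdv (m/s) and cmdw (rad/s)."""
    cmdv = 0.0
    if distance > 0.0:
        # clamp the linear velocity to the stated [0.1, 0.4] m/s range
        cmdv = min(max(KP_V * distance, 0.1), 0.4)
    cmdw = 0.0
    if abs(angle) > THETA:
        raw = KP_W * angle
        # clamp the magnitude to the stated [0.15, 0.65] rad/s range
        cmdw = math.copysign(min(max(abs(raw), 0.15), 0.65), raw)
    return cmdv, cmdw
```

A distant target saturates cmdv at 0.4 m/s, while an angle inside the 10° dead-band commands no rotation at all.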
In the implementation of step S5, the invention selects the Topic communication mode to realize data exchange between the industrial personal computer of the mobile robot and the Jetson Nano computing card. After host addresses and names are configured for the master and slave, the Jetson Nano connects to the host WiFi, so that it can share the ROS master node started by the industrial personal computer and complete the subsequent speed topic publishing: a publisher cmdPub is created to publish the topic /cmd_vel at a frequency of 20 Hz, updating the velocity message vel. Single-motor speed control adopts a PID algorithm: the actual rotating speed is fed back through an encoder, a PWM driving waveform with the corresponding duty cycle is sent to the motor driver board to control the power motor, completing closed-loop control of the two motor speeds. Fig. 6 is a bottom-level control block diagram, fig. 7 is a partial ROS node operation diagram, and fig. 8 is a pedestrian tracking result diagram.
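The bottom-level motor speed loop can be illustrated with a toy discrete simulation: the encoder feeds back the actual speed and a PI controller drives a first-order motor model toward the setpoint. All gains and the motor model here are assumptions for illustration, not values from the patent:

```python
def simulate_motor_pi(target_rpm=120.0, steps=500, dt=0.01):
    """Simulate PI speed control of one drive motor: the encoder feeds back
    the actual speed, the controller outputs a PWM-like command, and the
    motor responds as a first-order system with time constant tau."""
    kp, ki = 2.0, 5.0     # assumed PI gains
    tau, gain = 0.1, 1.0  # assumed motor time constant (s) and DC gain
    speed, integral = 0.0, 0.0
    for _ in range(steps):
        error = target_rpm - speed                 # encoder feedback error
        integral += error * dt                     # integral accumulator
        pwm = kp * error + ki * integral           # PI control output
        speed += dt * (gain * pwm - speed) / tau   # first-order motor response
    return speed
```

The integral term removes the steady-state error that a pure P loop would leave, so after a few seconds of simulated time the speed settles at the setpoint.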
Compared with pedestrian following methods in the prior art, the indoor pedestrian following method provided by the invention has two main innovations. Firstly, the invention uses a monocular distance estimation method that needs no binocular or depth camera: at least four pairs of feature points (a coordinate point on the pixel plane and the corresponding ground position relative to the robot) are selected to solve the homography matrix of the transformation from the pixel plane to the ground, from which the world coordinates of a pedestrian target in the image are calculated, realizing the robot's estimation of the pedestrian's distance and angle. Secondly, topic and node sharing between the Jetson Nano and the industrial personal computer is achieved through ROS-based master-slave distributed communication; P control regulates the linear and angular velocities from the distance and angle errors, and the Jetson Nano distributes the calculated velocity information to the industrial personal computer of the AGV mobile robot through topic communication. These two points innovatively disperse the computing pressure of the autonomous mobile robot and improve the real-time performance of its target following.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. An AGV pedestrian following method based on a monocular camera is characterized in that the method is used for achieving AGV pedestrian following in an indoor environment and comprises the following steps:
1) detecting a pedestrian target: real-time pedestrian target detection is realized according to a pedestrian detection model deployed on an upper computer, and a pedestrian target detection frame is obtained;
2) calibrating the homography matrix with the monocular camera: using the prior condition that the pedestrian target lies on the ground plane, collecting images through the monocular camera and calibrating several groups of corresponding feature point pairs on the ground to obtain the homography matrix H from the three-dimensional world coordinate system to the two-dimensional pixel coordinate system;
3) resolving the pedestrian coordinates: according to the calibrated homography matrix H and the two-dimensional pixel coordinates of the midpoint of the bottom edge of the pedestrian target detection frame, calculating the coordinates of the contact point between the pedestrian and the ground in the three-dimensional world coordinate system, namely the world coordinates (x_w, y_w) of the pedestrian target;
4) calculating the speed and the angle of the mobile robot in real time: acquiring the distance deviation and angle deviation of the pedestrian relative to the mobile robot from the world coordinates of the pedestrian target, and designing a linear velocity PID (proportional-integral-derivative) controller and an angle PID controller to calculate the linear velocity and the angular velocity of the mobile robot in real time, respectively;
5) controlling the chassis to move: the upper computer issues linear velocity and angular velocity information to the lower computer through the ROS system, and the lower computer resolves the velocity instruction into the expected rotating speed of the driving motor according to the AGV kinematics model, so that the AGV follows the pedestrian.
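Claim 1 does not fix the AGV kinematics model used in step 5). Assuming a common differential-drive chassis (the wheel base and wheel radius below are illustrative values, not from the patent), the lower computer's resolution of the velocity command (v, w) into wheel speeds can be sketched as:

```python
def diff_drive_wheel_speeds(v, w, wheel_base=0.4, wheel_radius=0.05):
    """Convert a body twist (v [m/s], w [rad/s]) into left/right wheel
    angular speeds [rad/s] for a differential-drive chassis.
    wheel_base and wheel_radius are illustrative, not from the patent."""
    v_left = v - w * wheel_base / 2.0   # linear speed of the left wheel [m/s]
    v_right = v + w * wheel_base / 2.0  # linear speed of the right wheel [m/s]
    return v_left / wheel_radius, v_right / wheel_radius

# Pure forward motion: both wheels turn at the same rate.
wl, wr = diff_drive_wheel_speeds(0.5, 0.0)
```

For an in-place rotation command (v = 0), the two wheels receive equal and opposite speeds.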
2. The AGV pedestrian following method based on the monocular camera according to claim 1, wherein in step 1), the pedestrian detection model adopts an SSD single-stage target detector improved with MobileNet, with the following specific structure:
MobileNetv2 replaces VGG-16 in the original SSD model as the backbone network for feature extraction, while the classifier still uses the SSD classifier.
3. The AGV pedestrian following method according to claim 1, wherein in step 2), four groups of feature point pairs are selected, since the homography matrix H actually has only eight degrees of freedom.
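Claim 3 leaves the calibration procedure implicit. A minimal sketch of solving H from four point pairs via the direct linear transform is given below (`solve_homography` is an illustrative name, not from the patent; in practice `cv2.findHomography` performs the same estimation):

```python
import numpy as np

def solve_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    by the direct linear transform (DLT). Four non-degenerate point
    pairs determine H up to scale, since H has only eight degrees of
    freedom. src, dst: sequences of (x, y) pairs, length >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography vector spans the (approximate) null space of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale

# Sanity check against a known mapping: scaling the unit square by 2.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = 2.0 * src
H = solve_homography(src, dst)
```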
4. The AGV pedestrian following method according to claim 1, wherein in step 3), considering that the three-dimensional world coordinate system takes the intersection point of the camera, which is fixed on the edge of the mobile robot, with the ground as the coordinate origin, the radius r_AGV of the mobile robot is taken into account when calculating the world coordinate point of the pedestrian target, and the world coordinate point of the pedestrian target after adding the mobile robot radius is (x_w + r_AGV, y_w).
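Given the calibrated H (world to pixel, per claim 2), step 3) back-projects the bottom-centre pixel of the detection box with the inverse homography and dehomogenises; claim 4 then shifts the forward coordinate by the robot radius. A hedged sketch (the function name and the identity-matrix check are illustrative):

```python
import numpy as np

def pedestrian_world_coords(H, u, v, r_agv=0.0):
    """Back-project the bottom-centre pixel (u, v) of the pedestrian
    detection box to ground-plane world coordinates using the calibrated
    homography H (world -> pixel), then add the robot radius r_agv to
    the forward coordinate as in claim 4."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    x_w, y_w = p[0] / p[2], p[1] / p[2]  # dehomogenise
    return x_w + r_agv, y_w

# With an identity homography the pixel coincides with the world point.
x, y = pedestrian_world_coords(np.eye(3), 3.0, 4.0, r_agv=0.5)
```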
5. The AGV pedestrian following method according to claim 4, wherein the design of the linear velocity PID controller and the angle PID controller comprises:
41) carrying out initialization setting, specifically:
411) setting an initial linear velocity cmdv, an initial angular velocity cmdw and an initial rotation angle;
412) obtaining the world coordinate point (x_w + r_AGV, y_w) of the pedestrian target at the current moment, with the mobile robot radius added;
413) calculating the distance between the mobile robot and the pedestrian at the current moment, after deducting the safe distance:
d = sqrt((x_w + r_AGV)^2 + y_w^2) - d_safe
wherein d_safe is the safety distance, set to prevent the mobile robot from colliding with the pedestrian while following;
42) designing a linear velocity PID controller, specifically:
setting the control parameters of the linear velocity PID controller, taking the distance between the mobile robot and the pedestrian at the current moment, after deducting the safe distance, as the input of the linear velocity PID controller; the expected distance between the mobile robot and the pedestrian is set to 0, and the control linear velocity v is obtained from the distance error value;
43) designing an angle PID controller, specifically:
setting the control parameters of the angle PID controller, taking the angle between the mobile robot and the pedestrian at the current moment as the input; the expected angle between the mobile robot and the pedestrian is set to 0 degrees, and the control angular velocity w is obtained from the angle error value.
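Claims 5-8 combine P control of both channels (claim 6), the 0.5 m safe distance (claim 7) and the 10-degree initial rotation angle (claim 8, read here as a dead-band on the angle error; this reading is an interpretation, not stated explicitly). The gains kp_v and kp_w and the robot radius are illustrative assumptions:

```python
import math

def follow_control(xw, yw, r_agv=0.3, d_safe=0.5, kp_v=0.8, kp_w=1.5,
                   angle_deadband=math.radians(10)):
    """P control of linear and angular velocity from the pedestrian's
    ground-plane coordinates. d_safe = 0.5 m and the 10-degree threshold
    follow claims 7 and 8; the gains and r_agv are illustrative."""
    # distance error after subtracting the safe following distance
    d_err = math.hypot(xw + r_agv, yw) - d_safe
    # bearing of the pedestrian relative to the robot's heading
    a_err = math.atan2(yw, xw + r_agv)
    v = kp_v * d_err                       # drive the distance error to 0
    w = kp_w * a_err if abs(a_err) > angle_deadband else 0.0
    return v, w

# Pedestrian 1.2 m straight ahead: drive forward, no rotation.
v, w = follow_control(0.9, 0.0)
```

The dead-band suppresses the "unnecessary frequent rotation" mentioned in claim 8 when the pedestrian is nearly straight ahead.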
6. The AGV pedestrian following method according to claim 5, wherein both the linear velocity PID controller and the angle PID controller employ P controllers.
7. The AGV pedestrian following method according to claim 5, wherein the safety distance d_safe set in step 413) is 0.5 m.
8. The AGV pedestrian following method according to claim 5, wherein in step 411), the initial rotation angle is set to 10 degrees to reduce unnecessary frequent rotation of the robot.
9. The AGV pedestrian following method based on the monocular camera according to claim 1, wherein in the step 5), the upper computer adopts a Jetson Nano visual computing card, the lower computer is a mobile robot industrial personal computer, and the upper computer and the lower computer communicate with each other through a Topic communication mode.
10. The AGV pedestrian following method based on the monocular camera according to claim 9, wherein the Jetson Nano visual computing card completes the main-thread tasks, including processing visual perception information, detecting the pedestrian target and calculating the distance, communicating the calculated linear velocity and angular velocity results through topics of the ROS system, and creating a publisher to publish velocity messages;
the industrial personal computer of the mobile robot completes the sub-thread task, comprising a motion controller that controls the chassis motion according to the received linear velocity and angular velocity.
CN202110857535.0A 2021-07-28 2021-07-28 AGV pedestrian following method based on monocular camera Active CN113658221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857535.0A CN113658221B (en) 2021-07-28 2021-07-28 AGV pedestrian following method based on monocular camera

Publications (2)

Publication Number Publication Date
CN113658221A true CN113658221A (en) 2021-11-16
CN113658221B CN113658221B (en) 2024-04-26

Family

ID=78490756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110857535.0A Active CN113658221B (en) 2021-07-28 2021-07-28 AGV pedestrian following method based on monocular camera

Country Status (1)

Country Link
CN (1) CN113658221B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 (en) * 2013-08-19 2015-02-26 State Grid Corporation of China Binocular vision navigation system and method for power robot
CN112598709A (en) * 2020-12-25 2021-04-02 Zhejiang Lab Pedestrian movement speed intelligent sensing method based on video stream
WO2021114888A1 (en) * 2019-12-10 2021-06-17 Nanjing University of Aeronautics and Astronautics Dual-agv collaborative carrying control system and method
CN113051767A (en) * 2021-04-07 2021-06-29 Shaoxing Mindong Technology Co., Ltd. AGV sliding mode control method based on visual servo

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GU Fengwei; GAO Hongwei; JIANG Yueqiu: "Research on a simple monocular vision pose measurement method", Electro-Optic Technology Application, no. 04 *
CHEN Qi; WU Liming; ZHAO Yanan: "Research on visual AGV tracking and detection algorithm for assembly lines", Modular Machine Tool & Automatic Manufacturing Technique, no. 05 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115344051A (en) * 2022-10-17 2022-11-15 广州市保伦电子有限公司 Visual following method and device of intelligent following trolley
CN115344051B (en) * 2022-10-17 2023-01-24 广州市保伦电子有限公司 Visual following method and device of intelligent following trolley

Also Published As

Publication number Publication date
CN113658221B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
WO2020135446A1 (en) Target positioning method and device and unmanned aerial vehicle
US10796151B2 (en) Mapping a space using a multi-directional camera
US20210138657A1 (en) Mobile control method, mobile robot and computer storage medium
CN110842940A (en) Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN111670339B (en) Techniques for collaborative mapping between unmanned aerial vehicles and ground vehicles
CN106767913B (en) Compound eye system calibration device and calibration method based on single LED luminous point and two-dimensional rotary table
CN102650886A (en) Vision system based on active panoramic vision sensor for robot
CN103065323A (en) Subsection space aligning method based on homography transformational matrix
EP3531224A1 (en) Environment-adaptive sense and avoid system for unmanned vehicles
CN104503339A (en) Multi-resolution indoor three-dimensional scene reconstitution device and method based on laser radar and quadrotor
CN113848931B (en) Agricultural machinery automatic driving obstacle recognition method, system, equipment and storage medium
CN110998241A (en) System and method for calibrating an optical system of a movable object
Gebre et al. Remotely operated and autonomous mapping system (ROAMS)
CN113658221B (en) AGV pedestrian following method based on monocular camera
CN114140534A (en) Combined calibration method for laser radar and camera
CN111788573A (en) Sky determination in environmental detection for mobile platforms and related systems and methods
Li et al. Mobile robot map building based on laser ranging and kinect
WO2022078437A1 (en) Three-dimensional processing apparatus and method between moving objects
CN212193168U (en) Robot head with laser radars arranged on two sides
WO2022040940A1 (en) Calibration method and device, movable platform, and storage medium
Rakhmatulin et al. Low-cost stereovision system (Disparity Map) for few dollars
CN113184767A (en) Aerial work platform navigation method, device and equipment and aerial work platform
CN212044822U (en) Laser radar's spill robot head
Hakim et al. Asus Xtion Pro Camera Performance in Constructing a 2D Map Using Hector SLAM Method
Deng et al. Implementation and Optimization of LiDAR and Camera Fusion Mapping for Indoor Robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant