CN111708042B - Robot method and system for predicting and following pedestrian track

Robot method and system for predicting and following pedestrian track

Info

Publication number
CN111708042B
CN111708042B (application CN202010388347.3A)
Authority
CN
China
Prior art keywords
pedestrian
prediction
track
target
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010388347.3A
Other languages
Chinese (zh)
Other versions
CN111708042A (en)
Inventor
范衠
马培立
朱贵杰
林培涵
李晓明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou University
Original Assignee
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University filed Critical Shantou University
Priority to CN202010388347.3A
Publication of CN111708042A
Application granted
Publication of CN111708042B

Classifications

    • G01S17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01C22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/93 Lidar systems specially adapted for anti-collision purposes
    • G06F18/2321 Non-hierarchical pattern-recognition techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/25 Fusion techniques
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30196 Human being; Person
    • G06T2207/30241 Trajectory
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a robot method and system for pedestrian trajectory prediction and following. The system comprises a ZED camera, a GPU embedded platform, an industrial personal computer, an MCU controller, a lidar sensor and a wheel odometer. The invention fuses the lidar with the camera, combines a pedestrian trajectory prediction network with a pedestrian re-identification framework, predicts with a socially interactive pedestrian trajectory network, actively selects the optimal viewing angle of the target pedestrian for following, and integrates CSRT tracking with Kalman filtering, so that the robot can accurately and autonomously predict and follow the trajectory of a pedestrian target. The method offers high robustness, high precision, high accuracy and low cost.

Description

Robot method and system for predicting and following pedestrian track
Technical Field
The invention relates to the technical field of service robots, and in particular to a robot method and system for pedestrian trajectory prediction and following.
Background
Demand for service robots in the service and medical industries is rising year by year; such robots offer enormous application value in liberating and developing productivity and in meeting people's upgraded consumption needs. With the continuous development of artificial intelligence, service robots have also advanced greatly in applying AI techniques, becoming more intelligent through deep learning, machine vision, semantic analysis and the like. Nevertheless, the mainstream sensors used by service robots still have many limitations. Following robots on the market mainly recognize and track pedestrians by ultrasonic ranging, UWB positioning, or RGB/RGB-D camera recognition, but these methods still lack autonomy: the service robot cannot actively follow a target pedestrian. Moreover, in an actual working scene, the 2D lidar used by the robot is affected by the complex environment, so its point-cloud clustering deviates, pedestrian trajectory points are difficult to acquire accurately, and the prediction drifts.
Disclosure of Invention
The invention aims to provide a robot method and system for pedestrian trajectory prediction and following, so as to solve one or more technical problems in the prior art and at least to provide a beneficial alternative.
The technical solution adopted to solve the technical problem is as follows: a robot method for pedestrian trajectory prediction and following, comprising the following steps:
S100, capturing video stream information with a camera, feeding the video stream into a video stream sparsification framework, and capturing the position information of all pedestrians with a deep learning algorithm;
S200, scanning the surrounding environment with the lidar sensor of the service robot, and aggregating pedestrian point sets with a Euclidean clustering algorithm;
S300, fusing the aggregated pedestrian point sets with the pedestrian position information captured by the camera to obtain the trajectories of all pedestrians in the video and their prediction network;
S400, screening all pedestrians with a pedestrian re-identification framework, judging whether the target pedestrian is present, and storing the trajectory prediction points of the target pedestrian;
S500, performing Kalman filtering fusion on the predicted point set output by the deep learning network using the current position of the target pedestrian to realize correction, and outputting the final robot predicted trajectory strategy.
As a further improvement of the above technical solution, step S100 specifically includes: the camera captures video stream information and feeds it into the video stream sparsification framework; the video stream data are then passed to a pedestrian re-identification network and to a CSRT-based RGB recognition box, where the position information of all pedestrians is captured; the pedestrian re-identification network is YOLO v3.
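The patent publishes no source code, but the detector-gated split that this step describes can be illustrated with a short sketch. In the following Python fragment, the function and parameter names (detect_pedestrians, detect_every, follow) are introduced here for illustration and the YOLO v3 wrapper is left as a stub; a heavy detection pass runs only every Nth frame, an OpenCV CSRT tracker carries the box between detections, and tracker re-seeding plays the role of the gate that fires on each fresh recognition update.

```python
import cv2

def detect_pedestrians(frame):
    """Hypothetical YOLO v3 wrapper: returns a list of (x, y, w, h) boxes."""
    raise NotImplementedError  # plug in cv2.dnn with Darknet weights here

def follow(video_source=0, detect_every=10):
    cap = cv2.VideoCapture(video_source)
    tracker, frame_idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % detect_every == 0:
            # "Gate" fires: a fresh detection re-seeds the CSRT tracker.
            boxes = detect_pedestrians(frame)
            if boxes:
                tracker = cv2.TrackerCSRT_create()  # cv2.legacy.* in some OpenCV builds
                tracker.init(frame, tuple(boxes[0]))
        elif tracker is not None:
            # Between detections the cheap RGB tracker carries the target box.
            ok, box = tracker.update(frame)
            if ok:
                x, y, w, h = map(int, box)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        frame_idx += 1
    cap.release()
```

Running the full detector only intermittently is what makes the video stream "sparse" enough for an embedded GPU, while the tracker keeps the per-frame cost low.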
As a further improvement of the above technical solution, step S200 specifically includes: scanning is performed with the lidar sensor and the scan data are integrated; the scanned point-cloud data are clustered with a Euclidean clustering algorithm, iterating continuously until all points are clustered; finally, the bearing of each pedestrian relative to the service robot is found, and the aggregated pedestrian position point sets are output.
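As a minimal sketch of such Euclidean clustering (the 0.3 m tolerance and the minimum cluster size below are assumptions, not values from the patent), the following fragment grows each cluster by repeatedly absorbing every point within a fixed radius until no point remains unassigned:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tol=0.3, min_size=5):
    """points: (N, 2) lidar hits in metres -> list of index arrays, one per cluster."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:                      # region growing: iterate until the
            idx = frontier.pop()             # cluster stops absorbing points
            for n in tree.query_ball_point(points[idx], tol):
                if n in unvisited:
                    unvisited.remove(n)
                    cluster.add(n)
                    frontier.append(n)
        if len(cluster) >= min_size:         # keep only person-sized blobs
            clusters.append(np.fromiter(cluster, dtype=int))
    return clusters
```

The centroid of each surviving cluster then gives a candidate pedestrian's range and bearing relative to the robot.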
As a further improvement of the above technical solution, step S300 specifically includes: fusing the aggregated pedestrian point sets with the pedestrian position information captured by the camera, and inputting the fused data into a neural network to obtain a pedestrian trajectory prediction network; matching the recognition results output by the camera one-to-one with the distance and angle of each pedestrian relative to the robot, and storing the data; then feeding the data into the pedestrian trajectory prediction network and outputting socially interactive pedestrian trajectories.
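The patent does not spell out how a camera detection is matched to a lidar cluster; one plausible sketch, assuming a pinhole camera whose horizontal field of view maps pixel columns to bearings, pairs each bounding box with the cluster of closest bearing. The function names, the 90-degree field of view and the 5-degree gate are assumptions introduced here:

```python
import numpy as np

def box_bearing(cx_pixel, image_width, hfov_deg=90.0):
    """Bearing (rad) of a box centre column; 0 = optical axis. hfov is assumed."""
    return np.deg2rad((cx_pixel / image_width - 0.5) * hfov_deg)

def associate(boxes, centroids, image_width, max_diff=np.deg2rad(5.0)):
    """boxes: [(x, y, w, h)]; centroids: (M, 2) cluster centres in metres.
    Returns [(box_index, distance_m, bearing_rad), ...] for the prediction net."""
    bearings = np.arctan2(centroids[:, 1], centroids[:, 0])
    ranges = np.hypot(centroids[:, 0], centroids[:, 1])
    fused = []
    for i, (x, y, w, h) in enumerate(boxes):
        b = box_bearing(x + w / 2.0, image_width)
        j = int(np.argmin(np.abs(bearings - b)))
        if abs(bearings[j] - b) < max_diff:   # one-to-one gating on bearing
            fused.append((i, float(ranges[j]), float(bearings[j])))
    return fused
```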
As a further improvement of the above technical solution, step S400 specifically includes: acquiring all pedestrian predicted trajectories and passing the detected pedestrians with their corresponding trajectory prediction points into the pedestrian re-identification framework, which holds a target pedestrian database; once the target pedestrian is recognized, selecting the corresponding predicted trajectory and storing the target pedestrian's trajectory prediction points.
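A common way to realize such screening against a stored target database, sketched here under the assumption that an appearance embedding is available for each detection (the patent names neither the feature extractor nor a threshold, so the 0.7 value is an assumption), is nearest-gallery matching by cosine similarity:

```python
import numpy as np

def is_target(query_emb, gallery_embs, threshold=0.7):
    """query_emb: (D,) feature of one detection; gallery_embs: (K, D) target database."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    return float(np.max(g @ q)) >= threshold   # best cosine match vs. the database

def select_target(detections, embeddings, predicted_tracks, gallery):
    """Keep only the trajectory prediction points of the recognized target pedestrian."""
    for det, emb, track in zip(detections, embeddings, predicted_tracks):
        if is_target(emb, gallery):
            return det, track                  # store this target's prediction points
    return None, None                          # target pedestrian not present
```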
As a further improvement of the above technical solution, step S500 specifically includes: acquiring the position of the target pedestrian and fusing it, by Kalman filtering, with the predicted point for the corresponding future moment; after fusion, entering a gate control circuit that is activated if and only if the pedestrian prediction network has been updated; and finally outputting the predicted trajectory strategy.
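A minimal sketch of this correction step, with every noise covariance assumed rather than taken from the patent: the network's predicted future point acts as the prior, the currently measured target position corrects it, and a boolean flag stands in for the gate control circuit.

```python
import numpy as np

class TrajectoryFuser:
    def __init__(self, q=0.05, r=0.2):
        self.x = np.zeros(2)          # fused (x, y) estimate, metres
        self.P = np.eye(2)            # estimate covariance
        self.Q = q * np.eye(2)        # network prediction noise (assumed)
        self.R = r * np.eye(2)        # lidar/camera measurement noise (assumed)

    def fuse(self, network_point, measured_pos, gate_open):
        """network_point, measured_pos: length-2 arrays; gate_open: net updated?"""
        if not gate_open:             # gate closed: keep the previous estimate
            return self.x
        self.x = np.asarray(network_point, float)         # prior from the network
        self.P = self.P + self.Q
        K = self.P @ np.linalg.inv(self.P + self.R)       # Kalman gain
        self.x = self.x + K @ (np.asarray(measured_pos, float) - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x                 # corrected predicted trajectory point
```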
A robot system for pedestrian trajectory prediction and following comprises: a ZED camera, a GPU embedded platform, an industrial personal computer, an MCU controller, a lidar sensor and a wheel odometer.
The ZED camera is used for capturing video stream information and feeding it into the video stream sparsification framework.
The GPU embedded platform is arranged below the ZED camera and is used for processing the images transmitted by the ZED camera and capturing the position information of all pedestrians with a deep learning method.
The lidar sensor is used for scanning the surrounding environment and sending the scan data to the industrial personal computer.
The industrial personal computer is connected to the GPU embedded platform and to the MCU controller; it aggregates the pedestrian point sets by Euclidean clustering and fuses them with the pedestrian position information captured by the camera to obtain the trajectories of all pedestrians in the video and their prediction network; it screens all pedestrians with the pedestrian re-identification network, judges whether the target pedestrian is present, and stores the target pedestrian's trajectory prediction points; and it performs Kalman filtering fusion on the predicted point set output by the deep learning network using the current position of the target pedestrian to realize correction.
The MCU controller is used for receiving control information from the industrial personal computer and converting it into pulse control information for the wheel odometer.
The wheel odometer is used for outputting the final robot predicted trajectory strategy.
The invention has the following beneficial effects: the lidar and the camera are fused, the pedestrian trajectory prediction network is combined with the pedestrian re-identification framework, prediction is performed with a socially interactive pedestrian trajectory network, the optimal viewing angle of the target pedestrian is actively selected for following, and CSRT tracking is integrated with Kalman filtering, so that the robot can accurately and autonomously predict and follow the trajectory of a pedestrian target. The method offers high robustness, high precision, high accuracy and low cost.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flowchart of the robot method for pedestrian trajectory prediction and following provided by the present invention;
FIG. 2 is a flowchart of step S100 of the robot method for pedestrian trajectory prediction and following provided by the present invention;
FIG. 3 is a flowchart of step S500 of the robot method for pedestrian trajectory prediction and following provided by the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements or elements having identical or similar functions throughout. The embodiments described below with reference to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that orientation descriptions such as upper, lower, front, rear, left and right are based on the orientations or positional relationships shown in the drawings; they are used merely for convenience and simplicity of description and do not indicate or imply that the apparatus or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; greater than, less than, exceeding and the like are understood as excluding the stated number, while above, below, within and the like are understood as including it. Descriptions of first and second serve only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present invention, unless explicitly defined otherwise, terms such as arranged, installed and connected should be construed broadly, and a person skilled in the art can reasonably determine their specific meanings in the present invention in combination with the specific content of the technical solution.
Referring to fig. 1, a robot method for pedestrian trajectory prediction and following comprises the following steps:
S100, capturing video stream information with a camera, feeding the video stream into a video stream sparsification framework, and capturing the position information of all pedestrians with a deep learning algorithm.
Referring to fig. 2, specifically, the camera captures video stream information and feeds it into the video stream sparsification framework; the video stream data are then passed to a pedestrian re-identification network and to a CSRT-based RGB recognition box, where the position information of all pedestrians is captured; the pedestrian re-identification network is YOLO v3.
The video stream information is transmitted to the pedestrian re-identification network and to the CSRT-based RGB recognition box; the re-identification network feeds back the RGB feature data of all pedestrians currently present and passes them to the CSRT-based RGB recognition box, which loops while waiting for the RGB feature data to be delivered and updated. A gate control circuit sits at the output of the video stream sparsification framework and is activated only when the pedestrian re-identification network recognizes an update, whereupon the model output by the neural network is obtained and a picture is output; after the re-identification result is passed into the CSRT-based RGB recognition box, the recognition result of the box is interrupted and the gate control circuit is then disconnected.
Preferably, the pedestrian re-identification network is YOLO v3.
S200, scanning the surrounding environment with the lidar sensor of the service robot, and aggregating pedestrian point sets with a Euclidean clustering algorithm.
Specifically, scanning is performed with the lidar sensor and the scan data are integrated; the scanned point-cloud data are clustered with a Euclidean clustering algorithm, iterating continuously until all points are clustered; finally, the bearing of each pedestrian relative to the service robot is found, and the aggregated pedestrian position point sets are output.
S300, fusing the aggregated pedestrian point sets with the pedestrian position information captured by the camera to obtain the trajectories of all pedestrians in the video and their prediction network.
Specifically, the aggregated pedestrian point sets are fused with the pedestrian position information captured by the camera, and the fused data are input into a neural network to obtain a pedestrian trajectory prediction network; the recognition results output by the camera are matched one-to-one with the distance and angle of each pedestrian relative to the robot, and the data are stored; the data are then fed into the pedestrian trajectory prediction network, which outputs socially interactive pedestrian trajectories.
S400, screening all pedestrians with the pedestrian re-identification framework, judging whether the target pedestrian is present, and storing the trajectory prediction points of the target pedestrian.
Specifically, all pedestrian predicted trajectories are acquired, and the detected pedestrians with their corresponding trajectory prediction points are passed into the pedestrian re-identification framework, which holds a target pedestrian database; once the target pedestrian is recognized, the corresponding predicted trajectory is selected and the target pedestrian's trajectory prediction points are stored.
S500, performing Kalman filtering fusion on the predicted point set output by the deep learning network using the current position of the target pedestrian to realize correction, and outputting the final robot predicted trajectory strategy.
Referring to fig. 3, specifically, the position of the target pedestrian (the distance between the pedestrian and the robot) and the trajectory prediction point for the corresponding future moment are acquired and fused by Kalman filtering. After fusion the data enter a gate control circuit, that is, the pedestrian's corresponding position data are sent to the gate; the gate control circuit is activated if and only if the pedestrian prediction network has been updated (the target pedestrian trajectory prediction has produced updated trajectory points). The predicted trajectory strategy is then output; once the target pedestrian trajectory prediction is updated, the Kalman-fused prediction path is interrupted and the gate control circuit is disconnected.
A robot system for pedestrian trajectory prediction and following comprises: a ZED camera, a GPU embedded platform, an industrial personal computer, an MCU controller, a lidar sensor and a wheel odometer.
The ZED camera is used for capturing video stream information and feeding it into the video stream sparsification framework.
The lidar sensor is used for scanning the surrounding environment and sending the scan data to the industrial personal computer.
Preferably, the lidar sensor is an RPLidar A2.
The GPU embedded platform is arranged below the ZED camera and is used for processing the images transmitted by the ZED camera and capturing the position information of all pedestrians with a deep learning method.
Preferably, the GPU embedded platform is a Jetson TX2.
The industrial personal computer is connected to the GPU embedded platform and to the MCU controller; it aggregates the pedestrian point sets by Euclidean clustering and fuses them with the pedestrian position information captured by the camera to obtain the trajectories of all pedestrians in the video and their prediction network; it screens all pedestrians with the pedestrian re-identification network, judges whether the target pedestrian is present, and stores the target pedestrian's trajectory prediction points; and it performs Kalman filtering fusion on the predicted point set output by the deep learning network using the current position of the target pedestrian to realize correction.
The MCU controller is used for receiving control information from the industrial personal computer and converting it into pulse control information for the wheel odometer.
The wheel odometer is used for outputting the final robot predicted trajectory strategy.
According to the invention, the 2D lidar and the RGB camera are fused, and conventional algorithms are combined with deep learning to acquire and fuse information about the pedestrian target and the environment, yielding the target's direction of motion and the robot's following path, so that the robot can accurately and autonomously predict and follow the trajectory of a pedestrian target. By combining the pedestrian trajectory prediction network with the pedestrian re-identification framework, prediction is performed with a socially interactive pedestrian trajectory network, the optimal viewing angle of the target pedestrian is actively selected for following, and low-resource methods such as CSRT tracking and Kalman filtering are integrated, which compensates for limited computing power and keeps the code running smoothly. The result is an actively following robot with high robustness. The system cost is low while good precision is ensured; using a common RGB camera and a 2D lidar, conventional algorithms are fully combined with deep learning, ensuring robustness while improving accuracy; and because pedestrian recognition is separated from re-identification, the neural network does not need repeated training, and positioning and recognition accuracy is high.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes can be made within the knowledge of a person of ordinary skill in the art without departing from the spirit of the present invention.

Claims (7)

1. A robot method for pedestrian trajectory prediction and following, characterized in that the method comprises the following steps:
S100, capturing video stream information with a camera, feeding the video stream into a video stream sparsification framework, and capturing the position information of all pedestrians with a deep learning algorithm;
S200, scanning the surrounding environment with the lidar sensor of the service robot, and aggregating pedestrian point sets with a Euclidean clustering algorithm;
S300, fusing the aggregated pedestrian point sets with the pedestrian position information captured by the camera to obtain the trajectories of all pedestrians in the video and their prediction network;
S400, screening all pedestrians with a pedestrian re-identification framework, judging whether the target pedestrian is present, and storing the trajectory prediction points of the target pedestrian;
S500, performing Kalman filtering fusion on the predicted points of the target pedestrian trajectory prediction for the corresponding future moments using the current position of the target pedestrian to realize correction, and outputting the final robot predicted trajectory strategy.
2. The robot method for pedestrian trajectory prediction and following according to claim 1, characterized in that step S100 is specifically: the camera captures video stream information and feeds it into the video stream sparsification framework; the video stream data are then passed to a pedestrian re-identification network and to a CSRT-based RGB recognition box, where the position information of all pedestrians is captured; the pedestrian re-identification network is YOLO v3.
3. The robot method for pedestrian trajectory prediction and following according to claim 1, characterized in that step S200 is specifically: scanning is performed with the lidar sensor and the scan data are integrated; the scanned point-cloud data are clustered with a Euclidean clustering algorithm, iterating continuously until all points are clustered; finally, the bearing of each pedestrian relative to the service robot is found, and the aggregated pedestrian position point sets are output.
4. The robot method for pedestrian trajectory prediction and following according to claim 1, characterized in that step S300 is specifically: fusing the aggregated pedestrian point sets with the pedestrian position information captured by the camera, and inputting the fused data into a neural network to obtain a pedestrian trajectory prediction network; matching the recognition results output by the camera one-to-one with the distance and angle of each pedestrian relative to the robot, and storing the data; then feeding the data into the pedestrian trajectory prediction network and outputting socially interactive pedestrian trajectories.
5. The robot method for pedestrian trajectory prediction and following according to claim 1, characterized in that step S400 is specifically: acquiring all pedestrian predicted trajectories and passing the detected pedestrians with their corresponding trajectory prediction points into the pedestrian re-identification framework, which holds a target pedestrian database; once the target pedestrian is recognized, selecting the corresponding predicted trajectory and storing the target pedestrian's trajectory prediction points.
6. The robot method for pedestrian trajectory prediction and following according to claim 1, characterized in that step S500 is specifically: acquiring the position of the target pedestrian and fusing it, by Kalman filtering, with the predicted point of the trajectory prediction for the corresponding future moment; after fusion, entering a gate control circuit that is activated if and only if the pedestrian prediction network has been updated; and finally outputting the predicted trajectory strategy.
7. A robot system for pedestrian trajectory prediction and following, characterized by comprising: a ZED camera, a GPU embedded platform, an industrial personal computer, an MCU controller, a lidar sensor and a wheel odometer;
the ZED camera is used for capturing video stream information and feeding it into the video stream sparsification framework;
the GPU embedded platform is located below the ZED camera and is used for processing the images transmitted by the ZED camera and capturing the position information of all pedestrians with a deep learning method;
the lidar sensor is used for scanning the surrounding environment and transmitting the scan data to the industrial personal computer;
the industrial personal computer is connected to the GPU embedded platform and to the MCU controller; it aggregates the pedestrian point sets by Euclidean clustering and fuses them with the pedestrian position information captured by the camera to obtain the trajectories of all pedestrians in the video and their prediction network; it screens all pedestrians with the pedestrian re-identification network, judges whether the target pedestrian is present, and stores the target pedestrian's trajectory prediction points; and it performs Kalman filtering fusion on the predicted points of the target pedestrian trajectory prediction for the corresponding future moments using the current position of the target pedestrian to realize correction;
the MCU controller is used for receiving control information from the industrial personal computer and converting it into pulse control information for the wheel odometer;
the wheel odometer is used for outputting the final robot predicted trajectory strategy.
CN202010388347.3A 2020-05-09 2020-05-09 Robot method and system for predicting and following pedestrian track Active CN111708042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010388347.3A CN111708042B (en) 2020-05-09 2020-05-09 Robot method and system for predicting and following pedestrian track

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010388347.3A CN111708042B (en) 2020-05-09 2020-05-09 Robot method and system for predicting and following pedestrian track

Publications (2)

Publication Number Publication Date
CN111708042A CN111708042A (en) 2020-09-25
CN111708042B true CN111708042B (en) 2023-05-02

Family

ID=72537185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010388347.3A Active CN111708042B (en) 2020-05-09 2020-05-09 Robot method and system for predicting and following pedestrian track

Country Status (1)

Country Link
CN (1) CN111708042B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418288B (en) * 2020-11-17 2023-02-03 武汉大学 GMS and motion detection-based dynamic vision SLAM method
CN112478015B (en) * 2021-02-03 2021-04-16 德鲁动力科技(成都)有限公司 Four-footed robot foot end touchdown detection method and system
CN112965081B (en) * 2021-02-05 2023-08-01 浙江大学 Simulated learning social navigation method based on feature map fused with pedestrian information
CN113313201A (en) * 2021-06-21 2021-08-27 南京挥戈智能科技有限公司 Multi-target detection and distance measurement method based on Swin transducer and ZED camera
CN113916221B (en) * 2021-09-09 2024-01-09 北京理工大学 Self-adaptive pedestrian dead reckoning method integrating visual odometer and BP network
CN115877328B (en) * 2023-03-06 2023-05-12 成都鹰谷米特科技有限公司 Signal receiving and transmitting method of array radar and array radar

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125087B (en) * 2016-06-15 2018-10-30 清研华宇智能机器人(天津)有限责任公司 Pedestrian tracting method in Dancing Robot room based on laser radar
US10884417B2 (en) * 2016-11-07 2021-01-05 Boston Incubator Center, LLC Navigation of mobile robots based on passenger following
CN107765220B (en) * 2017-09-20 2020-10-23 武汉木神机器人有限责任公司 Pedestrian following system and method based on UWB and laser radar hybrid positioning
CN109146929B (en) * 2018-07-05 2021-12-31 中山大学 Object identification and registration method based on event-triggered camera and three-dimensional laser radar fusion system
US11340610B2 (en) * 2018-07-24 2022-05-24 Huili Yu Autonomous target following method and device
CN109444911B (en) * 2018-10-18 2023-05-05 哈尔滨工程大学 Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion
CN109947119B (en) * 2019-04-23 2021-06-29 东北大学 Mobile robot autonomous following method based on multi-sensor fusion
CN110414396B (en) * 2019-07-19 2021-07-16 中国人民解放军海军工程大学 Unmanned ship perception fusion algorithm based on deep learning

Also Published As

Publication number Publication date
CN111708042A (en) 2020-09-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant