CN111027432B - Gait feature-based visual following robot method


Info

Publication number
CN111027432B
Authority
CN
China
Prior art keywords
gait
human body
image
extracting
constructing
Prior art date
Legal status
Active
Application number
CN201911211427.5A
Other languages
Chinese (zh)
Other versions
CN111027432A (en)
Inventor
夏阳
杜兆臣
郑仁成
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201911211427.5A priority Critical patent/CN111027432B/en
Publication of CN111027432A publication Critical patent/CN111027432A/en
Application granted granted Critical
Publication of CN111027432B publication Critical patent/CN111027432B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147: Distances to closest patterns, e.g. nearest neighbour classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Computing Systems (AREA)

Abstract

The invention belongs to the technical field of machine vision and provides a gait feature-based visual following robot method. The method comprises: acquiring a gait video of a walking person in advance and extracting the human gait contour map over a motion cycle; locating key human bones with the Kinect and extracting the joint swing angles; building feature vectors from the gait contour map and the joint swing angles and assigning different weights to the two parts to achieve feature fusion; designing a classifier using Euclidean-distance nearest-neighbor classification and training on samples to obtain the identification features; and finally writing a ROS object-follower plug-in to realize robot following. Realizing robot following through gait recognition requires no wearable equipment, is low in cost, makes information acquisition convenient, and achieves a high recognition rate, providing a new idea for robot visual following.

Description

Gait feature-based visual following robot method
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a gait feature-based visual following robot method.
Background
At present, in existing research on visual following robots, methods based on color matching, clothing-texture matching and depth information can encounter identification features (such as clothing texture and color) similar to those of the followed person when pedestrian traffic is heavy and the following environment is complex; the robot's identification features are then interfered with and following fails. For a visual following robot, accurate following requires correctly identifying the followed target, which in turn requires the robot to be able to capture the unique characteristics of the followed target at any time.
In the prior art, the identification features of a visual following robot are not unique, so the robot loses its target in a crowd; no effective solution to this problem has yet been proposed.
Disclosure of Invention
In view of the shortcomings of the prior art, the present invention provides a gait feature-based visual following robot method and system.
According to an aspect of an embodiment of the present invention, there is provided a gait feature-based visual following robot method including:
step 1, acquiring gait videos of people walking in indoor and outdoor environments in advance, and extracting key frames within a motion cycle according to the contact between the feet and the ground;
step 2, detecting a moving human body in a video by using a foreground/background segmentation algorithm based on a Gaussian mixture model;
step 3, extracting a gait contour map, and performing morphological denoising and size normalization;
step 4, locating key human bones using the Kinect, and extracting the swing angles of the human lower-limb joints;
step 5, establishing a feature vector according to the gait contour map and the joint swing angle, and setting different weights for the two parts to realize feature fusion;
step 6, designing a classifier using Euclidean-distance nearest-neighbor classification, and training on samples to obtain the identification features;
step 7, recognizing human gait, and constructing an object follower;
and 8, matching the template with the pedestrian to realize robot following.
The invention has the following beneficial effects. First, the joint swing angles and the gait contour map are used as identification features, realizing multi-feature fusion of the gait features. Second, the only gait acquisition device is the Kinect; no other acquisition or sensing equipment is required, so the cost is low. Finally, using gait as the identification feature allows the following robot to maintain a high recognition rate during following.
Drawings
FIG. 1 is a technical flow chart of a gait feature-based visual following robot method according to the invention;
FIG. 2 is the simplified skeletal swing-angle model established by the invention;
FIG. 3 is a synthesized gait contour map of the invention;
FIG. 4 is the abstract flowchart of the gait feature-based visual following robot method according to the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are further described below with reference to the accompanying drawings and technical solutions.
According to an aspect of an embodiment of the present invention, there is provided a gait feature-based visual following robot method including:
step 1, acquiring gait videos of people walking in indoor and outdoor environments in advance, and extracting key frames within a motion cycle according to the contact between the feet and the ground;
step 2, detecting a moving human body in a gait video by using a foreground/background segmentation algorithm based on a Gaussian mixture model;
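As an illustration of step 2, the sketch below shows how such a Gaussian-mixture foreground/background segmentation can be realized with OpenCV's MOG2 subtractor; the file name and parameter values are assumptions made for this example, not part of the invention.

```python
# Illustrative sketch only: Gaussian-mixture foreground/background
# segmentation with OpenCV's MOG2 subtractor. The file name and the
# parameter values are assumptions, not taken from the patent.
import cv2

cap = cv2.VideoCapture("gait_video.avi")   # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)         # 255 = moving foreground
    mask[mask == 127] = 0                  # MOG2 marks shadows as 127; drop them
    cv2.imshow("moving human body", mask)
    if cv2.waitKey(30) & 0xFF == 27:       # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```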
step 3, extracting a gait contour map, and performing morphological denoising and size normalization;
(1) Extracting the gait contour map: perform image binarization on the extracted gait contour map, setting the gray value of the human-body contour (the foreground) to 1 and the gray value of the background to 0, thereby obtaining a binary image with a white human silhouette on a black background;
(2) Denoising with morphological image processing: erode and dilate the binary image to realize the opening and closing operations, thereby filling holes and extracting boundaries and removing the noise;
(3) Size normalization, keeping the scanned human-body centroid height consistent: scan the image to obtain the height and width of the contour; determine the centroid coordinates from the height and width; starting from the centroid, extend a distance m in the positive and negative width directions and a distance n in the positive and negative height directions, obtaining a uniform size of 2m × 2n;
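The sketch below illustrates one way items (2) and (3) of step 3 can be realized with OpenCV morphology and a centroid-centred crop; the kernel size and the half-sizes m and n are illustrative assumptions.

```python
# Illustrative sketch of morphological denoising and centroid-based size
# normalization; the kernel size and the half-sizes m, n are assumptions.
import cv2
import numpy as np

def normalize_silhouette(mask, m=32, n=64):
    """Denoise a binary silhouette (non-zero = body) and crop a fixed
    2m x 2n window centred on the body centroid."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Opening (erosion then dilation) removes speckle noise;
    # closing (dilation then erosion) fills holes inside the body.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                          # no silhouette in this frame
    cx, cy = int(xs.mean()), int(ys.mean())  # centroid coordinates

    # Pad so the crop never leaves the image, then cut the 2m x 2n window.
    padded = cv2.copyMakeBorder(mask, n, n, m, m,
                                cv2.BORDER_CONSTANT, value=0)
    return padded[cy:cy + 2 * n, cx:cx + 2 * m]
```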
step 4, locating key human bones using the Kinect, and extracting the swing angles of the human lower-limb joints;
(1) Locating the coordinates of the human joint points: use the Kinect skeleton-tracking method to process the depth data and establish the coordinates of the human joints, including the hip joint and the left and right knee and ankle joints;
(2) Extracting the joint swing angles: construct a pendulum model of the swinging of the human lower limbs during walking; extract the left and right thigh vertical angles, the left and right shank vertical angles, and the angle between the hip joint and the thigh; and construct the joint-swing-angle feature vector from these five features;
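The following sketch shows one plausible computation of the five swing-angle features from Kinect joint coordinates; the joint-dictionary key names and the formulation of the fifth (hip-thigh) feature as the inter-thigh angle are assumptions made for this example.

```python
# Illustrative sketch of the five swing-angle features; the joint keys
# and the use of the inter-thigh angle for the hip-thigh feature are
# assumptions made for this example.
import numpy as np

def vertical_angle(upper, lower):
    """Angle in degrees between the segment upper->lower and the vertical
    axis (0 deg = segment hanging straight down; image y grows downward)."""
    dx, dy = lower[0] - upper[0], lower[1] - upper[1]
    return np.degrees(np.arctan2(dx, dy))

def swing_angle_vector(j):
    """Build the joint-swing-angle feature vector A from a dict of
    Kinect joint coordinates (x, y)."""
    left_thigh = vertical_angle(j["hip_left"], j["knee_left"])
    right_thigh = vertical_angle(j["hip_right"], j["knee_right"])
    return np.array([
        left_thigh,                                          # left thigh
        right_thigh,                                         # right thigh
        vertical_angle(j["knee_left"], j["ankle_left"]),     # left shank
        vertical_angle(j["knee_right"], j["ankle_right"]),   # right shank
        left_thigh - right_thigh,                            # hip-thigh angle
    ])
```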
step 5, establishing a feature vector according to the gait contour map and the joint swing angle, and setting different weights for the two parts to realize feature fusion;
(1) Establishing the feature vectors of the gait contour map and the joint swing angle: the gait contour map and the joint swing angle are used to build the feature vectors; the sequence period values of the different classes in the database are extracted, the range of image frames from which the joint swing angle and the gait energy image are extracted is selected according to these period values, the features are extracted, and the feature vectors are constructed;
(2) Realizing feature fusion at the feature layer: a weighted multi-feature fusion algorithm assigns a different weight to each feature vector; the joint-swing-angle feature vector is defined as A and given a weight ω_A, and the gait contour feature vector is defined as S and given a weight ω_S, with ω_A + ω_S = 1; the fused feature vector F is constructed as:
F = {ω_A·A, ω_S·S}
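A minimal sketch of this weighted fusion follows; the 0.4/0.6 weight split and the per-part normalization are illustrative assumptions (the text only requires ω_A + ω_S = 1).

```python
# Minimal sketch of the weighted feature-layer fusion; the 0.4/0.6 split
# and the per-part normalization are illustrative assumptions (the text
# only requires w_A + w_S = 1).
import numpy as np

def fuse_features(A, S, w_A=0.4, w_S=0.6):
    """Return the fused vector F = {w_A*A, w_S*S}."""
    assert abs(w_A + w_S - 1.0) < 1e-9
    A = A / (np.linalg.norm(A) + 1e-12)    # keep either part from dominating
    S = S / (np.linalg.norm(S) + 1e-12)    # purely through its scale
    return np.concatenate([w_A * A, w_S * S])
```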
step 6, designing a classifier using Euclidean-distance nearest-neighbor classification, and training on samples to obtain the identification features, including sample collection and sample training;
(1) Collecting samples: according to step 1, training samples are collected indoors and outdoors separately. Indoors, interference from surrounding desks, chairs and other equipment must be considered: 10 videos are collected, 5 gait cycles are taken from each video, and 50 key-frame pictures are extracted per cycle, i.e. 2500 indoor samples. Outdoors, the influence of illumination is considered by collecting at different times: 2 videos are collected at each of 8:00, 10:00, 12:00, 14:00 and 16:00, 5 cycles are taken from each video, and 50 key-frame pictures are extracted per cycle, i.e. 2500 outdoor samples. This gives 5000 positive samples in total; 20 negative samples are added manually, for 5020 samples overall.
(2) Sample training: model selection uses a cross-validation set: 60% of the data serve as the training set to train the 10 candidate models; 20% of the data serve as the cross-validation set, on which the cost-function values of the 10 candidate models are computed and the model with the smallest cost function is selected; the remaining 20% of the data are used to verify the selected model by computing its cost-function value;
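The sketch below illustrates the 60/20/20 split and the Euclidean-distance nearest-neighbour rule of step 6; the candidate models and their cost function are omitted for brevity.

```python
# Illustrative sketch of the 60/20/20 data split and the
# Euclidean-distance nearest-neighbour rule of step 6; the candidate
# models and their cost function are omitted here.
import numpy as np

def split_60_20_20(X, y, seed=0):
    """Shuffle and split into training / cross-validation / test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    a, b = int(0.6 * len(X)), int(0.8 * len(X))
    return (X[idx[:a]], y[idx[:a]],        # 60% training set
            X[idx[a:b]], y[idx[a:b]],      # 20% cross-validation set
            X[idx[b:]], y[idx[b:]])        # 20% verification set

def nearest_neighbour(X_train, y_train, x):
    """Label x with the class of the Euclidean-nearest training sample."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]
```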
step 7, recognizing human gait and constructing an object follower;
(1) Constructing an object recognizer: find the region of interest (ROI), track the human body according to the ROI to acquire its gait information and publish it on the /roi topic, and keep the human body at the center of the view; if the target deviates, the offset is compensated by rotating the robot; the /roi topic drives the recognizer system, which sends rotation commands to the robot as the position of the human body changes, realizing real-time tracking of the human gait;
(2) Constructing an object follower: scan the moving human body, subscribe to the region-of-interest topic /roi and the depth-image topic camera/depth/image_raw, and process the depth image of the moving body into depth information; OpenCV is used to subscribe to the depth image from OpenNI, which publishes sensor_msgs/Image messages on camera/depth/image_raw; the distance to the scanned ROI is obtained from the depth image of the moving body, and the speed of the following cart is adjusted according to the speed of the moving body so that a fixed following distance is kept;
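A minimal ROS node sketching the follower logic of step 7 is given below: the /roi and camera/depth/image_raw topic names follow the text, while the control gains, goal distance, image width and the cmd_vel output are illustrative assumptions.

```python
#!/usr/bin/env python
# Minimal ROS (rospy) sketch of the object follower: subscribe to /roi
# and camera/depth/image_raw, rotate to keep the person centred, and
# drive to hold the following distance. The gains, goal distance,
# image width and the cmd_vel output are illustrative assumptions.
import rospy
import numpy as np
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image, RegionOfInterest

class ObjectFollower:
    def __init__(self):
        rospy.init_node("object_follower")
        self.bridge = CvBridge()
        self.roi = None
        self.goal_dist = 1.0               # desired gap to the person [m]
        self.image_width = 640             # assumed camera resolution
        self.cmd = Twist()
        self.pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("roi", RegionOfInterest, self.on_roi)
        rospy.Subscriber("camera/depth/image_raw", Image, self.on_depth)

    def on_roi(self, roi):
        self.roi = roi
        # Rotate against the horizontal offset of the ROI centre.
        err = (roi.x_offset + roi.width / 2.0) - self.image_width / 2.0
        self.cmd.angular.z = -0.002 * err

    def on_depth(self, msg):
        if self.roi is None:
            return
        depth = self.bridge.imgmsg_to_cv2(msg)   # assumed 32FC1, metres
        r = self.roi
        patch = depth[r.y_offset:r.y_offset + r.height,
                      r.x_offset:r.x_offset + r.width]
        d = float(np.nanmedian(patch))           # robust distance to the body
        if not np.isfinite(d):
            return
        self.cmd.linear.x = 0.5 * (d - self.goal_dist)  # speed up if too far
        self.pub.publish(self.cmd)

if __name__ == "__main__":
    ObjectFollower()
    rospy.spin()
```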
and step 8, matching the template with the pedestrian to realize robot following.

Claims (1)

1. A gait feature-based visual following robot method is characterized by comprising the following steps:
step 1, acquiring gait videos of people walking in indoor and outdoor environments in advance, and extracting key frames within a motion cycle according to the contact between the feet and the ground;
step 2, detecting a moving human body in a gait video by using a foreground/background segmentation algorithm based on a Gaussian mixture model;
step 3, extracting a gait contour map, and performing morphological denoising and size normalization;
(1) Extracting the gait contour map: perform image binarization on the extracted gait contour map, setting the gray value of the human-body contour (the foreground) to 1 and the gray value of the background to 0, thereby obtaining a binary image with a white human silhouette on a black background;
(2) Denoising with morphological image processing: erode and dilate the binary image to realize the opening and closing operations, thereby filling holes and extracting boundaries and removing the noise;
(3) Size normalization, keeping the scanned human-body centroid height consistent: scan the image to obtain the height and width of the contour; determine the centroid coordinates from the height and width; starting from the centroid, extend a distance m in the positive and negative width directions and a distance n in the positive and negative height directions, obtaining a uniform size of 2m × 2n;
step 4, locating key human bones using the Kinect, and extracting the swing angles of the human lower-limb joints;
(1) Locating the coordinates of the human joint points: use the Kinect skeleton-tracking method to process the depth data and establish the coordinates of the human joints, including the hip joint and the left and right knee and ankle joints;
(2) Extracting the joint swing angles: construct a pendulum model of the swinging of the human lower limbs during walking; extract the left and right thigh vertical angles, the left and right shank vertical angles, and the angle between the hip joint and the thigh; and construct the joint-swing-angle feature vector from these five features;
step 5, establishing a feature vector according to the gait contour map and the joint swing angle, and setting different weights for the two parts to realize feature fusion;
(1) Establishing the feature vectors of the gait contour map and the joint swing angle: the gait contour map and the joint swing angle are used to build the feature vectors; the sequence period values of the different classes in the database are extracted, the range of image frames from which the joint swing angle and the gait energy image are extracted is selected according to these period values, the features are extracted, and the feature vectors are constructed;
(2) Realizing feature fusion at the feature layer: a weighted multi-feature fusion algorithm assigns a different weight to each feature vector; the joint-swing-angle feature vector is defined as A and given a weight ω_A, and the gait contour feature vector is defined as S and given a weight ω_S, with ω_A + ω_S = 1; the fused feature vector F is constructed as:
F = {ω_A·A, ω_S·S}
step 6, designing a classifier using Euclidean-distance nearest-neighbor classification, and training on samples to obtain the identification features, including sample collection and sample training;
(1) Collecting samples: taking the indoor and outdoor gait videos collected in the step 1 as training samples, wherein the indoor collected videos need to consider the interference of office equipment, and the outdoor collected videos need to consider the change of illumination;
(2) Sample training: model selection uses a cross-validation set: 60% of the data serve as the training set to train the candidate models; 20% of the data serve as the cross-validation set, on which the cost-function values of the candidate models are computed and the model with the smallest cost function is selected; the remaining 20% of the data are used to verify the selected model by computing its cost-function value;
step 7, recognizing human gait, and constructing an object follower;
(1) Constructing an object recognizer: find the region of interest (ROI), track the human body according to the ROI to acquire its gait information and publish it on the /roi topic, and keep the human body at the center of the view; if the target deviates, the offset is compensated by rotating the robot; the /roi topic drives the recognizer system, which sends rotation commands to the robot as the position of the human body changes, realizing real-time tracking of the human gait;
(2) Constructing an object follower: scan the moving human body, subscribe to the region-of-interest topic /roi and the depth-image topic camera/depth/image_raw, and process the depth image of the moving body into depth information; OpenCV is used to subscribe to the depth image from OpenNI, which publishes sensor_msgs/Image messages on camera/depth/image_raw; the distance to the scanned region of interest (ROI) is obtained from the depth image of the moving body, and the speed of the following cart is adjusted according to the speed of the moving body so that the following distance is kept;
and step 8, matching the template with the pedestrian to realize robot following.
CN201911211427.5A 2019-12-02 2019-12-02 Gait feature-based visual following robot method Active CN111027432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911211427.5A CN111027432B (en) 2019-12-02 2019-12-02 Gait feature-based visual following robot method

Publications (2)

Publication Number Publication Date
CN111027432A CN111027432A (en) 2020-04-17
CN111027432B (en) 2022-10-04

Family

ID=70207513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911211427.5A Active CN111027432B (en) 2019-12-02 2019-12-02 Gait feature-based visual following robot method

Country Status (1)

Country Link
CN (1) CN111027432B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950418A (en) * 2020-08-03 2020-11-17 启航汽车有限公司 Gait recognition method, device and system based on leg features and readable storage medium
CN112046662B (en) * 2020-08-13 2023-01-17 哈尔滨工业大学(深圳) Walking-replacing following robot and walking-replacing following method thereof
CN112704491B (en) * 2020-12-28 2022-01-28 华南理工大学 Lower limb gait prediction method based on attitude sensor and dynamic capture template data
CN113095268B (en) * 2021-04-22 2023-11-21 中德(珠海)人工智能研究院有限公司 Robot gait learning method, system and storage medium based on video stream
CN113238552A (en) * 2021-04-28 2021-08-10 深圳优地科技有限公司 Robot, robot movement method, robot movement device and computer-readable storage medium
CN114863567B (en) * 2022-05-19 2023-03-10 北京中科睿医信息科技有限公司 Method and device for determining gait information
CN118212697A (en) * 2024-05-20 2024-06-18 上海商涌科技有限公司 Feature extraction method, gait analysis method, recognition method, device and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056050A (en) * 2016-05-23 2016-10-26 武汉盈力科技有限公司 Multi-view gait identification method based on adaptive three dimensional human motion statistic model
CN106295544A (en) * 2016-08-04 2017-01-04 山东师范大学 A kind of unchanged view angle gait recognition method based on Kinect
CN107122718A (en) * 2017-04-05 2017-09-01 西北工业大学 A kind of new target pedestrian's trace tracking method based on Kinect

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056050A (en) * 2016-05-23 2016-10-26 武汉盈力科技有限公司 Multi-view gait identification method based on adaptive three dimensional human motion statistic model
CN106295544A (en) * 2016-08-04 2017-01-04 山东师范大学 A kind of unchanged view angle gait recognition method based on Kinect
CN107122718A (en) * 2017-04-05 2017-09-01 西北工业大学 A kind of new target pedestrian's trace tracking method based on Kinect

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A 3D human gait modeling and recognition method based on a depth camera; Luo Jian et al.; 《光学技术》 (Optical Technique); 2019-11-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN111027432A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN111027432B (en) Gait feature-based visual following robot method
CN107423729B (en) Remote brain-like three-dimensional gait recognition system oriented to complex visual scene and implementation method
US10417775B2 (en) Method for implementing human skeleton tracking system based on depth data
Uddin et al. Human activity recognition using body joint‐angle features and hidden Markov model
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
CN106056053B (en) The human posture's recognition methods extracted based on skeleton character point
JP5873442B2 (en) Object detection apparatus and object detection method
CN107481315A (en) A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms
CN109344694B (en) Human body basic action real-time identification method based on three-dimensional human body skeleton
CN109299659A (en) A kind of human posture recognition method and system based on RGB camera and deep learning
CN109758756B (en) Gymnastics video analysis method and system based on 3D camera
CN106909890B (en) Human behavior recognition method based on part clustering characteristics
CN108898108B (en) User abnormal behavior monitoring system and method based on sweeping robot
CN111476077A (en) Multi-view gait recognition method based on deep learning
CN117671738B (en) Human body posture recognition system based on artificial intelligence
CN107122711A (en) A kind of night vision video gait recognition method based on angle radial transformation and barycenter
CN110991398A (en) Gait recognition method and system based on improved gait energy map
CN115376034A (en) Motion video acquisition and editing method and device based on human body three-dimensional posture space-time correlation action recognition
Kanaujia et al. Part segmentation of visual hull for 3d human pose estimation
Jean et al. Body tracking in human walk from monocular video sequences
CN110111368B (en) Human body posture recognition-based similar moving target detection and tracking method
Fan et al. Pose estimation of human body based on silhouette images
CN207529394U (en) A kind of remote class brain three-dimensional gait identifying system towards under complicated visual scene
Kuang et al. An effective skeleton extraction method based on Kinect depth image
El-Sallam et al. A low cost 3D markerless system for the reconstruction of athletic techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant