CN111291690A - Route planning method, route planning device, robot, and medium


Info

Publication number: CN111291690A
Application number: CN202010096131.XA
Authority: CN (China)
Original language: Chinese (zh)
Other versions: CN111291690B
Legal status: Granted
Inventor
吴婉银
陶大鹏
林旭
Current Assignee: Shenzhen Union Vision Innovation Technology Co ltd
Original Assignee: Shenzhen Union Vision Innovation Technology Co ltd
Application filed by Shenzhen Union Vision Innovation Technology Co ltd
Priority application: CN202010096131.XA
Publications: CN111291690A (application), CN111291690B (grant)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions


Abstract

The application belongs to the technical field of image recognition and provides a route planning method, a route planning device, a robot, and a medium. The route planning method includes: acquiring a set of crowd images of a monitored area within a target time period; inputting the crowd images into a trained crowd flow prediction model, which performs crowd flow prediction to obtain a crowd density map set and a crowd flow map; and clustering the target crowd density map together with the crowd flow map, so that the resulting target crowd density cluster reflects the crowd density distribution of the monitored area after the target time period. Route planning is then performed with the position of the target crowd density cluster as the end point to obtain a driving route, enabling the robot to reach the area with the highest crowd density. This avoids the robot's driving route bypassing crowds or wandering in unmanned areas, and widens the application range of robot driving-route planning schemes.

Description

Route planning method, route planning device, robot, and medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a route planning method, a route planning device, a robot, and a computer-readable storage medium.
Background
With the continuous development of artificial intelligence technology, a variety of intelligent robots have been developed: for example, robots that seat customers in restaurants, or robots that provide consultation services in shops.
However, when a robot provides services in a public place, existing robots travel along a preset route to avoid colliding with obstacles while driving. Specifically, so that the robot can bypass obstacles in the environment while traveling, a fixed travel path must be arranged according to the known layout of the public place, and the robot made to follow it. Yet pedestrian traffic and crowd density in public places are not constant; if the robot follows a fixed route, its travel path is likely to bypass the crowd entirely, or the robot may wander in an unmanned area. The existing planning schemes for robot driving routes therefore have a narrow application range.
Disclosure of Invention
In view of this, embodiments of the present application provide a route planning method, a route planning apparatus, a robot, and a computer-readable storage medium, so as to solve the problem that existing robot route planning schemes have a narrow application range.
A first aspect of an embodiment of the present application provides a route planning method, including:
acquiring a crowd image set of a monitored area in a target time period; the crowd image set comprises a plurality of crowd images to be identified with continuous time sequences;
inputting each image of the crowd to be identified in the crowd image set into a trained crowd flow prediction model;
outputting a corresponding crowd density map set and a corresponding crowd flow map based on the crowd image set through the trained crowd flow prediction model; wherein the crowd flow graph is used for describing the crowd flow direction corresponding to the crowd density map set;
selecting a target crowd density map from the crowd density map set and clustering the crowd flow map to obtain a plurality of crowd density clusters, and determining a target crowd density cluster from the crowd density clusters;
and planning a route by taking the position of the target crowd density cluster as a terminal point to obtain a driving route.
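For illustration only, the five steps of the first aspect can be sketched as a minimal Python pipeline. The model, the clustering step, and the planner below are stand-in stubs with hypothetical names, not the claimed implementations:

```python
import numpy as np

def fake_model(images):
    # Stand-in for the trained crowd flow prediction model: echoes the input
    # frames as "density maps" and returns a zero flow field (illustrative).
    return images, np.zeros(images[0].shape + (2,))

def cluster_density(density, flow):
    # Toy stand-in for the clustering step: returns the densest pixel as the
    # single "crowd density cluster", with its density value as the weight.
    idx = np.unravel_index(np.argmax(density), density.shape)
    return [{"position": tuple(int(i) for i in idx),
             "weight": float(density[idx])}]

def plan_path(start, goal):
    # Placeholder route planner: straight line from start to goal.
    return [start, goal]

def plan_route(crowd_images, model, current_pos):
    density_maps, flow_map = model(crowd_images)          # model inference
    target_density = density_maps[0]                      # map at the start time point
    clusters = cluster_density(target_density, flow_map)  # clustering step
    target = max(clusters, key=lambda c: c["weight"])     # target crowd density cluster
    return plan_path(current_pos, target["position"])     # route planning step

frames = np.zeros((3, 4, 4))
frames[:, 2, 3] = 1.0                                     # densest spot at row 2, col 3
route = plan_route(frames, fake_model, (0, 0))
print(route)  # [(0, 0), (2, 3)]
```

Each stub corresponds to one claimed step; the later claims refine the model inference, clustering, and planning stages individually.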
Further, the trained crowd flow prediction model comprises an encoding framework and a decoding framework;
the outputting, through the trained crowd flow prediction model, of a corresponding crowd density map set and a corresponding crowd flow map based on the crowd image set comprises:
performing feature extraction on each crowd image to be identified in the crowd image set through the encoding framework, and transmitting the extracted crowd characteristic information to the decoding framework;
generating and outputting, by the decoding framework, a crowd density map set and a crowd flow map based on the crowd characteristic information.
Further, the decoding framework comprises a first decoding framework and a second decoding framework;
the outputting, by the decoding framework, a set of crowd density maps and a crowd flow map based on the crowd characteristic information, comprising:
generating and outputting, by the first decoding framework, a set of crowd density maps based on the crowd characteristic information; the crowd density map set comprises a plurality of crowd density maps with continuous time sequences, and the crowd density maps with continuous time sequences in the crowd density map set correspond to the crowd images to be identified with continuous time sequences in the crowd image set one by one;
and calculating, through the second decoding framework, the crowd flow characteristic information in the plurality of time-sequence-continuous crowd density maps to generate and output the crowd flow map.
Further, the second decoding framework calculates the crowd flow characteristic information in the time-sequence-continuous crowd density maps by using an optical flow method to generate the crowd flow map.
Further, the target time period comprises a starting time point and an ending time point;
the selecting a target crowd density map from the crowd density map set and clustering the crowd flow map to obtain a plurality of crowd density clusters, and determining the target crowd density cluster from the crowd density clusters, includes:
selecting a crowd density graph corresponding to the starting time point from the crowd density graph set as a target crowd density graph;
clustering the target crowd density map and the crowd flow map according to preset neighborhood parameters to obtain Y crowd density clusters, where Y is an integer greater than 0;
determining a weight value of each of the Y crowd density clusters through the following formula to obtain a weight value set:

S(c_i) = n_i / (v_i + ξ)

where S(c_i) is the weight of the i-th crowd density cluster among the Y crowd density clusters, n_i is the number of pixel points in the i-th crowd density cluster, v_i is the moving speed of the i-th crowd density cluster, and ξ is a preset compensation parameter with ξ ≠ 0;
determining a maximum weight value from the weight value set, and identifying a crowd density cluster corresponding to the maximum weight value as a target crowd density cluster; wherein the target crowd density cluster is used for representing the position of the monitored area with the largest crowd density at the termination time point.
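The weight-and-select step can be illustrated numerically. Because the formula is reproduced in the original record only as an image placeholder, the form S(c_i) = n_i / (v_i + ξ) used below is an assumed reconstruction from the surrounding definitions (pixel count n_i, moving speed v_i, compensation parameter ξ), not the verified patent equation:

```python
import numpy as np

# Hypothetical statistics for Y = 3 crowd density clusters.
n = np.array([120.0, 300.0, 45.0])   # n_i: number of pixel points per cluster
v = np.array([0.0, 2.0, 0.5])        # v_i: moving speed of each cluster
xi = 0.1                             # preset compensation parameter, xi != 0

# Assumed weight form S(c_i) = n_i / (v_i + xi): large, slow-moving clusters
# score highest, and xi keeps the divisor nonzero for stationary clusters.
S = n / (v + xi)

target = int(np.argmax(S))           # cluster with the maximum weight value
print(target, S.round(1))            # cluster 0 wins: dense and barely moving
```

The selected index plays the role of the target crowd density cluster, i.e. the predicted position with the largest crowd density at the termination time point.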
Further, the performing route planning by using the position of the target crowd density cluster as a terminal point to obtain a driving route includes:
acquiring a target position of a target crowd density cluster in the monitored area;
carrying out route planning according to the current position and the target position to obtain a driving route; wherein the travel route is used to describe a travel path from the current location to the target location in the monitored area.
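A minimal sketch of this final planning step, using breadth-first search on an occupancy grid as a stand-in planner (the patent does not prescribe a particular planning algorithm; grid, positions, and function names are illustrative):

```python
from collections import deque

def plan_route_on_grid(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns a cell path from the robot's current position to the target
    crowd density cluster position, or None if the goal is unreachable."""
    q, prev = deque([start]), {start: None}
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
p = plan_route_on_grid(grid, (0, 0), (2, 0))  # route around the obstacle row
print(p)
```

The returned cell sequence corresponds to the travel path from the current location to the target location in the monitored area.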
Further, before the step of inputting each image of the crowd to be identified in the set of crowd images into the trained crowd flow prediction model, the method further includes:
acquiring a plurality of original crowd image samples within a preset time period, and performing characteristic marking operation on each original crowd image sample to obtain a crowd image sample set with marks;
and training a crowd flow prediction model by using the crowd image sample set with the marks to obtain the trained crowd flow prediction model.
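The feature marking operation can be illustrated with a common crowd-counting convention: converting marked head positions into a ground-truth density map by placing a normalized Gaussian at each head, so the map integrates to the person count. The patent does not mandate this exact construction; it is shown as one plausible realization:

```python
import numpy as np

def density_map_from_heads(shape, heads, sigma=1.0):
    """Turn point annotations (marked head positions) into a ground-truth
    crowd density map: each head contributes a Gaussian normalized to sum
    to 1, so the whole map sums to the number of annotated people."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape)
    for (r, c) in heads:
        g = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()          # one unit of density mass per person
    return dmap

dmap = density_map_from_heads((32, 32), [(8, 8), (20, 25)], sigma=2.0)
print(round(float(dmap.sum()), 4))   # 2.0: total mass equals head count
```

Pairs of such density maps with their source images would form the marked crowd image sample set used for training.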
Further, after the step of performing route planning by taking the position of the target crowd density cluster as a terminal point to obtain a driving route, the method further includes:
and driving from the current position to the position of the target crowd density cluster according to a preset avoidance strategy and the driving route.
A second aspect of an embodiment of the present application provides a route planning apparatus, including:
the image acquisition unit is used for acquiring a crowd image set of a monitored area in a target time period; the crowd image set comprises a plurality of crowd images to be identified with continuous time sequences;
the input unit is used for inputting each to-be-identified crowd image in the crowd image set into a trained crowd flow prediction model;
a first execution unit, configured to output, through the trained crowd flow prediction model, a corresponding crowd density map set and a corresponding crowd flow map based on the crowd image set; wherein the crowd flow graph is used for describing the crowd flow direction corresponding to the crowd density map set;
the clustering unit is used for selecting a target crowd density map from the crowd density map set and clustering the crowd flow map to obtain a plurality of crowd density clusters and determining a target crowd density cluster from the crowd density clusters;
and the second execution unit is used for carrying out route planning by taking the position of the target crowd density cluster as a terminal point to obtain a driving route.
Further, the trained crowd flow prediction model comprises: an encoding framework and a decoding framework;
the first execution unit is specifically configured to perform feature extraction on each crowd image to be identified in the crowd image set through the encoding framework and transmit the extracted crowd characteristic information to the decoding framework, and to generate and output, by the decoding framework, a crowd density map set and a crowd flow map based on the crowd characteristic information.
Further, the first execution unit is specifically configured to generate and output, by the first decoding framework and based on the crowd characteristic information, a crowd density map set; the crowd density map set comprises a plurality of crowd density maps with continuous time sequences, and the crowd density maps with continuous time sequences in the crowd density map set correspond to the crowd images to be identified with continuous time sequences in the crowd image set one by one; and calculating the crowd flow characteristic information in the crowd density maps with continuous time sequences through the second decoding frame to generate and output a crowd flow map.
Further, the first execution unit is specifically configured to calculate, by using an optical flow method through the second decoding framework, crowd flow feature information in multiple time-series continuous crowd density maps, and generate a crowd flow map.
Further, the target time period comprises a starting time point and an ending time point;
the clustering unit is specifically configured to: select the crowd density map corresponding to the starting time point from the crowd density map set as the target crowd density map; cluster the target crowd density map and the crowd flow map according to preset neighborhood parameters to obtain Y crowd density clusters, where Y is an integer greater than 0; and determine a weight value of each of the Y crowd density clusters through the following formula to obtain a weight value set:

S(c_i) = n_i / (v_i + ξ)

where S(c_i) is the weight of the i-th crowd density cluster among the Y crowd density clusters, n_i is the number of pixel points in the i-th crowd density cluster, v_i is the moving speed of the i-th crowd density cluster, and ξ is a preset compensation parameter with ξ ≠ 0. The clustering unit then determines a maximum weight value from the weight value set and identifies the crowd density cluster corresponding to the maximum weight value as the target crowd density cluster, the target crowd density cluster being used for representing the position of the monitored area with the largest crowd density at the termination time point.
Further, the second execution unit is specifically configured to acquire a target position of a target crowd density cluster in the monitored area; carrying out route planning according to the current position and the target position to obtain a driving route; wherein the travel route is used to describe a travel path from the current location to the target location in the monitored area.
Further, the route planning apparatus further includes:
the system comprises a sample set acquisition unit, a characteristic marking unit and a comparison unit, wherein the sample set acquisition unit is used for acquiring a plurality of original crowd image samples in a preset time period and carrying out characteristic marking operation on each original crowd image sample to obtain a crowd image sample set with marks;
and the training unit is used for training a crowd flow prediction model by using the crowd image sample set with the marks to obtain the trained crowd flow prediction model.
Further, the route planning apparatus further includes:
and the third execution unit is used for driving from the current position to the position of the target crowd density cluster according to a preset avoidance strategy and the driving route.
A third aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the route planning method provided by the first aspect.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the route planning method provided by the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on a robot, causes the robot to perform the steps of the route planning method according to any one of the first aspects described above.
The route planning method, the route planning device, the robot and the computer readable storage medium provided by the embodiment of the application have the following beneficial effects:
the route planning method provided by the embodiment of the application inputs the acquired crowd image set of the monitored area in the target time period into the trained crowd flow prediction model, and then carries out crowd flow prediction through the trained crowd flow prediction model to further obtain the crowd density map set and the crowd flow map, because the target crowd density map in the crowd density map set can reflect the crowd density condition at a certain moment in the target time period and the crowd flow map can reflect the crowd flow direction and speed in the target time period, the target crowd density map and the crowd flow map are clustered, the obtained target crowd density cluster can reflect the crowd density distribution condition of the monitored area after the target time period, and the running route obtained by carrying out route planning by taking the position of the target crowd density cluster as the terminal point, the robot can reach the area with the maximum pedestrian flow density, the phenomenon that the running path of the robot avoids crowds or wanders in an unmanned area is avoided, and the application range of the planning scheme of the running path of the robot is widened.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without inventive effort.
Fig. 1 is a flowchart of an implementation of a route planning method according to an embodiment of the present application;
Fig. 2 is a flowchart of an implementation of a route planning method according to another embodiment of the present application;
Fig. 3 is a block diagram of a route planning apparatus according to an embodiment of the present application;
Fig. 4 is a block diagram of a robot according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to Fig. 1, Fig. 1 is a flowchart illustrating a route planning method according to an embodiment of the present application. In this embodiment, the route planning method is used for planning the driving route of a robot, and the execution subject of the method is the robot or a computer device configured on the robot, for example a robot that provides pedestrians with consultation or guidance services in a public place.
The route planning method as shown in fig. 1 comprises the following steps:
s11: acquiring a crowd image set of a monitored area in a target time period; the crowd image set comprises a plurality of sequential crowd images to be identified.
In step S11, the monitored area is an area in which the robot provides services and which satisfies the robot's travel conditions. The target time period is the reference time period for determining the travel path of the robot. The crowd image set is the set of all crowd images acquired within the target time period.
It should be noted that the crowd image set includes a plurality of time-sequence-continuous crowd images to be identified. Time-sequence-continuous means that each crowd image to be identified corresponds to a time point, and the time points corresponding to the plurality of crowd images to be identified form a continuous time sequence, that is, the target time period. The time-sequence-continuous crowd images to be identified in the crowd image set are therefore the crowd images acquired consecutively within the target time period.
In the embodiments of the present application, the crowd images of the monitored area may be acquired by a monitoring device disposed in the monitored area, for example a monitoring camera. The monitoring device collects crowd images in the monitored area and stores them in a preset database in real time; when a driving route needs to be planned for the robot, a target time period is set, and the crowd image set of the monitored area in that target time period is obtained from the preset database.
In practical application, when the crowd flow in the monitored area has a certain rule or the crowd flow has fixed timeliness, the crowd image set of the monitored area in the target time period can be obtained from the historical monitoring video of the monitored area, that is, the time and the obtaining mode for obtaining the crowd image set are not limited here.
S12: and inputting each image of the crowd to be identified in the crowd image set into a trained crowd flow prediction model.
In step S12, the trained crowd flow prediction model is used to estimate the crowd distribution after the target time period from the crowd image set. Because the crowd image set includes a plurality of time-sequence-continuous crowd images to be identified, adjacent images reflect the correspondence between crowd density change and time. The trained crowd flow prediction model therefore performs feature extraction on each crowd image to be identified, recognizes the crowd density represented by each image, and associates it with the image's position in the time sequence, so that the crowd distribution after the target time period can be determined.
It should be noted that inputting each crowd image to be identified in the crowd image set into the trained crowd flow prediction model is equivalent to inputting the plurality of time-sequence-continuous crowd images into the model one by one. Because the images are continuous in time sequence, adjacent images carry a temporal feature association; the trained crowd flow prediction model exploits this association when predicting crowd flow, so that more accurate crowd distribution and crowd flow results can be obtained.
It is understood that, in practical applications, in order to further improve the recognition efficiency of the trained crowd flow prediction model, the image input specification may be configured in advance, for example, a size parameter or a resolution parameter of each crowd image to be recognized in the input crowd image set is configured. Before the image of the crowd to be recognized is input into the trained crowd flow prediction model, if the image of the crowd to be recognized does not meet the preset image input specification, the image of the crowd to be recognized can be standardized, and then the image after the standardization is input into the trained crowd flow prediction model.
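A minimal sketch of such a standardization step, assuming a hypothetical 64x64 input specification and [0, 1] intensity normalization (the patent leaves the exact specification open):

```python
import numpy as np

def standardize(img, size=(64, 64)):
    """Nearest-neighbour resize plus [0, 1] normalization: one simple way to
    make a crowd image meet a preconfigured image input specification."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source col for each output col
    out = img[rows][:, cols].astype(np.float32)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo) if hi > lo else np.zeros_like(out)

# A fake 480x640 camera frame stands in for a crowd image to be recognized.
frame = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
std = standardize(frame)
print(std.shape, float(std.min()), float(std.max()))
```

Only frames failing the preset specification would pass through this step before being fed to the trained crowd flow prediction model.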
S13: outputting a corresponding crowd density map set and a corresponding crowd flow map based on the crowd image set through the trained crowd flow prediction model; wherein the crowd flow diagram is used for describing the crowd flow direction corresponding to the crowd density map set.
In step S13, the crowd density map set includes a plurality of crowd density maps corresponding to the crowd images to be identified, and is used to represent the crowd density distribution of the monitored area at each moment within the target time period. The crowd flow map describes the crowd flow direction corresponding to the crowd density map set, that is, the trend of the crowd flow direction in the monitored area during the target time period.
It should be noted that the trained crowd flow prediction model is constructed on a multilayer convolutional neural network. In practice, a crowd flow prediction model comprising several sampling convolution layers, a max-pooling layer, and a feature fusion layer is built by configuring the hierarchical structure, parameters, feature extraction strategy, and information transfer of the multilayer convolutional neural network. The model is trained with a preconfigured crowd density change sample set; because a head region is marked on each sample image and a time-sequence relation exists between the sample images, the trained model can output the corresponding crowd density map set based on the input crowd image set, that is, a crowd density map for each crowd image to be identified together with a crowd flow map for the target time period.
In the embodiments of the application, the trained crowd flow prediction model outputs the corresponding crowd density map set and crowd flow map based on the crowd image set. Specifically, after the crowd density map set is obtained, the time-sequence-continuous crowd density maps in the set are analyzed by an optical flow method to obtain crowd flow characteristic information. For example, each crowd density map contains a number of target pixel points corresponding to the positions of head features in the images; starting from the initial positions of the target pixel points in the first image of the sequence, the points are tracked in subsequent frames, and crowd flow characteristic information is obtained by comparing the changes of the dynamic target pixel points between consecutive frames. The gradient of each target pixel point is obtained by differential calculation over the consecutive crowd density maps, which then forms the crowd flow map.
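The differential calculation over consecutive density maps can be illustrated with the basic finite-difference gradients that optical flow methods start from. This shows only the gradient step, not a full optical flow solver, and the toy density maps are invented for the example:

```python
import numpy as np

def flow_gradients(prev, curr):
    """Finite-difference gradients used by optical flow: spatial gradients
    of the current frame and the temporal difference between consecutive
    crowd density maps."""
    Iy, Ix = np.gradient(curr)     # spatial gradients (rows = y, cols = x)
    It = curr - prev               # temporal gradient between frames
    return Ix, Iy, It

# A blob of crowd density moving one pixel to the right between frames.
prev = np.zeros((5, 5)); prev[2, 1] = 1.0
curr = np.zeros((5, 5)); curr[2, 2] = 1.0
Ix, Iy, It = flow_gradients(prev, curr)

# Where density appeared (It > 0) versus disappeared (It < 0) indicates the
# crowd flow direction: mass left column 1 and arrived at column 2.
print(np.argwhere(It > 0), np.argwhere(It < 0))
```

A real solver (e.g. Lucas-Kanade or Farneback) would combine these gradients over a neighbourhood to estimate a per-pixel velocity field, which is what the crowd flow map summarizes.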
As a possible implementation of this embodiment, the trained crowd flow prediction model includes an encoding framework and a decoding framework, and step S13 specifically includes:

performing feature extraction on each crowd image to be identified in the crowd image set through the encoding framework, and transmitting the extracted crowd characteristic information to the decoding framework; and generating and outputting, by the decoding framework, a crowd density map set and a crowd flow map based on the crowd characteristic information.

In this embodiment, the crowd characteristic information is the set of feature data output after feature extraction is performed on each crowd image to be identified by the sampling convolution layers in the encoding framework. Each crowd image to be identified in the crowd image set is taken as raw data and input into the trained crowd flow prediction model; the hierarchical structure of the encoding framework performs feature extraction, max pooling, feature correction, and feature fusion on each image, and the extracted crowd characteristic information is transmitted to the decoding framework. The decoding framework comprises two branches: one branch generates and outputs the crowd density map set, and the other processes the time-sequence crowd density map set and generates and outputs the crowd flow map by calculating the crowd flow characteristic information.
In other embodiments, the feature map is a matrix formed by feature values: when feature extraction is performed on each crowd image to be identified through the encoding framework, the convolution layers in the encoding framework produce, for each image, a matrix of feature values, that is, a feature map. As a possible implementation of this embodiment, the encoding framework may include multiple parallel columns of convolution layers, a max-pooling layer, a correction layer, and an output layer; performing feature extraction on each crowd image to be identified through the encoding framework and transmitting the extracted crowd characteristic information to the decoding framework then includes:
performing feature sampling convolution on each crowd image to be identified through the multiple columns of parallel convolution layers to obtain a first feature set of each crowd image to be identified under different receptive fields; performing maximum feature pooling on the first feature set through the maximum pooling layer to obtain a second feature set; performing feature correction on the second feature set through the correction layer, eliminating redundant features and features with small contribution, to finally obtain the crowd characteristic information; and transmitting the crowd characteristic information to the decoding framework through the output layer.
In this embodiment, the multiple columns of parallel convolution layers include multiple parallel feature sampling convolution layers, and the convolution kernels of the feature sampling convolution layers differ in size. Because the convolution kernels of the feature sampling convolution layers differ in size, the receptive fields of the features obtained by performing feature sampling convolution on each crowd image to be identified through the multiple columns of parallel convolution layers also differ. The receptive field refers to the size of the area on the original image to which each pixel point on the feature map output by each layer in the trained crowd flow prediction model is mapped; in this embodiment, the original image is each crowd image to be identified. For feature maps corresponding to different receptive fields, the areas on a single crowd image to be identified to which each pixel point is mapped are different, and the different feature maps form the first feature set under different receptive fields.
Because the feature sampling convolution layers are arranged in multiple parallel columns, the first feature set under different receptive fields is extracted from each crowd image to be identified through the trained crowd flow prediction model, and the extraction order of the different feature maps in the first feature set is the same.
It should be noted that the sizes of the areas on the crowd image to be identified to which each pixel point on the feature maps corresponding to different receptive fields is mapped are also different. The larger the convolution kernel of a feature sampling convolution layer is, the larger the corresponding receptive field is; the feature maps extracted under receptive fields of different levels differ in feature intensity, and the image features in feature maps with larger receptive fields are stronger and more obvious. Maximum feature pooling and feature correlation calculation are performed on the feature maps extracted under different receptive fields through the maximum pooling layer to obtain the second feature set, and feature correction is performed on the second feature set through the correction layer to obtain the target feature map; finally, the target feature map is transmitted to the decoding framework through the output layer.
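The multi-column extraction described above can be sketched in numpy as follows. This is only an illustrative mock-up: the fixed averaging kernels stand in for learned convolution weights, and the 3/5/7 kernel sizes are assumed rather than taken from the patent.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D 'valid' convolution, enough to illustrate receptive fields."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def multi_column_features(img, kernel_sizes=(3, 5, 7)):
    """One column per kernel size; a larger kernel means a larger receptive
    field. Averaging kernels stand in for learned convolution weights."""
    return [conv2d_valid(img, np.full((k, k), 1.0 / (k * k)))
            for k in kernel_sizes]

def max_pool2(fmap):
    """2x2 maximum pooling with stride 2 (trailing row/column dropped)."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    return fmap[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.random.rand(32, 32)                      # one crowd image, toy size
first_feature_set = multi_column_features(img)    # features per receptive field
second_feature_set = [max_pool2(f) for f in first_feature_set]
```

Each column shrinks the map by its kernel size, so the three feature maps have different spatial extents, mirroring how pixels in larger-kernel columns cover larger areas of the original image.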
Because time-series continuity exists between the crowd images to be identified in the crowd image set corresponding to the target time period, after each crowd image to be identified undergoes processing such as feature sampling convolution, the obtained crowd characteristic information can be used for representing the crowd density information in each crowd image to be identified. Operations such as correlation calculation are performed through the decoding framework according to the crowd characteristic information, and the crowd flow map that can be used for describing the crowd flow direction corresponding to the crowd density map set is then output.
As a possible implementation manner of this embodiment, the decoding framework includes a first decoding framework and a second decoding framework;
the outputting, by the decoding framework, a set of crowd density maps and a crowd flow map based on the crowd characteristic information, comprising:
generating and outputting, by the first decoding framework, a set of crowd density maps based on the crowd characteristic information; the crowd density map set comprises a plurality of crowd density maps with continuous time sequences, and the crowd density maps with continuous time sequences in the crowd density map set correspond to the crowd images to be identified with continuous time sequences in the crowd image set one by one; and calculating the crowd flow characteristic information in the crowd density maps with continuous time sequences through the second decoding frame to generate and output a crowd flow map.
Specifically, the second decoding frame may calculate the crowd flow feature information in the crowd density maps with consecutive time sequences by using an optical flow method to generate and output the crowd flow map. The optical flow method is a method for calculating motion information of an object between adjacent frames by finding a correspondence between a previous frame and a current frame using a change of a pixel in an image sequence in a time domain and a correlation between adjacent frames.
In this embodiment, since the crowd density map set includes a plurality of crowd density maps with consecutive time sequences, and the crowd density maps with consecutive time sequences in the crowd density map set correspond to the crowd images to be identified with consecutive time sequences in the crowd image set one by one, the second decoding frame calculates the crowd flow characteristic information in the crowd density maps with consecutive time sequences based on the optical flow method, and the crowd flow map can be generated.
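As an illustration of deriving flow from two consecutive density maps, the sketch below uses the per-pixel normal-flow approximation of the optical flow constraint I_x·u + I_y·v + I_t = 0. The patent does not specify which optical flow variant the second decoding framework uses, so this is only one possible, deliberately minimal choice.

```python
import numpy as np

def density_flow(d_prev, d_next, eps=1e-8):
    """Per-pixel normal flow from the optical flow constraint
    I_x*u + I_y*v + I_t = 0 between two consecutive density maps."""
    ix = np.gradient(d_prev, axis=1)   # spatial gradients of the earlier map
    iy = np.gradient(d_prev, axis=0)
    it = d_next - d_prev               # temporal difference between frames
    mag2 = ix ** 2 + iy ** 2 + eps     # eps keeps flat regions well-defined
    return -it * ix / mag2, -it * iy / mag2

# a density blob that shifts one pixel to the right between the two frames
d0 = np.zeros((16, 16)); d0[7:10, 5:8] = 1.0
d1 = np.zeros((16, 16)); d1[7:10, 6:9] = 1.0
u, v = density_flow(d0, d1)   # u points rightward on the blob's edges
```

For the rightward-moving blob the horizontal component u comes out positive along the moving edges while the vertical component v cancels, which is the qualitative behavior the crowd flow map is meant to capture.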
It can be understood that, in practical application, the crowd flow prediction model in this embodiment may also be divided into a crowd density estimation model and a crowd flow prediction model. The crowd density estimation model performs operations such as feature extraction on the crowd image set and elimination of redundant features and features with small contribution to obtain the crowd density map set, and the crowd density map set is then input into the crowd flow prediction model. Because the crowd image set includes a plurality of crowd images to be identified with continuous time sequences, the crowd density estimation model performs operations such as convolution, pooling and correction on the crowd images to be identified to obtain the crowd density map set in which the crowd density maps correspond to the crowd images to be identified one by one. The crowd flow prediction model differentiates adjacent crowd density maps to obtain the gradients of target pixel points, thereby forming the corresponding crowd flow map; the target pixel points correspond to the pixel positions of the human head features in the image.
S14: and selecting a target crowd density map from the crowd density map set and clustering the crowd flow map to obtain a plurality of crowd density clusters, and determining the target crowd density cluster from the crowd density clusters.
In step S14, the crowd density maps in the crowd density map set correspond to the crowd images to be identified in the crowd image set one by one, so that adjacent crowd density maps in the crowd density map set also have a time-series continuous relationship. The target crowd density map is the crowd density map with the earliest time sequence in the crowd density map set, namely the crowd density map corresponding to the crowd image to be identified at the starting moment of the target time period.
It should be noted that, when clustering the target crowd density map and the crowd flow map, the target crowd density map provides crowd density reference features, the crowd flow map provides crowd density reference features and crowd flow direction features, and clustering is performed according to the crowd density reference features in the two maps. That is, the pixel point sets of the target crowd density map and the crowd flow map are classified into different clusters by grouping similar points, so as to obtain a plurality of crowd density clusters, wherein the objects in each crowd density cluster are similar to each other but different from the objects in the other crowd density clusters.
It can be understood that, in practical application, when clustering processing is performed on the target crowd density map and the crowd flow map, pixel point classification standards can be configured in advance, that is, starting from a set of pixel points of the target crowd density map and the crowd flow map, classification can be performed automatically to obtain a plurality of crowd density clusters. Since many schemes for clustering pixels in an image have been disclosed in the prior art, the clustering method or the clustering strategy is not repeated here.
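Since the concrete clustering method is deliberately left open here, one possible pre-configured classification standard is a simple threshold-and-flood-fill grouping over the density map; the threshold value and the 4-connectivity below are assumptions for illustration only.

```python
import numpy as np
from collections import deque

def density_clusters(density, threshold=0.5):
    """Group above-threshold pixels into 4-connected clusters via flood fill;
    a stand-in for the patent's unspecified pixel-set clustering."""
    h, w = density.shape
    labels = -np.ones((h, w), dtype=int)
    clusters = []
    for sy in range(h):
        for sx in range(w):
            if density[sy, sx] < threshold or labels[sy, sx] >= 0:
                continue
            members, q = [], deque([(sy, sx)])
            labels[sy, sx] = len(clusters)
            while q:
                y, x = q.popleft()
                members.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0
                            and density[ny, nx] >= threshold):
                        labels[ny, nx] = len(clusters)
                        q.append((ny, nx))
            clusters.append(members)
    return clusters

d = np.zeros((10, 10))
d[1:3, 1:3] = 1.0   # dense region of 4 pixels
d[6:9, 6:9] = 1.0   # dense region of 9 pixels
clusters = density_clusters(d)
```

Each returned member list is one crowd density cluster: pixels within a cluster are mutually connected dense points, while pixels in different clusters are spatially separated, matching the similar-within/different-between property described above.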
As a possible implementation manner of this embodiment, the target time period includes a start time point and an end time point; step S14 specifically includes:
selecting the crowd density map corresponding to the starting time point from the crowd density map set as the target crowd density map; clustering the target crowd density map and the crowd flow map according to preset neighborhood parameters to obtain Y crowd density clusters, wherein Y is an integer greater than 0; and determining a weight value of each crowd density cluster in the Y crowd density clusters through the following formula to obtain a weight value set:
S(c_i) = n_i / (v_i + ξ)

wherein S(c_i) is the weight value of the ith crowd density cluster in the Y crowd density clusters, n_i is the number of pixel points in the ith crowd density cluster, v_i is the moving speed of the ith crowd density cluster, and ξ is a preset compensation parameter with ξ ≠ 0. A maximum weight value is determined from the weight value set, and the crowd density cluster corresponding to the maximum weight value is identified as the target crowd density cluster, wherein the target crowd density cluster is used for representing the position in the monitored area with the maximum crowd density at the termination time point.
In this embodiment, the weight value of the crowd density cluster is used to describe the crowd density degree of the crowd density cluster, that is, the larger the weight value of the crowd density cluster is, the higher the crowd density is, and the smaller the weight value of the crowd density cluster is, the lower the crowd density is.
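Under the assumption that the weight combines the pixel count n_i and the moving speed v_i as S(c_i) = n_i / (v_i + ξ) — the exact formula in the source is an image placeholder and cannot be verified, so this form is inferred from the surrounding definitions — selecting the target cluster reduces to an argmax over the weight value set:

```python
def cluster_weight(n_pixels, speed, xi=1.0):
    """Assumed weight S(c_i) = n_i / (v_i + xi): more pixels raise the weight,
    a higher moving speed lowers it, and xi != 0 keeps the quotient defined
    for stationary clusters."""
    return n_pixels / (speed + xi)

def target_cluster(clusters, xi=1.0):
    """clusters: list of (pixel_count, moving_speed) pairs.
    Returns the index of the maximum-weight (target) crowd density cluster."""
    weights = [cluster_weight(n, v, xi) for n, v in clusters]
    return max(range(len(weights)), key=weights.__getitem__)

# three hypothetical candidate clusters as (pixel count, moving speed)
idx = target_cluster([(120, 3.0), (300, 1.0), (80, 0.0)])
```

In this toy example the large, slowly moving middle cluster wins, consistent with the weight describing where the crowd will be densest at the termination time point.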
S15: and planning a route by taking the position of the target crowd density cluster as a terminal point to obtain a driving route.
In step S15, the position of the target crowd density cluster refers to the position in the monitored area represented by the pixel point set corresponding to the target crowd density cluster. The driving route is the route along which the robot drives in the monitored area to the position of the target crowd density cluster.
As a possible implementation manner of this embodiment, step S15 specifically includes:
acquiring a target position of a target crowd density cluster in the monitored area; carrying out route planning according to the current position and the target position to obtain a driving route; wherein the travel route is used to describe a travel path from the current location to the target location in the monitored area.
In this embodiment, the current position is a current position of the robot, and may include a real-time position of the robot or an initial position of the robot. The real-time position of the robot and the initial position of the robot are both in the monitored area.
In all embodiments of the present application, the route planning is to plan a driving route satisfying the driving conditions of the robot in the monitored area.
For example, if the robot's driving conditions limit the travel route to sidewalks, the driving route obtained by performing route planning with the position of the target crowd density cluster as the destination is a sidewalk route through the monitored area.
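A route satisfying such a driving condition can be found with any grid planner; the patent does not name a specific planning algorithm, so the sketch below uses A* over a 0/1 sidewalk mask as one illustrative choice.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 0/1 grid (1 = drivable, e.g. sidewalk), 4-connected moves,
    Manhattan-distance heuristic. Returns the cell path or None."""
    heur = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heur(start), 0, start, None)]   # (f, g, cell, parent)
    came, done = {}, set()
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in done:
            continue
        done.add(cur)
        came[cur] = parent
        if cur == goal:                          # rebuild path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        y, x = cur
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 1 and (ny, nx) not in done):
                nxt = (ny, nx)
                heapq.heappush(frontier, (g + 1 + heur(nxt), g + 1, nxt, cur))
    return None

sidewalk = [[1, 1, 1, 0],
            [0, 0, 1, 0],
            [1, 1, 1, 1]]
route = a_star(sidewalk, (0, 0), (2, 3))   # current position -> target cluster
```

Blocked cells (value 0) are never expanded, so the returned path stays on the sidewalk mask from the current position to the target cluster's cell.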
As can be seen from the above, in the route planning method provided in this embodiment, the acquired crowd image set of the monitored area in the target time period is input into the trained crowd flow prediction model, and crowd flow prediction is performed through the trained crowd flow prediction model to obtain the crowd density map set and the crowd flow map. Because the target crowd density map in the crowd density map set can reflect the crowd density situation at a certain moment in the target time period, and the crowd flow map can reflect the crowd flow direction and speed in the target time period, the target crowd density map and the crowd flow map are clustered, and the obtained target crowd density cluster can reflect the crowd density distribution of the monitored area after the target time period. Route planning is then performed with the position of the target crowd density cluster as the destination to obtain the driving route, so that the robot can reach the area with the maximum pedestrian flow density. The phenomenon that the driving path of the robot avoids crowds or wanders in an unmanned area is avoided, and the application range of the robot driving path planning scheme is widened.
Referring to fig. 2, fig. 2 is a flowchart illustrating a route planning method according to another embodiment of the present application. With respect to the embodiment corresponding to fig. 1, the route planning method provided by this embodiment further includes steps S21 to S22 before step S12, and further includes step S23 after step S15. The details are as follows:
S21: Acquiring a plurality of original crowd image samples in a preset time period, and performing a feature marking operation on each original crowd image sample to obtain a crowd image sample set with marks.
S22: and training a crowd flow prediction model by using the crowd image sample set with the marks to obtain the trained crowd flow prediction model.
In this embodiment, the original crowd image sample includes a plurality of portrait head regions, and the feature marking operation is performed on each original crowd image sample, namely, similar points with feature properties in the original crowd image sample are marked.
For example, the coordinates of pixel points corresponding to the head area of the portrait in the original crowd image sample are marked, and then a crowd image sample set with the marks is obtained.
It should be noted that, dotting and marking are performed on each portrait head area in each original crowd image sample in the crowd image sample set, and the coordinates of each dotting position are recorded; establishing a binary matrix with the same image size as the original crowd image sample, and enabling elements in the matrix to correspond to pixel points in the original crowd image sample one by one, and initializing all the elements to be 0; updating the numerical value of the element at the corresponding position in the matrix to be 1 according to the coordinate of each dotting position in the original crowd image sample; and performing Gaussian blur processing by taking each dotting position as a center.
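The dotting-matrix-blur procedure above can be sketched as follows. The kernel size and sigma are illustrative, and for simplicity the normalized Gaussian is stamped directly at each head position (heads are assumed to lie away from the image border).

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel (sums to 1)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def density_ground_truth(shape, head_coords, ksize=5, sigma=1.0):
    """Dotting matrix -> blurred density map. Stamping a normalized kernel at
    each head keeps the map's sum equal to the number of annotated heads."""
    den = np.zeros(shape)
    k = gaussian_kernel(ksize, sigma)
    r = ksize // 2
    for y, x in head_coords:
        den[y - r:y + r + 1, x - r:x + r + 1] += k   # head assumed off-border
    return den

gt = density_ground_truth((20, 20), [(5, 5), (12, 14)])   # two annotated heads
```

As the text notes, blurring turns each dotted 1 into values smaller than 1 while the map still integrates to the head count, which is what makes the matrix usable as a density label.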
In this embodiment, a pulse function δ(x - x_i) is set to represent the head position of a portrait in the image; if N persons exist, then N impulse functions exist, wherein x represents a pixel point in the image and x_i represents the pixel position of a portrait head in the image. To convert this label function into a continuous density function, it is processed with a Gaussian convolution to obtain:

I(x) = Σ_{i=1}^{N} δ(x - x_i) * G_{σ_i}(x)

wherein the pulse function δ(x - x_i) represents the position of the head of the portrait in the image, G_{σ_i}(x) indicates a Gaussian kernel with parameter σ_i, and I(x) represents the pulse function δ(x - x_i) at position x after Gaussian blur processing based on the Gaussian kernel σ_i. Because this formula cannot express portrait head features of different sizes, a Gaussian kernel constraint condition is added on this basis:

σ_i = α · d̄_i, where d̄_i = (1/m) Σ_{j=1}^{m} d_i^j

wherein d_i^j is the distance from the head position x_i to the jth of its m nearest portrait heads, d̄_i represents the average distance between the heads, and α is a preset parameter. Through Gaussian blur processing, the value of each dotting position in the matrix is changed from 1 to a value smaller than 1, so that the whole matrix is smoother and more suitable for practical application.
In this embodiment, in order to make the crowd flow prediction model converge better when it is trained with the marked crowd image sample set, a corresponding loss function is configured:

L(δ) = (1 / 2M) Σ_{i=1}^{M} ||I(x_i; δ) - I_i||²

where δ is the model parameter to be optimized, M is the number of original crowd image samples, x_i is an input image, I(x_i; δ) is the crowd density map generated by the crowd flow prediction model, and I_i is the crowd density map corresponding to x_i.
It can be understood that the loss function is used for a function for supervising the machine learning process, namely for representing the difference between the model output and the standard answer, and the convergence speed of the model can be accelerated by configuring the loss function.
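Assuming the loss is the usual mean squared error over the M generated and ground-truth density maps — consistent with the definitions given above, though the formula itself is an image placeholder in the source — it can be computed as:

```python
import numpy as np

def density_loss(pred_maps, gt_maps):
    """L(delta) = 1/(2M) * sum_i ||I(x_i; delta) - I_i||^2 over M samples."""
    m = len(pred_maps)
    return sum(np.sum((p - g) ** 2)
               for p, g in zip(pred_maps, gt_maps)) / (2.0 * m)

pred = [np.ones((4, 4)), np.zeros((4, 4))]   # model outputs for M = 2 samples
gt = [np.zeros((4, 4)), np.zeros((4, 4))]    # ground-truth density maps
loss = density_loss(pred, gt)                # 16 / (2 * 2) = 4.0
```

The squared per-pixel differences give the gradient signal that drives the model output toward the labeled density maps, which is what "accelerating convergence" refers to here.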
In this embodiment, the steps S21 to S22 and the step S11 are performed in a non-sequential order, and when a crowd flow prediction model is trained by performing a feature marking operation on each original crowd image sample and using a set of crowd image samples with marks obtained after the marking operation, and further a trained crowd flow prediction model is obtained, the steps S12 to S15 are performed.
In the present embodiment, after step S15, step S23 is further included, specifically:
S23: And driving from the current position to the position of the target crowd density cluster according to a preset avoidance strategy and the driving route.
In step S23, the preset avoidance strategy is a method strategy for controlling the robot to avoid obstacles during driving from the current position to the position of the target crowd density cluster according to the driving route.
In this embodiment, the current position is a position where the robot is located, and may be a preset starting position of the robot, or may be any position in the monitored area.
In practical applications, the main execution body of the route planning method is a robot that has an avoidance function and a driving function. After the driving route is obtained, a moving assembly arranged on the robot moves according to the driving route, and meanwhile an obstacle avoidance device on the robot can avoid obstacles or pedestrians during driving.
As a first usage scenario of the route planning method, take the robot providing service for pedestrians in a shopping mall as an example: a certain area of the mall is the monitored area, and when the robot provides service for the pedestrians in the mall, a monitoring camera in the mall acquires crowd images of the monitored area in real time and transmits them to the robot in real time. After the robot summarizes the crowd images within a period of time, crowd density recognition and pedestrian flow density recognition are performed, the area with the largest pedestrian flow density in the monitored area is then determined according to the recognition results, a corresponding driving route is generated by combining the current position of the robot, and finally the robot drives from the current position to the area with the largest pedestrian flow density according to the preset avoidance strategy and the driving route.
Since the driving destination of the robot is the area with the highest pedestrian flow density, in other embodiments, when the robot has driven to the position of the target crowd density cluster, a stopping position and a wandering route within a preset driving radius are also planned, so that the robot wanders around the position of the target crowd density cluster until a driving route is determined again.
As can be seen from the above, in the route planning method provided in this embodiment, the acquired crowd image set of the monitored area in the target time period is input into the trained crowd flow prediction model, and crowd flow prediction is performed through the trained crowd flow prediction model to obtain the crowd density map set and the crowd flow map. Because the target crowd density map in the crowd density map set can reflect the crowd density situation at a certain moment in the target time period, and the crowd flow map can reflect the crowd flow direction and speed in the target time period, the target crowd density map and the crowd flow map are clustered, and the obtained target crowd density cluster can reflect the crowd density distribution of the monitored area after the target time period. Route planning is then performed with the position of the target crowd density cluster as the destination to obtain the driving route, so that the robot can reach the area with the maximum pedestrian flow density. The phenomenon that the driving path of the robot avoids crowds or wanders in an unmanned area is avoided, and the application range of the robot driving path planning scheme is widened.
In addition, the robot starts to drive from the current position to the position where the target crowd density cluster is located according to a preset avoidance strategy and a driving route, so that the robot can serve as a pedestrian server at the position where the crowd flow density is maximum, and the service efficiency of the robot is improved.
Referring to fig. 3, fig. 3 is a block diagram of a route planning device according to an embodiment of the present disclosure. The route planning device in this embodiment includes units for performing the steps in the embodiments corresponding to fig. 1 to 2. Please refer to fig. 1 to 2 and fig. 1 to 2 for the corresponding embodiments. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 3, the route planning device 30 includes: an image acquisition unit 31, an input unit 32, a first execution unit 33, a clustering unit 34, and a second execution unit 35. Wherein:
the image acquisition unit 31 is used for acquiring a crowd image set of the monitored area in a target time period; the crowd image set comprises a plurality of crowd images to be identified with continuous time sequences;
an input unit 32, configured to input each to-be-identified crowd image in the crowd image set into a trained crowd flow prediction model;
a first executing unit 33, configured to output, through the trained people flow prediction model, a corresponding people density map set and people flow map based on the people image set; wherein the crowd flow graph is used for describing the crowd flow direction corresponding to the crowd density map set;
a clustering unit 34, configured to select a target crowd density map from the crowd density map set and perform clustering on the crowd flow map to obtain a plurality of crowd density clusters, and determine a target crowd density cluster from the crowd density clusters;
and the second executing unit 35 is configured to perform route planning by using the position where the target crowd density cluster is located as a destination to obtain a driving route.
As an embodiment of the present application, the trained crowd flow prediction model includes: an encoding framework and a decoding framework.
The first executing unit 33 is specifically configured to perform feature extraction on each crowd image to be identified in the crowd image set through the encoding framework, and transmit the extracted crowd characteristic information to the decoding framework; and generate and output, through the decoding framework, a crowd density map set and a crowd flow map based on the crowd characteristic information.
According to an embodiment of the present application, the encoding framework includes multiple columns of parallel convolution layers, a maximum pooling layer, a correction layer, and an output layer.

The first execution unit 33 is further specifically configured to perform feature extraction on each crowd image to be identified through the multiple columns of parallel convolution layers to obtain a first feature set of each crowd image to be identified under different receptive fields; perform maximum feature pooling on the first feature set through the maximum pooling layer to obtain a second feature set; perform feature correction on the second feature set through the correction layer to obtain the crowd characteristic information; and transmit the crowd characteristic information to the decoding framework through the output layer.
As an embodiment of the present application, the first executing unit 33 is specifically configured to generate and output a crowd density map set based on the crowd characteristic information through the first decoding framework; the crowd density map set comprises a plurality of crowd density maps with continuous time sequences, and the crowd density maps with continuous time sequences in the crowd density map set correspond to the crowd images to be identified with continuous time sequences in the crowd image set one by one; and calculating the crowd flow characteristic information in the crowd density maps with continuous time sequences through the second decoding frame to generate and output a crowd flow map.
In an embodiment of the present invention, the first executing unit 33 is specifically configured to calculate the crowd flow feature information in the multiple time-series consecutive crowd density maps by using an optical flow method through the second decoding framework, so as to generate the crowd flow map.
As an embodiment of the present application, the target time period includes a start time point and an end time point;
the clustering unit 34 is specifically configured to select the crowd density map corresponding to the starting time point from the crowd density map set as the target crowd density map; cluster the target crowd density map and the crowd flow map according to preset neighborhood parameters to obtain Y crowd density clusters, wherein Y is an integer greater than 0; and determine a weight value of each crowd density cluster in the Y crowd density clusters through the following formula to obtain a weight value set:

S(c_i) = n_i / (v_i + ξ)

wherein S(c_i) is the weight value of the ith crowd density cluster in the Y crowd density clusters, n_i is the number of pixel points in the ith crowd density cluster, v_i is the moving speed of the ith crowd density cluster, and ξ is a preset compensation parameter with ξ ≠ 0. A maximum weight value is determined from the weight value set, and the crowd density cluster corresponding to the maximum weight value is identified as the target crowd density cluster, wherein the target crowd density cluster is used for representing the position in the monitored area with the maximum crowd density at the termination time point.
As an embodiment of the present application, the second executing unit 35 is specifically configured to obtain a target position of a target crowd density cluster in the monitored area; carrying out route planning according to the current position and the target position to obtain a driving route; wherein the travel route is used to describe a travel path from the current location to the target location in the monitored area.
As an embodiment of the present application, the route planning apparatus 30 further includes: a sample set acquisition unit 36 and a training unit 37. Specifically, the method comprises the following steps:
the sample set acquiring unit 36 is configured to acquire a plurality of original crowd image samples within a preset time period, and perform a feature marking operation on each original crowd image sample to obtain a crowd image sample set with a mark.
And the training unit 37 is configured to train a crowd flow prediction model by using the marked crowd image sample set to obtain a trained crowd flow prediction model.
As an embodiment of the present application, the route planning apparatus 30 further includes: a third execution unit 38.
And the third executing unit 38 is configured to start from the current position to the position where the target crowd density cluster is located according to a preset avoidance strategy and the driving route.
As can be seen from the above, in this embodiment, the acquired crowd image set of the monitored area in the target time period is input into the trained crowd flow prediction model, and crowd flow prediction is performed through the trained crowd flow prediction model to obtain the crowd density map set and the crowd flow map. Because the target crowd density map in the crowd density map set can reflect the crowd density situation at a certain moment in the target time period, and the crowd flow map can reflect the crowd flow direction and speed in the target time period, the target crowd density map and the crowd flow map are clustered, and the obtained target crowd density cluster can reflect the crowd density distribution of the monitored area after the target time period. Route planning is then performed with the position of the target crowd density cluster as the destination to obtain the driving route, so that the robot can reach the area with the maximum pedestrian flow density. The phenomenon that the driving path of the robot avoids crowds or wanders in an unmanned area is avoided, and the application range of the robot driving path planning scheme is widened.
In addition, the robot starts to drive from the current position to the position where the target crowd density cluster is located according to a preset avoidance strategy and a driving route, so that the robot can serve as a pedestrian server at the position where the crowd flow density is maximum, and the service efficiency of the robot is improved.
Fig. 4 is a block diagram of a robot according to another embodiment of the present disclosure. As shown in fig. 4, the robot 4 of this embodiment includes: a processor 40, a memory 41, and a computer program 42, such as a program of a route planning method, stored in the memory 41 and executable on the processor 40. The processor 40, when executing the computer program 42, implements the steps in the above embodiments of the route planning method, such as S11 to S15 shown in fig. 1. Alternatively, when the processor 40 executes the computer program 42, the functions of the units in the embodiment corresponding to fig. 3, for example, the functions of the units 31 to 38 shown in fig. 3, are implemented; reference is specifically made to the relevant description in the embodiment corresponding to fig. 3, which is not repeated here.
Illustratively, the computer program 42 may be divided into one or more units, which are stored in the memory 41 and executed by the processor 40 to accomplish the present application. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 42 in the robot 4. For example, the computer program 42 may be divided into an image acquisition unit, an input unit, a first execution unit, a clustering unit, and a second execution unit, each unit having the specific functions as described above.
The robot may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of the robot 4 and does not limit it; the robot 4 may include more or fewer components than shown, combine certain components, or use different components. For example, the robot may also include input/output devices, network access devices, buses, and the like.
The processor 40 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the robot 4, such as a hard disk or memory of the robot 4. The memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the robot 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the robot 4. The memory 41 is used to store the computer program and the other programs and data required by the robot, and may also be used to temporarily store data that has been output or is to be output.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of route planning, comprising:
acquiring a crowd image set of a monitored area in a target time period; the crowd image set comprises a plurality of crowd images to be identified with continuous time sequences;
inputting each crowd image to be identified in the crowd image set into a trained crowd flow prediction model;
outputting a corresponding crowd density map set and a corresponding crowd flow map based on the crowd image set through the trained crowd flow prediction model; wherein the crowd flow graph is used for describing the crowd flow direction corresponding to the crowd density map set;
selecting a target crowd density map from the crowd density map set, clustering the target crowd density map and the crowd flow map to obtain a plurality of crowd density clusters, and determining a target crowd density cluster from the plurality of crowd density clusters;
and planning a route by taking the position of the target crowd density cluster as a terminal point to obtain a driving route.
2. The route planning method according to claim 1, wherein the trained crowd flow prediction model comprises: an encoding framework and a decoding framework;
the outputting of a corresponding crowd density map set and a corresponding crowd flow map based on the crowd image set through the trained crowd flow prediction model comprises:
extracting features of each crowd image to be identified in the crowd image set through the encoding framework, and transmitting the extracted crowd characteristic information to the decoding framework;
and generating and outputting, by the decoding framework, a crowd density map set and a crowd flow map based on the crowd characteristic information.
3. The route planning method according to claim 2, wherein the decoding framework comprises a first decoding framework and a second decoding framework;
the generating and outputting, by the decoding framework, of a crowd density map set and a crowd flow map based on the crowd characteristic information comprises:
generating and outputting, by the first decoding framework, a crowd density map set based on the crowd characteristic information; wherein the crowd density map set comprises a plurality of crowd density maps with continuous time sequences, and the time-sequence-continuous crowd density maps in the crowd density map set correspond one-to-one with the time-sequence-continuous crowd images to be identified in the crowd image set;
and calculating crowd flow characteristic information from the plurality of crowd density maps with continuous time sequences through the second decoding framework to generate and output a crowd flow map.
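The encoder/two-decoder split of claims 2 and 3 can be caricatured with plain arrays. This is a toy sketch under the assumption that the second decoding branch derives flow from consecutive density maps; all three functions are illustrative stand-ins, not the patent's actual networks.

```python
import numpy as np

def encode(images):
    """Toy 'encoding framework': per-frame crowd feature maps
    (here simply the frames themselves)."""
    return np.stack(images).astype(float)

def decode_density(features):
    """First decoding branch: one density map per frame, time-ordered,
    in one-to-one correspondence with the input images."""
    return features  # identity mapping in this sketch

def decode_flow(density_maps):
    """Second decoding branch: flow computed from consecutive density
    maps (mean temporal difference as a direction/speed proxy)."""
    return np.diff(density_maps, axis=0).mean(axis=0)

# Density that rises uniformly by 1 per frame on a 2x2 grid.
frames = [np.full((2, 2), t, dtype=float) for t in range(4)]
density_maps = decode_density(encode(frames))
flow_map = decode_flow(density_maps)
```

Note the structural point the claims make: the density branch preserves the time order of the inputs, while the flow branch only exists relative to that ordered sequence.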
4. The route planning method according to claim 1, wherein the target time period includes a start time point and an end time point;
the selecting a target crowd density map from the crowd density map set, clustering the target crowd density map and the crowd flow map to obtain a plurality of crowd density clusters, and determining a target crowd density cluster from the crowd density clusters, comprises:
selecting a crowd density graph corresponding to the starting time point from the crowd density graph set as a target crowd density graph;
clustering the target crowd density map and the crowd flow map according to preset neighborhood parameters to obtain Y crowd density clusters, wherein Y is an integer greater than 0;
determining a weight value of each crowd density cluster in the Y crowd density clusters through the following formula to obtain a weight value set:

S(c_i) = D_i / (v_i + ξ)

wherein S(c_i) is the weight value of the i-th crowd density cluster among the Y crowd density clusters, D_i is the number of pixel points in the i-th crowd density cluster, v_i is the moving speed of the i-th crowd density cluster, and ξ is a preset compensation parameter with ξ ≠ 0;
determining a maximum weight value from the weight value set, and identifying a crowd density cluster corresponding to the maximum weight value as a target crowd density cluster; wherein the target crowd density cluster is used for representing the position of the monitored area with the largest crowd density at the termination time point.
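One plausible reading of the weighting in claim 4 (a cluster's pixel count favored, its moving speed penalized, with the nonzero compensation term ξ guarding the stationary case) can be exercised numerically. The cluster names and figures below are hypothetical.

```python
def cluster_weight(pixel_count, speed, xi=0.1):
    """Weight of one crowd density cluster: more pixels (a denser crowd)
    raise the weight, faster movement lowers it; xi != 0 keeps the
    denominator nonzero for a stationary cluster."""
    assert xi != 0
    return pixel_count / (speed + xi)

# (pixel count, moving speed) for three hypothetical clusters.
clusters = {
    "entrance": (120, 1.5),
    "atrium":   (200, 0.2),
    "corridor": (40,  0.0),   # stationary crowd; xi avoids division by zero
}
weights = {name: cluster_weight(n, v) for name, (n, v) in clusters.items()}
target = max(weights, key=weights.get)  # cluster with the maximum weight value
```

Here the large, nearly stationary "atrium" cluster wins: it is the location where the crowd is densest and most likely to still be there at the termination time point.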
5. The route planning method according to claim 1, wherein the step of performing route planning by taking the position of the target crowd density cluster as a terminal point to obtain a driving route comprises:
acquiring a target position of a target crowd density cluster in the monitored area;
carrying out route planning according to the current position and the target position to obtain a driving route; wherein the travel route is used to describe a travel path from the current location to the target location in the monitored area.
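The route-planning step itself is left unspecified by the claim. As one concrete illustration only, a breadth-first planner on a small occupancy grid can produce a driving path from the current position to the target position; the grid layout and the BFS planner are hypothetical choices, not the patent's method.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first route on a 4-connected occupancy grid (0 = free,
    1 = blocked); returns the cell sequence from start to goal, or None."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # an obstacle row forces a detour
        [0, 0, 0]]
route = plan_route(grid, (0, 0), (2, 0))
```

The resulting route is a cell-by-cell driving path in the monitored area, which the robot then follows subject to its avoidance strategy (claim 7).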
6. The route planning method according to claim 1, wherein before the step of inputting each image of the crowd to be identified in the set of crowd images into the trained crowd flow prediction model, the method further comprises:
acquiring a plurality of original crowd image samples within a preset time period, and performing a feature labeling operation on each original crowd image sample to obtain a labeled crowd image sample set;
and training a crowd flow prediction model by using the labeled crowd image sample set to obtain the trained crowd flow prediction model.
7. The route planning method according to any one of claims 1 to 6, wherein after the step of obtaining the driving route by performing route planning with the position of the target crowd density cluster as a terminal point, the method further comprises:
and driving from the current position to the position of the target crowd density cluster according to a preset avoidance strategy and the driving route.
8. A route planning apparatus, comprising:
the image acquisition unit is used for acquiring a crowd image set of a monitored area in a target time period; the crowd image set comprises a plurality of crowd images to be identified with continuous time sequences;
the input unit is used for inputting each to-be-identified crowd image in the crowd image set into a trained crowd flow prediction model;
a first execution unit, configured to output, through the trained crowd flow prediction model, a corresponding crowd density map set and a corresponding crowd flow map based on the crowd image set; wherein the crowd flow graph is used for describing the crowd flow direction corresponding to the crowd density map set;
the clustering unit is used for selecting a target crowd density map from the crowd density map set, clustering the target crowd density map and the crowd flow map to obtain a plurality of crowd density clusters, and determining a target crowd density cluster from the crowd density clusters;
and the second execution unit is used for carrying out route planning by taking the position of the target crowd density cluster as a terminal point to obtain a driving route.
9. A robot, characterized in that the robot comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the route planning method according to any one of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the route planning method according to any one of claims 1 to 7.
CN202010096131.XA 2020-02-17 2020-02-17 Route planning method, route planning device, robot and medium Active CN111291690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010096131.XA CN111291690B (en) 2020-02-17 2020-02-17 Route planning method, route planning device, robot and medium


Publications (2)

Publication Number Publication Date
CN111291690A (en) 2020-06-16
CN111291690B (en) 2023-12-05

Family

ID=71024454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010096131.XA Active CN111291690B (en) 2020-02-17 2020-02-17 Route planning method, route planning device, robot and medium

Country Status (1)

Country Link
CN (1) CN111291690B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184523A (en) * 2020-09-27 2021-01-05 中南大学 Subway carriage passenger guidance method and system based on environmental monitoring and illumination guidance
CN112200956A (en) * 2020-09-27 2021-01-08 北京百度网讯科技有限公司 Access control method, system, electronic device and storage medium
CN114295389A (en) * 2021-11-30 2022-04-08 合众新能源汽车有限公司 Method and device for testing adaptability of pure electric vehicle in different areas
CN114323024A (en) * 2021-12-31 2022-04-12 北京泰豪智能工程有限公司 Indoor navigation method and system based on Building Information Model (BIM)
CN115824313A (en) * 2023-02-13 2023-03-21 北京昆仑海岸科技股份有限公司 Integrated multi-parameter monitoring control method and system for grain condition monitoring
WO2023193424A1 (en) * 2022-04-07 2023-10-12 哈尔滨工业大学(深圳) Global navigation method for mobile robot in man-machine coexistence environment following pedestrian norm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160133025A1 (en) * 2014-11-12 2016-05-12 Ricoh Company, Ltd. Method for detecting crowd density, and method and apparatus for detecting interest degree of crowd in target position
CN109446989A (en) * 2018-10-29 2019-03-08 上海七牛信息技术有限公司 Crowd massing detection method, device and storage medium
US20190325231A1 (en) * 2018-07-02 2019-10-24 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus, device, and storage medium for predicting the number of people of dense crowd
CN110502988A (en) * 2019-07-15 2019-11-26 武汉大学 Group positioning and anomaly detection method in video



Also Published As

Publication number Publication date
CN111291690B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN111291690B (en) Route planning method, route planning device, robot and medium
CN110414432B (en) Training method of object recognition model, object recognition method and corresponding device
Li et al. Deep neural network for structural prediction and lane detection in traffic scene
Hu et al. Data-driven estimation of driver attention using calibration-free eye gaze and scene features
WO2018192570A1 (en) Time domain motion detection method and system, electronic device and computer storage medium
CN111091708A (en) Vehicle track prediction method and device
CN112734808B (en) Trajectory prediction method for vulnerable road users in vehicle driving environment
US11074438B2 (en) Disentangling human dynamics for pedestrian locomotion forecasting with noisy supervision
US11789466B2 (en) Event camera based navigation control
EP3989106B1 (en) Unsupervised training of a video feature extractor
CN111052128A (en) Descriptor learning method for detecting and locating objects in video
CN112070071B (en) Method and device for labeling objects in video, computer equipment and storage medium
CN115690153A (en) Intelligent agent track prediction method and system
US20230095533A1 (en) Enriched and discriminative convolutional neural network features for pedestrian re-identification and trajectory modeling
Gosala et al. Skyeye: Self-supervised bird's-eye-view semantic mapping using monocular frontal view images
Nawaratne et al. A generative latent space approach for real-time road surveillance in smart cities
CN115861383A (en) Pedestrian trajectory prediction device and method based on multi-information fusion in crowded space
Yang et al. PTPGC: Pedestrian trajectory prediction by graph attention network with ConvLSTM
CN113936175A (en) Method and system for identifying events in video
Kadim et al. Deep-learning based single object tracker for night surveillance.
Seidel et al. NAPC: A neural algorithm for automated passenger counting in public transport on a privacy-friendly dataset
CN116432736A (en) Neural network model optimization method and device and computing equipment
Zhang et al. ForceFormer: exploring social force and transformer for pedestrian trajectory prediction
Basalamah et al. Deep learning framework for congestion detection at public places via learning from synthetic data
KR102529876B1 (en) A Self-Supervised Sampler for Efficient Action Recognition, and Surveillance Systems with Sampler

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant