CN115713787A - Pedestrian detection method, computer equipment and storage medium - Google Patents

Pedestrian detection method, computer equipment and storage medium

Info

Publication number
CN115713787A
Authority
CN
China
Prior art keywords
data
preset
seed
point cloud
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310031146.1A
Other languages
Chinese (zh)
Other versions
CN115713787B (en)
Inventor
谢东恒
邵肖伟
彭科勋
曾文彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hongyida Technology Co ltd
Original Assignee
Shenzhen Hongyida Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hongyida Technology Co ltd
Priority to CN202310031146.1A
Publication of CN115713787A
Application granted
Publication of CN115713787B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention belongs to the technical field of identification and detection, and particularly relates to a pedestrian detection method, computer equipment and a storage medium. A pedestrian detection method comprises the steps of: scanning the environment and pedestrians with a laser radar to obtain three-dimensional point cloud data; preprocessing the three-dimensional point cloud data to obtain three-dimensional point cloud preprocessed data; performing clustering detection on the three-dimensional point cloud preprocessed data with a mean shift algorithm to obtain a clustering result; and analyzing the clustering result to obtain a detection result. The method preprocesses the three-dimensional point cloud data acquired by the laser radar, performs clustering detection on the preprocessed data with mean shift clustering, analyzes the clustering result, and judges whether an object meets the conditions for a pedestrian, thereby detecting pedestrians. Because the method does not require multi-layer computation over big data, it also improves the efficiency of pedestrian recognition.

Description

Pedestrian detection method, computer equipment and storage medium
Technical Field
The invention belongs to the technical field of identification and detection, and particularly relates to a pedestrian detection method, computer equipment and a storage medium.
Background
In public places, crowds are dense and their activities are concentrated, unpredictable and uneven, which creates risks that threaten people's lives and property, such as fires, public-security incidents and stampedes. To prevent such accidents, real-time people-flow information in the site needs to be acquired to support on-site management and decision-making. In places such as subways and shopping malls, counting people-flow information at each station or entrance also makes it possible to plan departure times and frequencies reasonably for subway stations, to provide travel guidance for passengers, and to let shopping malls formulate suitable marketing plans and staff arrangements for rush hours.
To compile people-flow statistics, pedestrians must first be detected. At present, laser radar (lidar) is a common means of pedestrian detection both at home and abroad. However, uncertainty in the external environment and in pedestrian density can affect the accuracy of pedestrian detection. The mainstream lidar-based approach on the market today combines lidar with deep learning, which requires data to be labeled in advance; this increases the overall research and development cost and lengthens the development cycle.
Disclosure of Invention
In view of the above, the present invention provides a pedestrian detection method, a computer device, and a storage medium, which can improve pedestrian recognition efficiency.
The technical scheme adopted by the invention is as follows: a pedestrian detection method, comprising the steps of:
scanning the environment and pedestrians through a laser radar to obtain three-dimensional point cloud data;
preprocessing the three-dimensional point cloud data to obtain three-dimensional point cloud preprocessing data;
carrying out clustering detection on the three-dimensional point cloud preprocessing data by using a mean shift algorithm to obtain a clustering result;
and analyzing the clustering result to obtain a detection result.
Preferably, the mean shift algorithm is a mean shift algorithm based on height value weight;
the method for carrying out clustering detection on the three-dimensional point cloud preprocessing data by using the mean shift algorithm to obtain a clustering result comprises the following steps:
selecting height from the three-dimensional point cloud preprocessing data as weight to obtain weight parameters;
combining the weight parameters with a mean shift algorithm to obtain a mean shift algorithm based on height value weight;
setting all data points in the three-dimensional point cloud preprocessing data as seed points;
calculating the seed points by using the mean shift algorithm based on the height value weight to obtain a seed clustering result;
and after all the seed points have been mean-shifted, merging the seed clustering results whose seed points lie within a preset distance of one another, so as to obtain a clustering result.
Preferably, the calculating the seed points by using the mean shift algorithm based on the height weight to obtain a seed clustering result includes the following steps:
obtaining the data point density within the preset range of the seed points according to the distribution condition of the seed points;
calculating a drift vector of the seed point according to the data point density, and adding the drift vector and the seed point to obtain a final seed point of the seed point;
setting a path of the seed point moving to the final seed point as a moving path;
classifying the data points within the preset radius range of the final seed point into the clustering result to which the seed point belongs, to obtain the cluster to which the seed point finally belongs;
and classifying the data points within the preset range of the seed point along the moving path into the final cluster to obtain a seed clustering result.
Preferably, the calculating a drift vector of the seed point according to the data point density, and adding the drift vector to the seed point to obtain a final seed point of the seed point includes the following steps:
acquiring data points which take the seed points as the circle centers and are within a preset radius range;
calculating a drift mean vector of the first iteration by using a seed point calculation formula in combination with the data point density;
adding the drifting average value vector and the seed point to obtain a seed point of next iteration;
and repeating the steps until the calculated drift vector is a preset drift vector or the difference value of the calculated drift vector and the preset drift vector is less than or equal to the preset vector difference, and taking the seed point of the last iteration as the final seed point.
Preferably, the analyzing the clustering result to obtain the detection result includes the following steps:
calculating the data attribute in the clustering result to obtain a clustering data attribute;
and judging whether the cluster data attribute meets a preset condition or not to obtain a detection result.
Preferably, the step of judging whether the cluster data attribute meets a preset condition includes the following steps:
judging whether the number of point clouds in the cluster data attributes is lower than a preset number value; if it is lower than the preset number value, determining that the clustering result is not pedestrian data; if it is higher than the preset number value, judging by the head data of the cluster data attributes;
judging whether the length of the head data is lower than a preset length value and whether the width is lower than a preset width value; if the length of the head data is lower than the preset length value or the width is lower than the preset width value, determining that the clustering result is not pedestrian data; if the length of the head data is higher than the preset length value and the width is higher than the preset width value, judging by the height of the head data;
judging whether the height of the head data is lower than a preset height value; if the height of the head data is lower than the preset height value, determining that the clustering result is not pedestrian data; if the height of the head data is higher than the preset height value, judging by the comprehensive feature score of the cluster data attributes;
summing the cluster data attributes according to their weights to obtain the comprehensive feature score; if the comprehensive feature score is lower than a preset score value, determining that the clustering result is not pedestrian data; and if the comprehensive feature score is higher than the preset score value, determining that the clustering result is pedestrian data.
Preferably, the method for scanning the environment and the pedestrians through the laser radar to obtain the three-dimensional point cloud data comprises the following steps:
scanning an environment and pedestrians through the laser radar to obtain a depth image;
and calculating the depth image and preset parameters of the laser radar to obtain the three-dimensional point cloud data.
Preferably, the preprocessing the three-dimensional point cloud data to obtain three-dimensional point cloud preprocessed data includes the following steps:
rotating and translating the three-dimensional point cloud data to enable the environment and the pedestrians scanned by the laser radar to be located on an x-o-y plane;
and screening data which are positioned in a preset detection area and are higher than a preset height of a z axis to obtain the three-dimensional point cloud preprocessing data.
A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method as described above.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method as set forth above.
By adopting the embodiment of the invention, the following beneficial effects are achieved:
according to the invention, the three-dimensional point cloud data acquired by the laser radar is preprocessed to obtain the three-dimensional point cloud preprocessed data, the three-dimensional point cloud preprocessed data is subjected to clustering detection by means of mean shift clustering, the clustering result is analyzed, and whether the scanned object meets the judgment condition of the pedestrian can be judged without labeling data in advance, so that the detection of the pedestrian is realized. Because it does not need to pass through the multilayer calculation of big data, also promoted pedestrian's efficiency of discerning simultaneously.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Wherein:
FIG. 1 is a flow diagram of a pedestrian detection method provided in one embodiment;
FIG. 2 is a schematic flow chart illustrating cluster detection of three-dimensional point cloud pre-processed data using a mean shift algorithm according to an embodiment;
FIG. 3 is a schematic flow chart illustrating a calculation of seed points using a mean shift algorithm based on height weight according to an embodiment;
FIG. 4 is a diagram illustrating a second seed excursion in one embodiment;
FIG. 5 is a schematic diagram of a process for obtaining a final seed point of the seed points provided in one embodiment;
FIG. 6 is a diagram illustrating a first seed excursion in one embodiment;
FIG. 7 is a schematic flow chart of cluster result analysis provided in an embodiment;
FIG. 8 is a flowchart illustrating a process of determining cluster data attributes according to an embodiment;
FIG. 9 is a schematic flow chart of scanning an environment and an object with a lidar in accordance with one embodiment;
FIG. 10 is a depth image of a lidar scanning environment and a pedestrian in one embodiment;
FIG. 11 is a schematic flow chart illustrating pre-processing of three-dimensional point cloud data according to one embodiment;
FIG. 12 is a diagram illustrating the effect of preprocessing three-dimensional point cloud data according to an embodiment;
FIG. 13 is a schematic diagram of ground registration in one embodiment;
FIG. 14 is a diagram showing an internal structure of a computer device in one embodiment;
The invention is further explained below with reference to the figures and examples.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, such that embodiments of the application may be practiced in sequences other than those illustrated or described herein; moreover, the terms "first", "second" and the like are generally used generically and do not limit the number of objects, e.g. the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The pedestrian detection method provided by the present application is described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
Fig. 1 is a flowchart illustrating a pedestrian detection method according to an embodiment. As shown in fig. 1, the pedestrian detection method provided by the application may include the steps of:
step 101, scanning an environment and pedestrians through a laser radar to obtain three-dimensional point cloud data.
Specifically, the lidar observes the environment and pedestrians from an overhead angle. When the laser radar illuminates a person, a depth image can be acquired. The depth image is combined with preset parameters of the laser radar to obtain the three-dimensional point cloud data.
And 102, preprocessing the three-dimensional point cloud data to obtain three-dimensional point cloud preprocessing data.
Specifically, the three-dimensional point cloud data may be viewed from a top-down perspective. Background point clouds can be filtered out of the three-dimensional point cloud data, and the height of each data point above the ground can be distinguished by color.
And 103, carrying out clustering detection on the three-dimensional point cloud preprocessing data by using a mean shift algorithm to obtain a clustering result.
Specifically, the mean shift algorithm is based on kernel density estimation; it assumes that the data point set is obtained by sampling from a probability distribution. In some embodiments, the mean shift algorithm can be computed from the density of data points in the three-dimensional point cloud preprocessed data.
And 104, analyzing the clustering result to obtain a detection result.
Specifically, the clustering result reflects the coordinate characteristics of the three-dimensional point cloud preprocessing data. By analyzing the characteristics of the three-dimensional point cloud preprocessing data, the detection result of the three-dimensional point cloud preprocessing data can be obtained, and the detection result of the pedestrian obtained by the laser radar can be further obtained.
Fig. 2 is a schematic flow chart illustrating cluster detection of three-dimensional point cloud preprocessing data by using a mean shift algorithm according to an embodiment. As shown in fig. 2, the clustering detection of the three-dimensional point cloud preprocessed data by using the mean shift algorithm to obtain a clustering result provided in the embodiment of the present application includes the following steps:
step 201, selecting height from the three-dimensional point cloud preprocessing data as weight to obtain weight parameters.
Specifically, the three-dimensional point cloud preprocessed data has three dimensions: length, width and height. Since height is an important reference in identifying people, it can be selected as the weight to obtain the weight parameter.
Step 202, combining the weight parameter with a mean shift algorithm to obtain a mean shift algorithm based on the height value weight.
Specifically, for person data, after the algorithm iteration is completed, the final seed point should fall at the head position, which is characterized by being the highest part of the human body. Aiming at this characteristic, the drift-vector equation of the conventional mean shift clustering algorithm is improved by introducing a height value weight, yielding the mean shift algorithm based on the height value weight.
And step 203, setting all data points in the three-dimensional point cloud preprocessing data as seed points.
And 204, calculating the seed points by using a mean shift algorithm based on the height value weight to obtain a seed clustering result.
Specifically, each data point contains three coordinates x, y and z, where the z coordinate can be regarded as the height of the data point above the ground. The x and y coordinates of the data are used to calculate the offset vector, and the z coordinate value provides the height weight.
Step 205, after all the seed points have been mean-shifted, merging the seed clustering results whose seed points lie within a preset distance of one another, so as to obtain a clustering result.
Specifically, after the seed clustering results of all the seed points are merged, the entire three-dimensional point cloud preprocessed data is divided into several clusters, giving the clustering result; each cluster contains a certain amount of point cloud data.
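As an illustration of this merging step, the following minimal Python sketch groups converged seed points that lie within a preset merge radius and pools their member points into clusters; the function name, the 0.3 m radius and the data layout are assumptions made for the example, not values taken from the embodiment.

```python
import numpy as np

def merge_seed_results(final_seeds, members_per_seed, merge_radius=0.3):
    """Merge seed clustering results whose converged seed points lie within
    merge_radius of one another, pooling the data points attributed to them.

    final_seeds:      (n_seeds, 2) array of converged seed positions (x, y)
    members_per_seed: list of index arrays, the points attributed to each seed
    """
    cluster_id = -np.ones(len(final_seeds), dtype=int)
    next_id = 0
    for i, s in enumerate(final_seeds):
        if cluster_id[i] >= 0:
            continue
        # unassigned seeds whose final position is within the merge radius share a cluster
        close = (np.linalg.norm(final_seeds - s, axis=1) <= merge_radius) & (cluster_id < 0)
        cluster_id[close] = next_id
        next_id += 1
    clusters = {}
    for i, members in enumerate(members_per_seed):
        clusters.setdefault(int(cluster_id[i]), set()).update(int(m) for m in members)
    return {k: np.array(sorted(v)) for k, v in clusters.items()}
```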
Fig. 3 is a flowchart illustrating a calculation of seed points by using a mean shift algorithm based on height value weights according to an embodiment. As shown in fig. 3, the calculating the seed points by using the mean shift algorithm based on the height weight to obtain the seed clustering result according to the embodiment of the present application includes the following steps:
and 301, obtaining the data point density within the preset range of the seed points according to the distribution condition of the seed points.
Specifically, the mean shift algorithm is based on a kernel density estimation function. According to the distribution of the seed points, the data point density within the preset range of a seed point is obtained using kernel density estimation with a radially symmetric kernel function.
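For reference, the kernel density estimate referred to here can be written in the standard form below; the symbols (number of points $n$, bandwidth $h$, dimension $d$, kernel profile $k$ and normalization constant $c_{k,d}$) are not spelled out in the original text and follow the usual mean shift notation:

$$ \hat{f}(x) = \frac{c_{k,d}}{n\,h^{d}} \sum_{i=1}^{n} k\!\left( \left\| \frac{x - x_i}{h} \right\|^{2} \right) $$

The function $g = -k'$ used in the drift vector formula below is the negative derivative of this kernel profile.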
Step 302, calculating a drift vector of the seed point according to the data point density, and adding the drift vector and the seed point to obtain a final seed point of the seed point.
Specifically, using the data point density, the seed point is shifted toward the location with the highest local density within the preset radius. From the formula for the data point density, a formula for calculating the mean shift vector of the seed point is obtained. The drift vector given by this formula is added to the seed point to produce the seed point of the next iteration.
Repeating this process yields the final seed point of the seed point.
Step 303, setting the path of the seed point moving to the final seed point as the moving path.
Specifically, the seed point may form a path in the process of drifting to the final seed point, and the path may be set as a moving path.
Step 304, classifying the data points within the preset radius range of the final seed point into the clustering result to which the seed point belongs, giving the cluster to which the seed point finally belongs.
Step 305, classifying the data points within the preset range of the seed point along its moving path into that final cluster, giving the seed clustering result.
Specifically, the preset range of the seed point on the moving path may be the preset range around each position at which the seed point stays along the moving path.
FIG. 4 is a diagram illustrating a second seed excursion in one embodiment. As shown in fig. 4, the starting position of the seed point is the large black dot inside the circle; it moves first to the light gray point and then to the dark gray point. When calculating the seed clustering result, all data points that were ever located near the seed point must be attributed to it, so data such as the black dot outside the circle in fig. 4 also need to be classified into the final cluster to which the seed point belongs.
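A possible sketch of this path attribution, under the assumption that the moving path is stored as the list of positions the seed visited and that the same preset radius is reused along the path, is given below; the names and the 0.4 m radius are illustrative only.

```python
import numpy as np

def attribute_path_points(path_xy, points_xy, radius=0.4):
    """Return indices of data points lying within `radius` of any position
    the seed point visited along its moving path (hypothetical radius)."""
    idx = set()
    for p in path_xy:
        d = np.linalg.norm(points_xy - np.asarray(p), axis=1)
        idx.update(np.nonzero(d <= radius)[0].tolist())
    return np.array(sorted(idx))
```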
Fig. 5 is a schematic flow chart of obtaining a final seed point of the seed points in one embodiment. As shown in fig. 5, the calculating a drift vector of a seed point according to a data point density and adding the drift vector to the seed point to obtain a final seed point of the seed point according to the embodiment of the present application includes the following steps:
step 401, obtaining data points within a preset radius range with the seed point as a circle center.
Step 402, calculating a drift mean vector of a first iteration by using a seed point calculation formula in combination with the data point density.
In particular, for a given seed point $x_t$ and bandwidth $h$, the data points $x_i$ within a circle of radius $h$ centered on the seed point are collected, and their number is denoted $n$. The mean shift vector $m_h(x_t)$ of the current iteration is calculated with the following formula, in which $x_t$ denotes the seed point of this iteration, $g(\cdot)$ denotes the derivative of the radially symmetric kernel function, and $z_i$ is the height value of data point $x_i$, introduced as the height value weight:

$$ m_h(x_t) = \frac{\sum_{i=1}^{n} z_i \, g\!\left(\left\|\tfrac{x_t - x_i}{h}\right\|^{2}\right) x_i}{\sum_{i=1}^{n} z_i \, g\!\left(\left\|\tfrac{x_t - x_i}{h}\right\|^{2}\right)} - x_t $$
FIG. 6 is a diagram illustrating a first seed excursion in one embodiment. As shown in fig. 6, when the seed point is the large black point, the data points within range are the small black points inside the circle, and these data points are marked as belonging to the seed point. The offset is calculated with the above formula, giving the position to which the seed point moves next, shown as the gray point in fig. 6.
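A minimal numpy sketch of one height-weighted drift step is given below. The Gaussian kernel profile and the use of the raw z value as the weight are assumptions made for illustration; the description only states that a height value weight is introduced into the drift vector formula.

```python
import numpy as np

def drift_step(seed_xy, points_xyz, bandwidth=0.4):
    """Compute one height-weighted mean shift vector m(x_t) in the x-o-y plane.

    seed_xy:    (2,) current seed position
    points_xyz: (n, 3) preprocessed point cloud, z = height above the ground
    """
    xy = points_xyz[:, :2]
    z = points_xyz[:, 2]
    d = np.linalg.norm(xy - seed_xy, axis=1)
    in_window = d <= bandwidth                 # data points within radius h
    if not np.any(in_window):
        return np.zeros(2)
    u2 = (d[in_window] / bandwidth) ** 2
    g = np.exp(-0.5 * u2)                      # g(.): derivative profile of a Gaussian kernel
    w = g * z[in_window]                       # height value acts as an extra weight
    weighted_mean = (w[:, None] * xy[in_window]).sum(axis=0) / w.sum()
    return weighted_mean - seed_xy             # drift vector m(x_t)
```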
And 403, adding the drifting average value vector and the seed point to obtain a seed point of the next iteration.
Specifically, the mean shift vector $m_h(x_t)$ of this iteration is added to the seed point to generate the seed point of the next iteration:

$$ x_{t+1} = x_t + m_h(x_t) $$
And step 404, repeating the above steps until the calculated drift vector is the preset drift vector, or the difference between the calculated drift vector and the preset drift vector is less than or equal to a preset vector difference; the seed point of the last iteration is taken as the final seed point.
Specifically, the preset drift vector may be 0.
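Building on the drift_step sketch above, the iteration can be written as follows; the convergence tolerance and iteration cap are placeholders, since the text only states that the preset drift vector may be 0.

```python
import numpy as np

def converge_seed(seed_xy, points_xyz, bandwidth=0.4, eps=1e-3, max_iter=100):
    """Iterate x_{t+1} = x_t + m(x_t) until the drift vector is (close to) the
    preset drift vector 0, then return the final seed point."""
    x = np.asarray(seed_xy, dtype=float)
    for _ in range(max_iter):
        m = drift_step(x, points_xyz, bandwidth)   # from the sketch above
        x = x + m
        if np.linalg.norm(m) <= eps:
            break
    return x
```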
Fig. 7 is a schematic flow chart of cluster result analysis provided in an embodiment. As shown in fig. 7, analyzing the clustering result to obtain the detection result according to the embodiment of the present application includes the following steps:
step 501, calculating data attributes in clustering results to obtain clustering data attributes;
specifically, the cluster data attributes include the number of point clouds, the standard deviation of x values, the standard deviation of y values, the standard deviation of z values, the variance of x values, the variance of y values, the variance of z values, the covariance of x and y values, the principal component direction, the length (top view angle) of the point cloud data, the width (top view angle) of the point cloud data, the length and width of the head data, and the head height.
And 502, judging whether the cluster data attribute meets a preset condition or not to obtain a detection result.
Specifically, the detection result may be whether the monitoring object is a pedestrian.
Fig. 8 is a schematic flowchart of determining an attribute of clustered data according to an embodiment. As shown in fig. 8, the determining whether the cluster data attribute meets the preset condition to obtain the detection result according to the embodiment of the present application includes the following steps:
Step 601, judging whether the number of point clouds in the cluster data attributes is lower than a preset number value; if it is lower than the preset number value, determining that the clustering result is not pedestrian data; if it is higher than the preset number value, judging by the head data of the cluster data attributes.
Specifically, the number of point clouds is the number of point cloud data contained in each cluster of the clustering result.
Because the human body has a certain size, after drift clustering of the three-dimensional point cloud preprocessed data, the number of point cloud data in each cluster of the clustering result should be greater than a certain preset value. If the count is lower than the preset number value, the cluster can be excluded as pedestrian data.
Step 602, judging whether the length of the head data is lower than a preset length value and whether the width is lower than a preset width value; if the length of the head data is lower than the preset length value or the width is lower than the preset width value, determining that the clustering result is not pedestrian data; if the length of the head data is higher than the preset length value and the width is higher than the preset width value, judging by the height of the head data.
Specifically, the head data may be the point cloud data in the range from the head height down to 0.2 m below the head height.
The head data of a pedestrian has a certain size, and a human head cannot be smaller than the preset length value, so a cluster whose head data is shorter than that value is determined not to be pedestrian data. Similarly, if the width of the head data is lower than the preset width value, the cluster can be excluded as pedestrian data.
Step 603, judging whether the height of the head data is lower than a preset height value; if the height of the head data is lower than the preset height value, determining that the clustering result is not pedestrian data; if the height of the head data is higher than the preset height value, judging by the comprehensive feature score of the cluster data attributes.
specifically, the maximum z value in the point cloud data of each cluster class in the clustering result is called as the head height.
The pedestrian is a three-dimensional object with a certain height, and whether the pedestrian is a pedestrian can be judged through the height. And if the height of the head data is lower than the preset height value, determining that the clustering result is not the pedestrian data.
Step 604, summing the cluster data attributes according to their weights to obtain the comprehensive feature score; if the comprehensive feature score is lower than a preset score value, determining that the clustering result is not pedestrian data; and if the comprehensive feature score is higher than the preset score value, determining that the clustering result is pedestrian data.
Specifically, the cluster data attributes used here include the standard deviation of the x values, the standard deviation of the y values, the standard deviation of the z values, the variance of the x values, the variance of the y values, the variance of the z values, the covariance of the x and y values, the principal component direction, the length of the point cloud data, and the width of the point cloud data. The preset score value may lie between 0 and 1: if the score is lower than 0.5, the clustering result is determined not to be pedestrian data; if the score is higher than 0.5, it is identified as a pedestrian.
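The cascade of checks in steps 601 to 604 could look roughly like the sketch below; every threshold and weight is a hypothetical tuning parameter, and the attribute names match the cluster_attributes sketch above.

```python
def is_pedestrian(attrs, weights, thresholds):
    """Apply the cascaded checks of steps 601-604 to one cluster's attributes."""
    if attrs["n_points"] < thresholds["min_points"]:
        return False                                   # too few points for a human body
    if (attrs["head_length"] < thresholds["min_head_length"]
            or attrs["head_width"] < thresholds["min_head_width"]):
        return False                                   # head data too small
    if attrs["head_height"] < thresholds["min_height"]:
        return False                                   # too short to be a pedestrian
    # weighted sum of selected scalar attributes -> comprehensive feature score
    score = sum(w * attrs[name] for name, w in weights.items())
    return score >= thresholds["score"]                # e.g. 0.5 on a 0-1 scale
```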
FIG. 9 is a schematic flow chart of scanning an environment and an object with a lidar in one embodiment. As shown in fig. 9, the method for obtaining three-dimensional point cloud data by scanning an environment and pedestrians with a laser radar according to the embodiment of the present application includes the following steps:
step 701, scanning an environment and pedestrians through a laser radar to obtain a depth image;
specifically, fig. 10 is a depth image of the environment and pedestrian scanned by the lidar in one embodiment. As shown in fig. 10, each pixel in the depth image includes the distance from the position of the pixel to the radar, and the gray value is assigned according to the distance value of each pixel. The darker the color, the closer the representation is to the radar.
And step 702, calculating the depth image and preset parameters of the laser radar to obtain three-dimensional point cloud data.
Specifically, the preset parameters of the laser radar include cx (central point x in the horizontal direction of the camera), cy (central point y in the vertical direction of the camera), fx (focal length in the horizontal direction), fy (focal length in the vertical direction), and unit (unit of distance).
Assume a depth map of size W x H, in which the position of a depth map pixel is $(u, v)$ and $d$ is the depth value of that pixel. The calculation formula for obtaining the point cloud is:

$$ z = \frac{d}{\mathrm{unit}}, \qquad x = \frac{(u - c_x)\, z}{f_x}, \qquad y = \frac{(v - c_y)\, z}{f_y} $$

The coordinates of the point cloud are (x, y, z).
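A sketch of this back-projection, assuming the depth image is a numpy array and that `unit` converts the stored depth values to meters (e.g. 1000 for millimeters):

```python
import numpy as np

def depth_to_point_cloud(depth, cx, cy, fx, fy, unit=1000.0):
    """Back-project a depth image into 3-D points using the lidar's preset parameters."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth / unit                           # distance in meters
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop pixels with no return
```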
Fig. 11 is a schematic flow chart illustrating preprocessing of three-dimensional point cloud data according to an embodiment. As shown in fig. 11, the preprocessing three-dimensional point cloud data to obtain three-dimensional point cloud preprocessed data provided in the embodiment of the present application includes the following steps:
step 801, rotating and translating the three-dimensional point cloud data to enable the environment and pedestrians scanned by the laser radar to be located on an x-o-y plane;
specifically, fig. 12 is an effect diagram of processing the depth image into three-dimensional point cloud data according to an embodiment. As shown in fig. 12, the left image is a depth image obtained by scanning with the laser radar, the depth image is calculated with preset parameters of the laser radar to obtain three-dimensional point cloud data, and the point cloud shown in the right three-dimensional point cloud image is obtained after the background point cloud is filtered through rotation and translation processing. The right three-dimensional point cloud chart is a chart photographed in a plan view, and the data points are colored according to the difference in height values. The color represents the height from the ground, and the ascending order of the height value is from white to gray and then to black. The head data at the highest position of the person is seen to be black.
FIG. 13 is a schematic diagram of ground registration in one embodiment. As shown in fig. 13, the data are preprocessed by rotating and translating the three-dimensional point cloud so that the ground data lie in the x-o-y plane, which facilitates subsequent processing. The three-dimensional point cloud obtained by the area-array laser radar is initially expressed in a coordinate system with the sensor as the origin, which is inconvenient for visual processing; since point cloud data are invariant under rotation and translation, they can be rotated and translated without loss of information.
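A minimal sketch of this ground registration is shown below; the rotation matrix and translation vector would come from a calibration of the sensor's mounting pose, and the example angle and height are purely illustrative.

```python
import numpy as np

def align_to_ground(points, R, t):
    """Rotate and translate sensor-frame points so that the ground lies in the x-o-y plane."""
    return points @ R.T + t

# Illustrative pose: lidar pitched 30 degrees downward, mounted 3 m above the floor.
theta = np.deg2rad(30.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([0.0, 0.0, 3.0])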
Step 802, screening the data that are within the preset detection area and above a preset height on the z axis to obtain the three-dimensional point cloud preprocessed data.
Specifically, a pedestrian has a certain height, so data below the preset height can be excluded from the pedestrian data.
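The screening itself reduces to a boolean mask over the aligned points; the detection-area bounds and the 0.5 m minimum height below are placeholder values.

```python
import numpy as np

def screen_points(points, x_range, y_range, min_z=0.5):
    """Keep points inside the preset detection area and above the preset z height."""
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1]) &
         (points[:, 2] >= min_z))
    return points[m]
```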
In an embodiment, a computer-readable storage medium is provided, which stores a computer program, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the pedestrian detection method, and can achieve the same technical effect, and for avoiding repetition, details are not repeated here.
In an embodiment, a computer device is provided, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the computer program implements each process of the pedestrian detection method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
FIG. 14 is a diagram illustrating an internal structure of a computer device according to an embodiment. The computer device may specifically be a terminal, and may also be a server. As shown in fig. 14, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the pedestrian detection method. The internal memory may also have a computer program stored therein, which when executed by the processor, causes the processor to perform a pedestrian detection method. Those skilled in the art will appreciate that the architecture shown in fig. 14 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A pedestrian detection method, characterized by comprising the steps of:
scanning the environment and pedestrians through a laser radar to obtain three-dimensional point cloud data;
preprocessing the three-dimensional point cloud data to obtain three-dimensional point cloud preprocessing data;
carrying out clustering detection on the three-dimensional point cloud preprocessing data by using a mean shift algorithm to obtain a clustering result;
and analyzing the clustering result to obtain a detection result.
2. The pedestrian detection method of claim 1, wherein the mean-shift algorithm is a height value weight-based mean-shift algorithm;
the method for carrying out clustering detection on the three-dimensional point cloud preprocessing data by using the mean shift algorithm to obtain a clustering result comprises the following steps:
selecting height from the three-dimensional point cloud preprocessing data as weight to obtain weight parameters;
combining the weight parameters with a mean shift algorithm to obtain a mean shift algorithm based on height value weight;
setting all data points in the three-dimensional point cloud preprocessing data as seed points;
calculating the seed points by using the mean shift algorithm based on the height value weight to obtain a seed clustering result;
and merging the seed clustering results of which the distances between the seed points are within a preset range after all the seed points are subjected to mean shift to obtain a clustering result.
3. The pedestrian detection method according to claim 2, wherein the step of calculating the seed points by using the mean shift algorithm based on the height value weight to obtain a seed clustering result comprises the following steps:
obtaining the data point density within the preset range of the seed points according to the distribution condition of the seed points;
calculating a drift vector of the seed point according to the data point density, and adding the drift vector and the seed point to obtain a final seed point of the seed point;
setting a path of the seed point moving to the final seed point as a moving path;
classifying the data points within the preset radius range of the final seed point into the clustering result to which the seed point belongs, to obtain the cluster to which the seed point finally belongs;
and classifying the data points within the preset range of the seed point along the moving path into the final cluster to obtain a seed clustering result.
4. The pedestrian detection method of claim 3, wherein the step of calculating a drift vector of the seed point according to the data point density, and adding the drift vector to the seed point to obtain a final seed point of the seed point comprises the steps of:
acquiring data points which take the seed points as the circle centers and are within a preset radius range;
calculating a drift mean vector of the first iteration by using a seed point calculation formula in combination with the data point density;
adding the drifting average value vector and the seed point to obtain a seed point of next iteration;
and repeating the above steps until the calculated drift vector is a preset drift vector or the difference between the calculated drift vector and the preset drift vector is less than or equal to a preset vector difference, and taking the seed point of the last iteration as the final seed point.
5. The pedestrian detection method according to claim 1, wherein the analyzing the clustering result to obtain a detection result comprises the following steps:
calculating the data attribute in the clustering result to obtain a clustering data attribute;
and judging whether the cluster data attribute meets a preset condition or not to obtain a detection result.
6. The pedestrian detection method according to claim 5, wherein judging whether the cluster data attribute meets a preset condition comprises:
judging whether the point cloud number of the clustering data attributes is lower than a preset number value or not; if the point cloud number of the clustering data attributes is lower than a preset number value, determining that the clustering result is not pedestrian data; if the point cloud number of the cluster data attribute is higher than a preset number value, judging through the head data of the cluster data attribute;
judging whether the length of the head data is lower than a length preset value and the width is lower than a width preset value; if the length of the head data is lower than a preset length value or the width of the head data is lower than a preset width value, determining that the clustering result is not pedestrian data; if the length of the head data is higher than a preset length value and the width of the head data is higher than a preset width value, judging according to the height of the head data;
judging whether the height of the head data is lower than a preset height value; if the height of the head data is lower than the preset height value, determining that the clustering result is not pedestrian data; if the height of the head data is higher than the preset height value, judging by the comprehensive feature score value of the cluster data attributes;
summing the cluster data attributes according to their weights to obtain the comprehensive feature score value; if the comprehensive feature score value is lower than a preset score value, determining that the clustering result is not pedestrian data; and if the comprehensive feature score value is higher than the preset score value, determining that the clustering result is pedestrian data.
7. The pedestrian detection method according to any one of claims 1 to 6, wherein the scanning of the environment and the pedestrian by the laser radar to obtain the three-dimensional point cloud data comprises the following steps:
scanning an environment and pedestrians through the laser radar to obtain a depth image;
and calculating the depth image and preset parameters of the laser radar to obtain the three-dimensional point cloud data.
8. The pedestrian detection method according to any one of claims 1 to 6, wherein the preprocessing the three-dimensional point cloud data to obtain three-dimensional point cloud preprocessed data includes the steps of:
rotating and translating the three-dimensional point cloud data to enable the environment and the pedestrians scanned by the laser radar to be located on an x-o-y plane;
and screening data which are positioned in a preset detection area and are higher than a preset height of a z axis to obtain the three-dimensional point cloud preprocessing data.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 8.
10. A computer arrangement comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 8.
CN202310031146.1A 2023-01-10 2023-01-10 Pedestrian detection method, computer equipment and storage medium Active CN115713787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310031146.1A CN115713787B (en) 2023-01-10 2023-01-10 Pedestrian detection method, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310031146.1A CN115713787B (en) 2023-01-10 2023-01-10 Pedestrian detection method, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115713787A true CN115713787A (en) 2023-02-24
CN115713787B CN115713787B (en) 2023-07-04

Family

ID=85236272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310031146.1A Active CN115713787B (en) 2023-01-10 2023-01-10 Pedestrian detection method, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115713787B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022111682A1 (en) * 2020-11-30 2022-06-02 深圳市普渡科技有限公司 Moving pedestrian detection method, electronic device and robot
CN114495026A (en) * 2022-01-07 2022-05-13 武汉市虎联智能科技有限公司 Laser radar identification method and device, electronic equipment and storage medium
CN114612795A (en) * 2022-03-02 2022-06-10 南京理工大学 Laser radar point cloud-based road surface scene target identification method
CN115184951A (en) * 2022-06-23 2022-10-14 深圳市鸿逸达科技有限公司 Method, equipment and medium for detecting pedestrian and vehicle flow in real time based on laser radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUCIANO SPINELLOA ET AL.: "A Layered Approach to People Detection in 3D Range Data" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116243273A (en) * 2023-05-09 2023-06-09 中国地质大学(武汉) Photon counting laser radar data filtering method and device
CN116243273B (en) * 2023-05-09 2023-09-15 中国地质大学(武汉) Photon counting laser radar data filtering method for vegetation canopy extraction

Also Published As

Publication number Publication date
CN115713787B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
CN106778586B (en) Off-line handwritten signature identification method and system
Xu et al. Multiple-entity based classification of airborne laser scanning data in urban areas
KR102143108B1 (en) Lane recognition modeling method, device, storage medium and device, and recognition method, device, storage medium and device
Che et al. Multi-scan segmentation of terrestrial laser scanning data based on normal variation analysis
CN111353512B (en) Obstacle classification method, obstacle classification device, storage medium and computer equipment
JP6621445B2 (en) Feature extraction device, object detection device, method, and program
US20110019920A1 (en) Method, apparatus, and program for detecting object
JP5257274B2 (en) MOBILE BODY DETECTING DEVICE, MOBILE BODY DETECTING METHOD, AND COMPUTER PROGRAM
San et al. Building extraction from high resolution satellite images using Hough transform
CN109255298A (en) Safety helmet detection method and system in dynamic background
CN110728252B (en) Face detection method applied to regional personnel motion trail monitoring
CN104915642B (en) Front vehicles distance measuring method and device
CN106033601A (en) Method and apparatus for detecting abnormal situation
CN111144228A (en) Obstacle identification method based on 3D point cloud data and computer equipment
Aijazi et al. Detecting and updating changes in lidar point clouds for automatic 3d urban cartography
CN115713787A (en) Pedestrian detection method, computer equipment and storage medium
US20170053172A1 (en) Image processing apparatus, and image processing method
KR20200123537A (en) Method for detecting sinkhole using deep learning and data association and sinkhole detecting system using it
Wu et al. A variable dimension-based method for roadside LiDAR background filtering
Yao et al. Automated detection of 3D individual trees along urban road corridors by mobile laser scanning systems
Anders et al. Rule set transferability for object-based feature extraction: An example for cirque mapping
CN103150573B (en) Based on the nerve dendritic spine image classification method of many resolving power fractal characteristic
CN115019163A (en) City factor identification method based on multi-source big data
CN114359825A (en) Monitoring method and related product
Tatebe et al. Can we detect pedestrians using low-resolution LIDAR?-Integration of multi-frame point-clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant