CN117471484B - Pedestrian navigation method, computer-readable storage medium and electronic equipment - Google Patents

Pedestrian navigation method, computer-readable storage medium and electronic equipment

Info

Publication number
CN117471484B
CN117471484B (application CN202311827987.XA)
Authority
CN
China
Prior art keywords
pedestrian
information
sensor
laser radar
pedestrians
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311827987.XA
Other languages
Chinese (zh)
Other versions
CN117471484A (en)
Inventor
张钦满
石雪丽
李立凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LeiShen Intelligent System Co Ltd
Original Assignee
LeiShen Intelligent System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LeiShen Intelligent System Co Ltd
Priority to CN202311827987.XA
Publication of CN117471484A
Application granted
Publication of CN117471484B
Legal status: Active (granted)

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a pedestrian navigation method, a computer-readable storage medium, and an electronic device. The method includes: after receiving authentication information, obtaining the position information of the pedestrian at the current moment based on a sensor, where the sensor comprises a plurality of laser radars; combining the position information of the pedestrian at the current moment with map information, the combination being defined as first related information; updating the position information of the pedestrian in the first related information in real time based on the sensor; and sending the first related information, updated in real time, to the pedestrian. By associating the pedestrian, through their authentication information, with the laser radar whose scanning range covers them, the pedestrian can obtain their own position information from the laser radar and, in combination with map information, navigate without relying on GPS signals.

Description

Pedestrian navigation method, computer-readable storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of positioning and navigation technologies, and in particular to a pedestrian navigation method, a computer-readable storage medium, and an electronic device.
Background
In everyday life, navigation generally relies on the Global Positioning System (GPS). However, because of the pedestrian's terminal, building occlusion, weather, and similar factors, GPS signals are often lost or weak, and accurate navigation then becomes impossible. A common remedy is to combine GPS positioning with cellular-network positioning: while the GPS signal is available, positioning is performed by GPS; when the GPS signal is lost or weak, cellular-network positioning takes over. However, cellular-network positioning relies mainly on the operator's base-station network and has low accuracy, typically an error of tens to hundreds of meters. For high-accuracy map navigation, and in particular for pedestrian navigation, it is difficult to meet the demand.
Disclosure of Invention
In view of this, the present application provides a pedestrian navigation method, a computer-readable storage medium, and an electronic device, which can achieve high-precision navigation for pedestrians without depending on GPS signals.
In a first aspect, the present application provides a pedestrian navigation method, including:
after receiving the authentication information, obtaining the position information of the pedestrian at the current moment based on a sensor, wherein the sensor comprises a plurality of laser radars;
combining the position information of the pedestrian at the current moment with map information, the combination being defined as first related information;
updating the position information of the pedestrian in the first related information in real time based on the sensor;
and sending the first related information updated in real time to the pedestrian.
In an optional embodiment, obtaining the position information of the pedestrian at the current moment based on the sensor after receiving the authentication information includes:
receiving an activation signal from the pedestrian, and determining which sensor's scanning range the pedestrian is in;
receiving action information of the pedestrian, and determining the pedestrian's object in the sensor;
and obtaining the position information of the pedestrian at the current moment based on the pedestrian's object in the sensor.
In an alternative embodiment, receiving the action information of the pedestrian and determining the pedestrian's object in the sensor includes either of the following:
sending a corresponding specific action signal to the pedestrian based on the pedestrian's activation signal, and determining the pedestrian's object in the sensor based on the sensor's perception data;
or,
determining the pedestrian's object in the sensor based on the pedestrian's selection within the sensor's perception data.
In an optional embodiment, obtaining the position information of the pedestrian at the current moment based on the sensor after receiving the authentication information includes:
receiving request information, sent by the pedestrian, indicating that matching is needed;
sending a corresponding specific action signal to the pedestrian based on the request information;
determining, based on the perception data of the sensors, which sensor's scanning range the pedestrian is in, and determining the pedestrian's object in that sensor;
and obtaining the position information of the pedestrian at the current moment based on the pedestrian's object in the sensor.
In an optional embodiment, the sensor further comprises a plurality of cameras, and updating the position information of the pedestrian in the first related information in real time based on the sensor includes:
defining the laser radar of the acquisition area in which the pedestrian is currently located as the reference laser radar, and obtaining parameter information of the pedestrian based on the point cloud data of the reference laser radar, where the parameter information includes contour information and motion state parameters;
defining the camera of the acquisition area in which the pedestrian is located as the reference camera, and obtaining color information of the pedestrian based on the image data of the reference camera;
if the pedestrian leaves the acquisition area of the reference laser radar, defining the laser radars adjacent to the reference laser radar as first-type laser radars;
screening the first-type laser radars based on the motion state parameters of the pedestrian, the screening result being defined as third-type laser radars;
and obtaining the tracked object of the pedestrian based on the contour information and the color information of the pedestrians in the third-type laser radars.
In an optional embodiment, obtaining the tracked object of the pedestrian based on the contour information and the color information of the pedestrians in the third-type laser radars includes:
if the contour information and the color information of a pedestrian in a third-type laser radar simultaneously satisfy the following two conditions, defining that pedestrian in the third-type laser radar as the tracked object of the pedestrian:
the difference between the contour information of the pedestrian in the third-type laser radar and the contour information of the pedestrian is within a preset value;
and the difference between the color information of the pedestrian in the third-type laser radar and the color information of the pedestrian is within a preset value.
In an optional embodiment, judging whether the difference between the contour information of the pedestrian in the third-type laser radar and the contour information of the pedestrian is within the preset value includes:
the contour information includes shape information and contour line length information; if the difference between the shape information of the pedestrian in the third-type laser radar and the shape information of the pedestrian is within a preset value, the difference between their contour information is determined to be within the preset value;
otherwise, if the difference between the contour line length information of the pedestrian in the third-type laser radar and the contour line length information of the pedestrian is within a preset value, the difference between their contour information is determined to be within the preset value.
In an optional embodiment, screening the first-type laser radars based on the motion state parameters of the pedestrian, the screening result being defined as third-type laser radars, includes:
the motion state parameters include heading angle information and speed information;
screening the first-type laser radars based on the heading angle information, the screening result being defined as second-type laser radars;
and screening the second-type laser radars based on the distance between the reference laser radar and the second-type laser radars, in combination with the speed information of the pedestrian, the screening result being defined as third-type laser radars.
In an optional implementation, screening the second-type laser radars based on the distance between the reference laser radar and the second-type laser radars, in combination with the speed information of the pedestrian, the screening result being defined as third-type laser radars, includes:
based on the distance between the reference laser radar and a second-type laser radar, in combination with the scanning ranges of the two, obtaining the distance across the blind area between them, defined as the blind area distance;
based on the speed at which the pedestrian leaves the acquisition area of the reference laser radar, in combination with the blind area distance, obtaining the time the pedestrian needs to cross the blind area between the reference laser radar and the second-type laser radar, defined as the blind area time;
obtaining the number of frames after which the pedestrian should enter the acquisition area of the second-type laser radar, based on the blind area time and the frame rate of the second-type laser radar, defined as the blind area frame number;
and screening the second-type laser radars based on the blind area frame number, the screening result being defined as third-type laser radars.
In a second aspect, the present application provides a pedestrian navigation method, including:
the pedestrian sends authentication information to the sensor, the authentication information being used to determine which sensor's scanning range the pedestrian is in; the sensor comprises a plurality of laser radars;
the pedestrian receives first related information, defined as the combination of the position information of the pedestrian at the current moment and map information, where the position information of the pedestrian at the current moment is obtained based on the laser radar's perception of the pedestrian;
and the pedestrian receives the first related information updated in real time.
In a third aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method of the first or second aspect of the present application.
In a fourth aspect, the present application provides an electronic device comprising one or more processors and a memory associated with the one or more processors, the memory storing program instructions which, when read and executed by the one or more processors, perform the steps of the method of the first or second aspect of the present application.
The embodiments of the present application have the following beneficial effects:
by associating the pedestrian, through their authentication information, with the laser radar whose scanning range covers them, the pedestrian can obtain their own position information from the laser radar and, in combination with map information, navigate without relying on GPS signals.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting its scope; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 shows a flowchart of a pedestrian navigation method of embodiment 1 of the present application;
FIG. 2 shows a flow chart of one implementation of S1000 in example 1 of the present application;
FIG. 3 shows a flow chart of another implementation of S1000 in example 1 of the present application;
FIG. 4 shows a flow chart of one implementation of S3000 in example 1 of the present application;
FIG. 5 shows a flow chart of one implementation of S3400 in example 1 of the present application;
FIG. 6 shows a flow chart of one implementation of S3420 in example 1 of the present application;
fig. 7 shows a flowchart of a pedestrian navigation method of embodiment 2 of the present application.
Description of the embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application.
The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of configurations. The following detailed description of the embodiments, as provided in the accompanying drawings, is therefore not intended to limit the claimed scope of the application, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
In the following, the terms "comprises", "comprising", "having" and their cognates, as used in the various embodiments of the present application, refer only to a particular feature, number, step, operation, element, component, or combination thereof, and should not be interpreted as excluding the existence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof. Furthermore, the terms "first", "second", "third", and the like are used merely to distinguish descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of this application belong. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The embodiments described below and features of the embodiments may be combined with each other without conflict.
For ease of understanding, the background art will now be explained in detail.
In everyday life, navigation generally relies on the Global Positioning System (GPS). However, because of the pedestrian's terminal, building occlusion, weather, and similar factors, GPS signals are often lost or weak, and accurate navigation then becomes impossible. A common remedy is to combine GPS positioning with cellular-network positioning: while the GPS signal is available, positioning is performed by GPS; when the GPS signal is lost or weak, cellular-network positioning takes over. However, cellular-network positioning relies mainly on the operator's base-station network and has low accuracy, typically an error of tens to hundreds of meters. For high-accuracy map navigation, it is difficult to meet the demand.
In particular, in some large indoor buildings (such as subway stations) or underground passages, pedestrians essentially cannot receive GPS signals (or the signal strength is low) because of the building's shielding. Generally, a pedestrian's ultimate goal is to reach a certain location on the ground (that is, to go from ground position A through a subway station or underground passage to ground position B). However, since some subway stations and underground passages occupy a very large area and have numerous entrances and exits, a pedestrian who has made several turns inside easily loses their sense of direction. Once that happens, the correct exit can no longer be chosen from the relative direction of ground positions A and B. Although the exits of existing subway stations and underground passages are marked with the names of nearby landmark buildings or roads, some roads with heavy traffic have no zebra crossings, and their two sides are connected only through the subway station or underground passage. A person who has lost both their sense of direction and the GPS signal cannot select the correct exit at all and can easily keep circling inside the subway station or underground passage, seriously wasting commuting time.
Against the above background, the pedestrian navigation method will be described below in connection with some specific embodiments.
Example 1
Embodiment 1 is a specific embodiment from the system side. By way of example, as shown in fig. 1, embodiment 1 provides a pedestrian navigation method, including:
s1000: after receiving the authentication information, obtaining the position information of the pedestrian at the current moment based on a sensor, wherein the sensor comprises a plurality of laser radars;
specifically, a unified coordinate system is required to be set for each laser radar, point cloud data obtained by each laser radar are converted into the unified coordinate system, and regional point cloud data obtained by scanning each laser radar are fused to obtain complete point cloud data. When specific point cloud data are fused, the cross overlapping area is cut off, the area with an unsuitable angle is cut off, and then the point cloud data after cutting off are combined to obtain the point cloud data of the whole area. Compared with a camera, the point cloud data obtained by the laser radar has position information, so that pedestrians can know the position information of the pedestrians based on the point cloud data of the laser radar.
In one embodiment, as shown in fig. 2, S1000 includes:
S1100: receiving an activation signal from the pedestrian, and determining which sensor's scanning range the pedestrian is in;
S1200: receiving action information of the pedestrian, and determining the pedestrian's object in the sensor;
S1300: obtaining the position information of the pedestrian at the current moment based on the pedestrian's object in the sensor.
Specifically, this embodiment shows that once the system receives the pedestrian's activation signal, it can establish which laser radar's scanning range the pedestrian is currently in, and thereby determine the pedestrian's current position information.
In one embodiment, S1200 includes:
S1210: sending a corresponding specific action signal to the pedestrian based on the pedestrian's activation signal, and determining the pedestrian's object in the sensor based on the sensor's perception data;
or,
S1220: determining the pedestrian's object in the sensor based on the pedestrian's selection within the sensor's perception data.
Specifically, the question is how the system determines, from the laser radar point clouds, which laser radar's point cloud the pedestrian is in and which object in that point cloud corresponds to the pedestrian. In this embodiment, the pedestrian's activation signal makes clear which sensor's scanning range the pedestrian is currently in; the activation signal may be issued by pressing a button (each button corresponding to one laser radar), scanning an identification code (each code corresponding to one laser radar), or similar means, which this application does not limit. Two ways of establishing which object in the point cloud corresponds to the pedestrian are provided. In the first, after the corresponding laser radar is determined, the pedestrian views that laser radar's point cloud on a terminal (such as a mobile phone) and directly selects the object in the point cloud that is themselves. In the second, after the corresponding laser radar is determined, the pedestrian performs a specific action, and the system judges from the laser radar's point cloud which object is currently making that action, so that the system automatically knows which object in the point cloud is the pedestrian. The specific action may be one that does not normally occur, such as rotating the whole body in place several turns, or circling an arm several times while facing the laser radar. Since ordinary pedestrians do not make such movements in these scenes, the probability of two people making them simultaneously within one laser radar's view is extremely low; as long as the system recognizes a person making the specific action, it can determine that the corresponding object in the point cloud is the pedestrian. This application does not limit the definition of the specific action: any unusual movement that does not normally appear in the scene and that the system can recognize from point cloud data may be used.
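As a hedged illustration of the second way, the check below flags a tracked point cloud object as the requesting pedestrian if, within the matching window, its centroid stays nearly in place while its body orientation sweeps through at least two full turns; the track format and both thresholds are assumptions, not values from the patent.

```python
import numpy as np

def made_in_place_rotation(centroids: np.ndarray,     # (T, 2) per-frame x, y
                           headings_rad: np.ndarray,  # (T,) per-frame orientation
                           max_drift_m: float = 0.3,
                           min_turn_rad: float = 2 * (2 * np.pi)) -> bool:
    """Detect a 'rotate in place' specific action from a tracked object."""
    drift = np.linalg.norm(centroids - centroids.mean(axis=0), axis=1).max()
    # Unwrap headings so consecutive samples accumulate instead of wrapping at pi.
    total_turn = np.abs(np.diff(np.unwrap(headings_rad))).sum()
    return drift <= max_drift_m and total_turn >= min_turn_rad
```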
In one embodiment, as shown in fig. 3, S1000 includes:
S'1100: receiving request information, sent by the pedestrian, indicating that matching is needed.
S'1200: sending a corresponding specific action signal to the pedestrian based on the request information.
S'1300: determining, based on the perception data of the sensors, which sensor's scanning range the pedestrian is in, and determining the pedestrian's object in that sensor.
Specifically, this is another way for the system to determine, from the laser radar point clouds, which laser radar's point cloud the pedestrian is in and which object corresponds to the pedestrian. In this embodiment, the system receives the pedestrian's request for matching and generates a corresponding specific action from the request information; then, based on whether that specific action appears in some laser radar within the specified time period, the system can determine which laser radar's point cloud the pedestrian is in and which object of that point cloud corresponds to the pedestrian. The definition of the specific action is the same as in the previous embodiments and is not repeated here.
S'1400: obtaining the position information of the pedestrian at the current moment based on the pedestrian's object in the sensor.
S2000: combining the position information of the pedestrian at the current moment with map information, the combination being defined as first related information;
Specifically, since the position information of the pedestrian at the current moment can be obtained from the laser radar point cloud data, the pedestrian's current position can be combined with map information to obtain a navigation function similar to that based on a GPS signal. Note that the map information may be map information of the environment the pedestrian is currently in: for example, for a pedestrian walking in a subway station, the station map can be combined with the pedestrian's position so that the pedestrian knows exactly where they are in the station and where the desired exit lies relative to them. Alternatively, for a pedestrian walking in an underground passage, the map of the road surface above the passage can be combined with the pedestrian's position, so that even while walking underground the pedestrian knows their position on the road surface; for a pedestrian who has lost their sense of direction in a relatively complex underground passage, this is a very important aid for accurately reaching a designated position on the road. A minimal sketch of this combining step follows.
S3000: updating the position information of the pedestrian in the first related information in real time based on the sensor;
Specifically, the pedestrian keeps moving according to the first related information, so their position changes continuously. The pedestrian must therefore be tracked without interruption, both to guarantee real-time position information and to avoid confusion with the position information of other pedestrians. Two cases arise. In the first, the scanning ranges of the laser radars together cover the whole area; since there is no laser radar blind area, the pedestrian can be tracked in real time directly by the laser radars. In the second, the scanning ranges cannot cover the whole area, so laser radar blind areas necessarily exist and require further processing.
In an embodiment, as shown in fig. 4, the sensor in S3000 further comprises a plurality of cameras, and S3000 includes:
S3100: defining the laser radar of the acquisition area in which the pedestrian is currently located as the reference laser radar, and obtaining parameter information of the pedestrian based on the point cloud data of the reference laser radar, where the parameter information includes contour information and motion state parameters;
S3200: defining the camera of the acquisition area in which the pedestrian is located as the reference camera, and obtaining color information of the pedestrian based on the image data of the reference camera;
Specifically, S3100 and S3200 may be performed in parallel or sequentially; this application does not limit the order.
S3300: if the pedestrian leaves the acquisition area of the reference laser radar, defining the laser radars adjacent to the reference laser radar as first-type laser radars;
S3400: screening the first-type laser radars based on the motion state parameters of the pedestrian, the screening result being defined as third-type laser radars;
Specifically, tracking the pedestrian in real time based on the laser radars means that every laser radar participates in the tracking. If there are no laser radar blind areas, the pedestrian can be tracked in real time directly through the laser radars. If blind areas exist, then after the pedestrian leaves the current laser radar, the adjacent laser radars must be screened to estimate where the pedestrian will reappear. The screening is based mainly on the pedestrian's motion state parameters and yields the third-type laser radars. The tracked object can then be found within the third-type laser radars by exploiting the fact that the pedestrian's contour information and color information do not change when crossing different laser radar scanning areas.
In one embodiment, as shown in fig. 5, the motion state parameters in S3400 include heading angle information and speed information, and S3400 includes:
S3410: screening the first-type laser radars based on the heading angle information, the screening result being defined as second-type laser radars;
S3420: screening the second-type laser radars based on the distance between the reference laser radar and the second-type laser radars, in combination with the speed information of the pedestrian, the screening result being defined as third-type laser radars.
Specifically, the first-type laser radars are coarsely screened according to the pedestrian's heading angle information (all laser radars that do not lie along the corresponding heading are removed) to obtain the second-type laser radars; a sketch of this step follows.
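The coarse screening can be sketched as below: keep only those adjacent laser radars whose bearing, as seen from the reference laser radar, lies within an angular tolerance of the pedestrian's heading. The 45-degree tolerance is an illustrative choice, not a value from the patent.

```python
import numpy as np

def screen_by_heading(reference_pos: np.ndarray,
                      candidates: dict,          # lidar_id -> (2,) position
                      heading_rad: float,
                      tolerance_rad: float = np.deg2rad(45.0)) -> list:
    """Coarse screening of first-type lidars by the pedestrian's heading angle."""
    kept = []
    for lidar_id, pos in candidates.items():
        bearing = np.arctan2(pos[1] - reference_pos[1], pos[0] - reference_pos[0])
        # Smallest angular difference between candidate bearing and heading.
        diff = abs((bearing - heading_rad + np.pi) % (2 * np.pi) - np.pi)
        if diff <= tolerance_rad:
            kept.append(lidar_id)  # a second-type lidar candidate
    return kept
```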
In one embodiment, as shown in fig. 6, S3420 includes:
S3421: based on the distance between the reference laser radar and a second-type laser radar, in combination with the scanning ranges of the two, obtaining the distance across the blind area between them, defined as the blind area distance;
S3422: based on the speed at which the pedestrian leaves the acquisition area of the reference laser radar, in combination with the blind area distance, obtaining the time the pedestrian needs to cross the blind area between the reference laser radar and the second-type laser radar, defined as the blind area time;
S3423: obtaining the number of frames after which the pedestrian should enter the acquisition area of the second-type laser radar, based on the blind area time and the frame rate of the second-type laser radar, defined as the blind area frame number;
S3424: screening the second-type laser radars based on the blind area frame number, the screening result being defined as third-type laser radars.
Specifically, since the blind area distance between a second-type laser radar and the reference laser radar is known, it can be combined with the speed at which the pedestrian left the reference laser radar to determine when the pedestrian will have covered that distance (the blind area time) and entered the second-type laser radar's acquisition area. Because a laser radar's frame rate is essentially fixed, the blind area time and the frame rate together indicate after how many frames the pedestrian is likely to appear in the second-type laser radar's point cloud data after leaving the reference laser radar. The second-type laser radars can therefore be finely screened, yielding the third-type laser radars, as in the worked sketch below.
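A worked sketch of S3421 to S3423 under illustrative numbers; the geometric model (blind area distance as the gap between two circular scanning ranges along the line connecting the radars) is an assumption made for simplicity.

```python
def blind_area_frames(spacing_m: float, ref_range_m: float, cand_range_m: float,
                      exit_speed_m_s: float, frame_rate_hz: float) -> int:
    """Blind area distance -> blind area time -> blind area frame number."""
    blind_distance = max(spacing_m - ref_range_m - cand_range_m, 0.0)  # S3421
    blind_time = blind_distance / exit_speed_m_s                       # S3422
    return round(blind_time * frame_rate_hz)                           # S3423

# Example: radars 30 m apart, each scanning a 12 m radius, pedestrian leaving
# at 1.5 m/s, 10 Hz frame rate -> expected to reappear about 40 frames later.
print(blind_area_frames(30.0, 12.0, 12.0, 1.5, 10.0))  # -> 40
```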
S3500: obtaining the tracked object of the pedestrian based on the contour information and the color information of the pedestrians in the third-type laser radars.
In one embodiment, S3500 includes:
if the contour information and the color information of a pedestrian in a third-type laser radar satisfy the following two conditions at the same time, defining that pedestrian in the third-type laser radar as the tracked object of the pedestrian:
Condition 1: the difference between the contour information of the pedestrian in the third-type laser radar and the contour information of the pedestrian is within a preset value;
Condition 2: the difference between the color information of the pedestrian in the third-type laser radar and the color information of the pedestrian is within a preset value.
In one embodiment, condition 1 in S3500 includes:
the contour information includes shape information and contour line length information; if the difference between the shape information of the pedestrian in the third-type laser radar and the shape information of the pedestrian is within a preset value, the difference between their contour information is determined to be within the preset value;
otherwise, if the difference between the contour line length information of the pedestrian in the third-type laser radar and the contour line length information of the pedestrian is within a preset value, the difference between their contour information is determined to be within the preset value.
When crossing different laser radar scanning areas, a pedestrian's shape parameters may change because of the walking motion, but the length of the contour line itself generally does not change. Judgment is therefore made first on the shape parameters in the contour information; if the shape parameters differ, judgment falls back to the contour line length information, finally establishing whether the difference between the contour information of the pedestrian in the third-type laser radar and that of the pedestrian is within the preset value. A sketch of this two-stage comparison follows.
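A hedged sketch of the two-stage contour comparison combined with the color check; the scalar features and all thresholds are assumptions for illustration, since the patent does not fix how contour or color information is encoded.

```python
def contour_matches(shape_ref: float, shape_cand: float,
                    length_ref: float, length_cand: float,
                    shape_tol: float = 0.1, length_tol: float = 0.05) -> bool:
    """Condition 1: compare shape parameters first, then contour line length."""
    if abs(shape_cand - shape_ref) <= shape_tol:
        return True  # shape parameters already agree within the preset value
    # Walking may alter shape parameters, but contour line length barely changes.
    return abs(length_cand - length_ref) <= length_tol

def is_tracked_object(shape_ref: float, shape_cand: float,
                      length_ref: float, length_cand: float,
                      color_ref: float, color_cand: float,
                      color_tol: float = 0.1) -> bool:
    """Conditions 1 and 2 must hold simultaneously (S3500)."""
    return (contour_matches(shape_ref, shape_cand, length_ref, length_cand)
            and abs(color_cand - color_ref) <= color_tol)
```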
S4000: and sending the first related information updated in real time to the pedestrian.
Example 2
Specifically, embodiment 2 is the pedestrian-side counterpart of embodiment 1. Unless otherwise indicated, the definitions of terms in embodiment 2 follow the previous embodiments, and the details of each step, the technical effects, the technical problems solved, and other related information can be found in embodiment 1. The options and preferences of embodiment 1 apply to embodiment 2 insofar as they do not violate the design objectives and physical laws.
By way of example, as shown in fig. 7, embodiment 2 provides a pedestrian navigation method, including:
T100: the pedestrian sends authentication information to the sensor, the authentication information being used to determine which sensor's scanning range the pedestrian is in; the sensor comprises a plurality of laser radars;
T200: the pedestrian receives first related information, defined as the combination of the position information of the pedestrian at the current moment and map information, where the position information of the pedestrian at the current moment is obtained based on the laser radar's perception of the pedestrian;
T300: the pedestrian receives the first related information updated in real time.
Example 3
Specifically, embodiment 3 is a specific embodiment based on embodiment 1 that takes the underground passage described above as the use environment. Unless otherwise indicated, the definitions of terms in embodiment 3 follow the previous embodiments, and the details of each step, the technical effects, the technical problems solved, and other related information can be found in embodiment 1. The options and preferences of embodiment 1 apply to embodiment 3 insofar as they do not violate the design objectives and physical laws.
Embodiment 3 provides a pedestrian navigation method suitable for an underground passage. Because the passage lies below ground, the building's shielding means a pedestrian generally cannot receive GPS signals there and has only a cellular network signal, so high-precision navigation is impossible once the GPS signal is lost. Moreover, entering the passage may involve stairs, elevators, and other transitions, so a pedestrian's originally clear sense of direction gradually blurs. Once inside, unable to determine the relative direction of the target position, the pedestrian cannot find the correct walking path by sense of direction alone.
For this reason, in embodiment 3 every entrance of the passage must be within the scanning range of a laser radar, and a plurality of laser radars are installed inside the underground passage. When a pedestrian enters an entrance, their mobile phone scans the identification code at that entrance; the code tells the background which specific laser radar the pedestrian is currently at. After the background associates the pedestrian with the laser radar, it must determine which object in that laser radar's point cloud corresponds to the pedestrian. The pedestrian can view the current laser radar's point cloud of the scanning area on the mobile phone interface; the point cloud yields several candidate objects, the pedestrian selects one, and the background is thereby informed which point cloud object the pedestrian is matched to. The laser radar then tracks this object continuously, takes the object's position as the pedestrian's position, and feeds it back to the pedestrian's mobile phone.
The pedestrian can then proceed into the underground passage. If the combined scanning range of the laser radars covers all areas of the passage, the pedestrian can be tracked in real time directly by each laser radar. If it cannot cover all areas, screening and judgment based on the point cloud data and so on are needed; reference may be made to the foregoing embodiments, which are not repeated here. As the pedestrian walks in the passage, the real-time tracking by the laser radars keeps updating the pedestrian's position in the passage and delivering it to the pedestrian, so that even with only a cellular network signal and no GPS signal, the pedestrian obtains high-precision position information.
The pedestrian then has two navigation options on the mobile phone. The first combines the pedestrian's current position with the environment of the underground passage itself, so only the pedestrian's relative position changes within the passage need updating; this resembles existing surface traffic navigation and suits passages with many entrances and exits and a very large extent. The second vertically projects the pedestrian's current position onto the ground and combines it with the ground environment; although the pedestrian is underground, their position on the ground can be confirmed at any time, which suits the case where the pedestrian needs to reach a designated ground location through the passage (for example, a certain shop on the left or right of a certain road). Of course, for a particularly large and complex underground passage, the pedestrian can alternate between these two combination modes to quickly determine a walking path and avoid getting lost.
In embodiment 3, the pedestrian's position information is determined by the laser radars and sent to the pedestrian; by combining it with different map information, the pedestrian can quickly determine a walking path even without GPS signals or a sense of direction, achieving high-precision navigation.
Example 4
Embodiment 4 also provides an electronic device including one or more processors; and a memory associated with the one or more processors, the memory for storing program instructions that, when read and executed by the one or more processors, perform the pedestrian navigation method described above.
The processor may be an integrated circuit chip with signal processing capabilities. The processor may be a general-purpose processor, including at least one of a central processing unit (CPU), a graphics processing unit (GPU), and a network processor (NP), or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like that can implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
The memory may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM). The memory is used for storing a computer program, and the processor executes the computer program accordingly after receiving an execution instruction.
Example 5
Embodiment 5 also provides a readable storage medium storing the computer program used by the above-described electronic device.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. Each block in a flowchart or block diagram may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations the functions noted in a block may occur out of the order noted in the figures: two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules or units in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely specific embodiments of the present application, but the scope of protection of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application are intended to be covered by the scope of protection of the present application.

Claims (10)

1. A pedestrian navigation method, comprising:
after receiving authentication information, obtaining the position information of the pedestrian at the current moment based on a sensor, wherein the sensor comprises a plurality of laser radars;
combining the position information of the pedestrian at the current moment with map information, the combination being defined as first related information;
updating the position information of the pedestrian in the first related information in real time based on the sensor;
sending the first related information updated in real time to the pedestrian;
wherein obtaining the position information of the pedestrian at the current moment based on the sensor after receiving the authentication information comprises:
receiving an activation signal from the pedestrian, and determining which sensor's scanning range the pedestrian is in;
receiving action information of the pedestrian, and determining the pedestrian's object in the sensor;
obtaining the position information of the pedestrian at the current moment based on the pedestrian's object in the sensor;
or comprises:
receiving request information, sent by the pedestrian, indicating that matching is needed;
sending a corresponding specific action signal to the pedestrian based on the request information;
determining, based on the perception data of the sensors, which sensor's scanning range the pedestrian is in, and determining the pedestrian's object in that sensor;
and obtaining the position information of the pedestrian at the current moment based on the pedestrian's object in the sensor.
2. The pedestrian navigation method of claim 1, wherein receiving the action information of the pedestrian and determining the pedestrian's object in the sensor comprises either of the following:
sending a corresponding specific action signal to the pedestrian based on the pedestrian's activation signal, and determining the pedestrian's object in the sensor based on the sensor's perception data;
or,
determining the pedestrian's object in the sensor based on the pedestrian's selection within the sensor's perception data.
3. The pedestrian navigation method according to claim 1 or 2, wherein the sensor further comprises a plurality of cameras, and updating the position information of the pedestrian in the first related information in real time based on the sensor comprises:
defining the laser radar of the acquisition area in which the pedestrian is currently located as the reference laser radar, and obtaining parameter information of the pedestrian based on the point cloud data of the reference laser radar, wherein the parameter information comprises contour information and motion state parameters;
defining the camera of the acquisition area in which the pedestrian is located as the reference camera, and obtaining color information of the pedestrian based on the image data of the reference camera;
if the pedestrian leaves the acquisition area of the reference laser radar, defining the laser radars adjacent to the reference laser radar as first-type laser radars;
screening the first-type laser radars based on the motion state parameters of the pedestrian, the screening result being defined as third-type laser radars;
and obtaining the tracked object of the pedestrian based on the contour information and the color information of the pedestrians in the third-type laser radars.
4. The pedestrian navigation method according to claim 3, wherein obtaining the tracked object of the pedestrian based on the contour information and the color information of the pedestrians in the third-type laser radars comprises:
if the contour information and the color information of a pedestrian in a third-type laser radar satisfy the following two conditions at the same time, defining that pedestrian in the third-type laser radar as the tracked object of the pedestrian:
the difference between the contour information of the pedestrian in the third-type laser radar and the contour information of the pedestrian is within a preset value;
and the difference between the color information of the pedestrian in the third-type laser radar and the color information of the pedestrian is within a preset value.
5. The pedestrian navigation method according to claim 4, wherein judging whether the difference between the contour information of the pedestrian in the third-type laser radar and the contour information of the pedestrian is within the preset value comprises:
the contour information comprises shape information and contour line length information; if the difference between the shape information of the pedestrian in the third-type laser radar and the shape information of the pedestrian is within a preset value, the difference between their contour information is determined to be within the preset value;
otherwise, if the difference between the contour line length information of the pedestrian in the third-type laser radar and the contour line length information of the pedestrian is within a preset value, the difference between their contour information is determined to be within the preset value.
6. The pedestrian navigation method according to claim 3, wherein screening the first-type laser radars based on the motion state parameters of the pedestrian, the screening result being defined as third-type laser radars, comprises:
the motion state parameters comprise heading angle information and speed information;
screening the first-type laser radars based on the heading angle information, the screening result being defined as second-type laser radars;
and screening the second-type laser radars based on the distance between the reference laser radar and the second-type laser radars, in combination with the speed information of the pedestrian, the screening result being defined as third-type laser radars.
7. The pedestrian navigation method according to claim 6, wherein screening the second-type laser radars based on the distance between the reference laser radar and the second-type laser radars, in combination with the speed information of the pedestrian, the screening result being defined as third-type laser radars, comprises:
based on the distance between the reference laser radar and a second-type laser radar, in combination with the scanning ranges of the two, obtaining the distance across the blind area between them, defined as the blind area distance;
based on the speed at which the pedestrian leaves the acquisition area of the reference laser radar, in combination with the blind area distance, obtaining the time the pedestrian needs to cross the blind area between the reference laser radar and the second-type laser radar, defined as the blind area time;
obtaining the number of frames after which the pedestrian should enter the acquisition area of the second-type laser radar, based on the blind area time and the frame rate of the second-type laser radar, defined as the blind area frame number;
and screening the second-type laser radars based on the blind area frame number, the screening result being defined as third-type laser radars.
8. A pedestrian navigation method, comprising:
the pedestrian sends authentication information to the sensor, the authentication information being used to determine which sensor's scanning range the pedestrian is in; the sensor comprises a plurality of laser radars;
the pedestrian receives first related information, defined as the combination of the position information of the pedestrian at the current moment and map information, wherein the position information of the pedestrian at the current moment is obtained based on the laser radar's perception of the pedestrian;
the pedestrian receives the first related information updated in real time based on the sensor;
wherein the pedestrian sending the authentication information to the sensor comprises:
the pedestrian sends an activation signal, the activation signal being used to determine which sensor's scanning range the pedestrian is in;
or comprises:
the pedestrian sends request information indicating that matching is needed;
the pedestrian receives a corresponding specific action signal based on the request information;
and the pedestrian makes the corresponding specific action towards the sensor.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
10. An electronic device, comprising:
one or more processors;
and a memory associated with the one or more processors, the memory storing program instructions which, when read and executed by the one or more processors, perform the steps of the method of any one of claims 1 to 8.
CN202311827987.XA 2023-12-28 2023-12-28 Pedestrian navigation method, computer-readable storage medium and electronic equipment Active CN117471484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311827987.XA CN117471484B (en) 2023-12-28 2023-12-28 Pedestrian navigation method, computer-readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN117471484A (en) 2024-01-30
CN117471484B (en) 2024-03-05

Family

ID=89627842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311827987.XA Active CN117471484B (en) 2023-12-28 2023-12-28 Pedestrian navigation method, computer-readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117471484B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112540383A (en) * 2019-09-17 2021-03-23 隆博科技(常熟)有限公司 Efficient following method based on laser human body detection
CN113267773A (en) * 2021-04-14 2021-08-17 北京航空航天大学 Millimeter wave radar-based accurate detection and accurate positioning method for indoor personnel
EP3913534A1 (en) * 2020-05-22 2021-11-24 Tata Consultancy Services Limited System and method for real-time radar-based action recognition using spiking neural network(snn)
DE102020210380A1 (en) * 2020-08-14 2022-02-17 Conti Temic Microelectronic Gmbh Method for determining a movement of an object
CN114492535A (en) * 2022-02-09 2022-05-13 上海芯物科技有限公司 Action recognition method, device, equipment and storage medium
CN115905945A (en) * 2022-11-15 2023-04-04 深圳锐越微技术有限公司 Pedestrian action recognition method, device, equipment and storage medium
CN117036868A (en) * 2023-10-08 2023-11-10 之江实验室 Training method and device of human body perception model, medium and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11061406B2 (en) * 2018-10-22 2021-07-13 Waymo Llc Object action classification for autonomous vehicles

Similar Documents

Publication Publication Date Title
EP3570061B1 (en) Drone localization
EP3343172B1 (en) Creation and use of enhanced maps
KR102128851B1 (en) Method and system for determining global location of first landmark
EP4027167A1 (en) Sensor calibration method and apparatus
CN110945320B (en) Vehicle positioning method and system
EP4206609A1 (en) Assisted driving reminding method and apparatus, map assisted driving reminding method and apparatus, and map
US11866167B2 (en) Method and algorithm for flight, movement, autonomy, in GPS, communication, degraded, denied, obstructed non optimal environment
KR102480000B1 (en) Electronic apparatus, route guidance method of electronic apparatus, computer program and computer readable recording medium
KR20210026918A (en) Navigation system and method using drone
JP6437174B1 (en) Mobile device, map management device, and positioning system
CN114620072A (en) Vehicle control method, device, storage medium, electronic device and vehicle
CN109696173A (en) A kind of car body air navigation aid and device
CN114829971A (en) Laser radar calibration method and device and storage medium
JP2020030200A (en) System and method for locating vehicle using accuracy specification
CN112985419B (en) Indoor navigation method and device, computer equipment and storage medium
CN117471484B (en) Pedestrian navigation method, computer-readable storage medium and electronic equipment
CN113739784A (en) Positioning method, user equipment, storage medium and electronic equipment
SE539099C2 (en) Method and control unit for building a database and for predicting a route
US11776392B2 (en) Method for assigning ego vehicle to a lane
CN113701738A (en) Vehicle positioning method and device
CN117496756B (en) Garage management method and system, computer readable storage medium and electronic device
US20230194301A1 (en) High fidelity anchor points for real-time mapping with mobile devices
WO2023218913A1 (en) Control method, apparatus, device and storage medium for automatic driving vehicle
US9880001B2 (en) Method and system for multi-layer positioning system
US20240221499A1 (en) Method and Apparatus for Obtaining Traffic Information, and Storage Medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant