CN114636422A - Positioning and navigation method for information machine room scene - Google Patents

Positioning and navigation method for information machine room scene

Info

Publication number
CN114636422A
CN114636422A
Authority
CN
China
Prior art keywords
information
positioning
inspection personnel
personnel
target object
Prior art date
Legal status
Pending
Application number
CN202111420185.8A
Other languages
Chinese (zh)
Inventor
董贇
孟椿智
黄承速
陈剑皓
陈柏龄
潘俊冰
王海
黄潜
孟春辰
黎俐芬
Current Assignee
Guangxi Power Grid Co Ltd
Original Assignee
Guangxi Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangxi Power Grid Co Ltd filed Critical Guangxi Power Grid Co Ltd
Priority to CN202111420185.8A priority Critical patent/CN114636422A/en
Publication of CN114636422A publication Critical patent/CN114636422A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The invention relates to the technical field of indoor navigation and discloses a positioning and navigation method for information machine room scenes. The system comprises a background server and AR glasses worn by inspection personnel; the background server stores a navigation map and spatial information of the information machine room. A preset travelling route for the inspection personnel is set according to the navigation map; as the inspection personnel move, real-time image information in front of them is collected through the AR glasses; their current real-time position is calculated by an image positioning algorithm; and if their position deviates from the preset travelling route, they are prompted. The method can guide inspection work in the information machine room and help inspection personnel with positioning and navigation; it is highly accurate and requires no positioning equipment to be deployed in the scene.

Description

Positioning and navigation method for information machine room scene
Technical Field
The invention relates to the technical field of indoor navigation, in particular to a positioning navigation method for an information machine room scene.
Background
The indoor navigation technology is developed to facilitate the user of the mobile terminal to quickly find the target position in the building. Indoor navigation is to let users in buildings still determine their own position and find the desired place by using accurate positioning function.
Common positioning technologies include ZigBee technology, radio frequency identification technology (RFID), ultra wideband technology (UWB), WiFi technology, geomagnetism, inertial sensors, visible light, and the like.
Generally, a positioning method is divided into 3 steps: 1) physical measurement, collecting the data needed for positioning according to the chosen positioning technology; 2) data preprocessing, applying noise reduction, dimensionality reduction and other processing to the acquired data so that it better meets the algorithm's requirements; 3) algorithm selection, determining the calculation rule for the position information and finally performing position estimation or tracking.
Chinese patent CN113340294A (published September 3, 2021) discloses an AR indoor map navigation method and system fused with landmarks. The system comprises four parts: indoor map construction, Bluetooth positioning, path planning, and AR navigation fused with landmarks. A ranging model is constructed from the RSSI radio signal attenuation equation, and the final positioning result is obtained with a triangular weighted centroid positioning algorithm and Kalman filtering. This positioning and navigation system achieves good accuracy in an open environment without electromagnetic interference and can meet navigation requirements, but it has several problems in practice: 1) Bluetooth positioning requires deploying Bluetooth beacons, so a Bluetooth network must be built, and the network construction and maintenance costs are high; 2) Bluetooth positioning accuracy is low, depends on beacon density, and is only about 1-5 meters.
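For context, the RSSI ranging model referred to above is conventionally the log-distance path-loss relation d = 10^((P0 − RSSI)/(10n)). A minimal sketch follows; the reference power `p0` (RSSI at 1 m) and the attenuation exponent `n` are illustrative values, not taken from the cited patent:

```python
def rssi_to_distance(rssi, p0=-59.0, n=2.0):
    """Log-distance path-loss model: estimated distance in metres.

    p0: RSSI measured at 1 m from the beacon (illustrative value).
    n:  environment attenuation exponent (~2 in free space).
    """
    return 10 ** ((p0 - rssi) / (10 * n))
```

The strong environment dependence of `n` is one reason the accuracy quoted above is only on the order of metres.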
Chinese patent CN113188545A (published July 30, 2021) discloses an off-line mobile-terminal AR indoor navigation method and system that positions via an off-line GIS. The GIS systems in common use are mainly vector, grid, or vector-grid hybrid systems; a hybrid system actually stores the two kinds of data separately and adopts different data forms for different tasks. Vector structures have the disadvantage that processing positional relationships is time-consuming, and they lack the ability to integrate directly with DEM and RS data. Grid structures suffer from low resolution, poor precision, difficulty in establishing topological relations between ground objects, difficulty in operating on a single target, and large storage requirements.
Traditional indoor positioning technologies require positioning equipment to be deployed in the scene, and occlusion in complex scenes causes large positioning errors. With the continuing informatization of the Chinese power grid, wearable devices have proven effective in helping inspection personnel complete their work. Meanwhile, machine learning has seen increasingly wide industrial application in recent years: since the RCNN object detection algorithm was proposed, a series of successor object detection algorithms have appeared, and their performance can meet the requirements of current industrial scenes.
Disclosure of Invention
The invention aims to provide a positioning and navigation method for information machine room scenes that can guide the inspection work of an information machine room and help inspection personnel with positioning and navigation, with high accuracy and no need to deploy positioning equipment in the scene.
To achieve this aim, the positioning and navigation method for information machine room scenes designed by the invention is provided with a background server and AR glasses for use by inspection personnel, the background server storing a navigation map and machine room spatial information;
setting a preset travelling route of the inspection personnel according to the navigation map;
when the inspection personnel move, acquiring real-time image information in front of the inspection personnel through the AR glasses;
calculating the current real-time position of the inspection personnel through an image positioning algorithm;
and if the position of the inspection personnel deviates from the preset travelling route, prompting the inspection personnel.
Preferably, the current travelling direction of the inspection personnel is calculated through a motion prediction algorithm, and the inspection personnel are prompted if their travelling direction deviates from that of the preset travelling route.
Preferably, the image positioning algorithm comprises the following steps:
A) performing digital image preprocessing on the real-time image information collected by the AR glasses;
B) detecting a plurality of specific target objects of the information machine room scene in the image information preprocessed in step A);
C) detecting feature points on the specific target objects detected in step B) and solving for the feature points of the specific target objects;
D) performing monocular camera ranging on the feature points of the specific target objects obtained in step C) and calculating the distances from the inspection personnel to the specific target objects;
E) reading the historical motion track data of the inspection personnel from the background server;
F) applying Kalman filtering to the historical coordinate data in the historical motion track data obtained in step E) and solving for an estimated coordinate of the current position of the inspection personnel;
G) resolving the coordinates of the specific target objects from the distances obtained in step D), the estimated coordinate obtained in step F), and the machine room spatial information stored in the background server;
H) resolving the current position of the inspection personnel from the distances obtained in step D) and the specific target object coordinates obtained in step G), and storing the current position information in the background server.
Preferably, in step A), the digital image preprocessing comprises the following steps:
B1) eliminating image noise by bilateral smoothing;
B2) performing distortion correction on the image.
Preferably, in step B), a YOLOv4 network model is used to detect target objects in the information machine room scene, and the specific detection target objects are searched for in the preprocessed image information according to the predetermined inspection route.
Preferably, the input image is first cropped to increase the proportion of effective area; all cropped images are then detected, with Class[0] of the defined detection target object list treated as background; after detection, for the multiple detected targets, the target candidate box of each class and its score are selected and stored per class.
Preferably, after the accurate distances between the camera lens of the inspection personnel's AR glasses and three target objects are calculated, the coordinates of the inspection personnel's location are solved using the three-point distances.
Preferably, the motion prediction algorithm predicts the motion track of the inspection personnel using Kalman filtering and historical coordinate information. Kalman filtering is divided into two steps, prediction and correction: prediction estimates the current state from the state at the previous moment, and correction combines the estimated state and the observed state at the current moment to obtain the optimal state estimate.
Compared with the prior art, the invention has the following advantages:
1. no positioning equipment needs to be deployed in the scene, giving strong universality;
2. accuracy is high and is not affected by occlusion from obstacles;
3. the amount of computation is small, meeting the requirements of real-time positioning.
Drawings
FIG. 1 is a flow chart of an image localization algorithm of the present invention;
FIG. 2 is a schematic diagram of three-point location calculation according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
The invention relates to a positioning and navigation method for information machine room scenes. The system is provided with a background server and AR glasses for use by inspection personnel; the background server stores a navigation map and spatial information of the information machine room;
setting a preset travelling route for the inspection personnel according to the navigation map;
when the inspection personnel move, collecting real-time image information in front of them through the AR glasses;
calculating the current real-time position of the inspection personnel through an image positioning algorithm;
and prompting the inspection personnel if their position deviates from the preset travelling route.
In addition, the current travelling direction of the inspection personnel can be calculated through a motion prediction algorithm, and the inspection personnel are prompted if their travelling direction deviates from that of the preset travelling route.
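The deviation check just described could be realized by comparing a heading estimated from two consecutive position fixes against the route's bearing. A minimal sketch, with hypothetical function names and an illustrative 30-degree tolerance (neither appears in the patent):

```python
import math

def heading(p_prev, p_curr):
    """Bearing of travel (radians) from two consecutive position fixes."""
    return math.atan2(p_curr[1] - p_prev[1], p_curr[0] - p_prev[0])

def deviates(route_heading, p_prev, p_curr, tol_deg=30.0):
    """True if the walker's heading differs from the route heading by more
    than tol_deg degrees (angle difference wrapped into (-pi, pi])."""
    diff = heading(p_prev, p_curr) - route_heading
    diff = (diff + math.pi) % (2 * math.pi) - math.pi
    return abs(math.degrees(diff)) > tol_deg
```

For example, walking east along an east-bound route segment does not trigger a prompt, while turning north does.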
Specifically, as shown in fig. 1, the image positioning algorithm comprises the following steps:
A) performing digital image preprocessing on the real-time image information collected by the AR glasses;
B) detecting a plurality of specific target objects of the information machine room scene in the image information preprocessed in step A);
C) detecting feature points on the specific target objects detected in step B) and solving for the feature points of the specific target objects;
D) performing monocular camera ranging on the feature points of the specific target objects obtained in step C) and calculating the distances from the inspection personnel to the specific target objects;
E) reading the historical motion track data of the inspection personnel from the background server;
F) applying Kalman filtering to the historical coordinate data in the historical motion track data obtained in step E) and solving for an estimated coordinate of the current position of the inspection personnel;
G) resolving the coordinates of the specific target objects from the distances obtained in step D), the estimated coordinate obtained in step F), and the machine room spatial information stored in the background server;
H) resolving the current position of the inspection personnel from the distances obtained in step D) and the specific target object coordinates obtained in step G), and storing the current position information in the background server.
In step A), the digital image preprocessing comprises the following steps:
B1) eliminating image noise by bilateral smoothing;
B2) performing distortion correction on the image.
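In practice, step B1) would be a library call (for example OpenCV's `cv2.bilateralFilter`, with `cv2.undistort` for step B2)). To show the principle, here is a toy 1-D bilateral filter: it averages neighbours weighted by both spatial distance and intensity difference, so noise in flat regions is smoothed while edges are preserved. It is an illustration only, not the patent's implementation:

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Toy 1-D bilateral filter.

    sigma_s controls the spatial Gaussian, sigma_r the intensity (range)
    Gaussian; a neighbour across a sharp edge gets near-zero weight, which
    is what makes the filter edge-preserving.
    """
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

Applied to a step edge such as `[0, 0, 0, 100, 100, 100]`, the edge survives essentially intact, unlike under a plain Gaussian blur.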
In step B), a YOLOv4 network model is used to detect target objects in the information machine room scene, and the specific detection target objects are searched for in the preprocessed image information according to the predetermined inspection route. Specifically, the input image is first cropped to increase the proportion of effective area; all cropped images are then detected, with Class[0] of the defined detection target object list treated as background; after detection, for the multiple detected targets, the target candidate box of each class and its score are selected and stored per class.
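The per-class selection just described (skip the background class, keep each class's best-scoring candidate box) can be sketched as follows; the `(class_id, score, box)` tuple layout of `detections` is an assumption for illustration, not the YOLOv4 output format:

```python
def best_boxes_per_class(detections, skip_class=0):
    """Keep, for each class, the candidate box with the highest score.

    detections: list of (class_id, score, box) tuples, e.g. raw network
    output after confidence thresholding. Class `skip_class` (Class[0] of
    the target list) is treated as background and ignored.
    """
    best = {}
    for cls, score, box in detections:
        if cls == skip_class:
            continue
        if cls not in best or score > best[cls][0]:
            best[cls] = (score, box)
    return best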
In this embodiment, after the accurate distances between the camera lens of the inspection personnel's AR glasses and three target objects are calculated, the coordinates of the inspection personnel's location can be solved using the three-point distances.
According to the ranging principle, the target area is obtained by detecting a specific target, the four end points of the specific target are obtained by feature point detection within the target area, and the distance from the inspection personnel to the specific target is calculated from the specific target's prior information and detection information.
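Monocular ranging of this kind typically rests on the pinhole-camera similar-triangles relation distance = f · W / w, where f is the focal length in pixels, W the target's known real width (the machine room's prior information), and w the width in pixels between the detected end points. A sketch under that assumption; function and parameter names are illustrative:

```python
def mono_distance(focal_px, real_width_m, pixel_width):
    """Pinhole-model monocular range estimate in metres.

    focal_px:     camera focal length in pixels (from calibration)
    real_width_m: known physical width of the target (prior information)
    pixel_width:  width of the target between detected end points, in pixels
    """
    return focal_px * real_width_m / pixel_width
```

For example, a 0.6 m wide cabinet panel spanning 120 pixels under an 800-pixel focal length comes out at 4 m.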
The target area is first cropped and converted to a 39 × 39 grayscale image as the input to a cascaded DCNN network. The first part of the network consists of three convolutional neural networks, F1, EN1, and NM1: the input to F1 is the entire target area, the input to EN1 is the left half of the target area, and the input to NM1 is the right half. The feature points to be detected are the end points of the four corners of the target. The numbers of feature maps in the first to fourth layers of the F1 network are 20, 40, 60, and 80 respectively, and it finally outputs an 8-dimensional feature vector. The EN1 network detects the two feature points of the left half; the cropped left-half image is 15 × 39, the numbers of feature maps in its first to fourth layers are likewise 20, 40, 60, and 80, and it finally outputs a 4-dimensional feature vector. The NM1 network detects the feature points of the right half and has the same structure as the EN1 network. The outputs of the networks are then averaged to obtain each feature point. The second part consists of 8 CNN networks, each with a 15 × 15 input, 20 feature maps in the first layer, 40 in the second, a 60-dimensional fully connected layer, and a 2-dimensional output; the final result is the average over the 8 networks. The last part also consists of 8 CNN networks: the image is further cropped as input according to the result of the previous stage, and each feature point is recalculated by a pair of CNN networks.
The predicted position of the inspection personnel is then calculated using Kalman filtering and their position at the previous moment. It is assumed here that the moving direction and speed of the personnel are unchanged between two adjacent frames, so their motion state can be defined as $(x, y, v_x, v_y)$, where $x$ and $y$ are the personnel's coordinates in the ground coordinate system and $v_x$, $v_y$ are the velocity components in the $x$ and $y$ directions. Assuming the four variables are random and Gaussian-distributed, the position at the current moment is related to the position and velocity at the previous moment, and the relationship between position and velocity is represented by a covariance matrix.
The prediction step is:

$$\hat{x}_k = F_k \hat{x}_{k-1} + B_k a$$
$$P_k = F_k P_{k-1} F_k^{T}$$
$$v_k = v_{k-1} + a\Delta t$$

where $\hat{x}_k$ is the estimate of the current state, $v_k$ is the velocity at time $k$, $P_k$ is the state covariance matrix at time $k$, $F_k$ is the state transition matrix at time $k$, $a$ is the acceleration, and $B_k$ is the control matrix at time $k$. The update equations for the Kalman gain are:

$$K' = P_k H_k^{T}\left(H_k P_k H_k^{T} + R_k\right)^{-1}$$
$$\hat{x}'_k = \hat{x}_k + K'\left(z_k - H_k \hat{x}_k\right)$$
$$P'_k = P_k - K' H_k P_k$$

where $H_k$ is the transformation matrix from state to measurement at time $k$, $z_k$ is the noise-free measurement, $K'$ is the Kalman gain, $R_k$ is the measurement noise covariance, $\hat{x}'_k$ is the new optimal estimate at time $k$, and $P'_k$ is the new state covariance matrix at time $k$.
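The predict/update equations above can be sketched per axis with a constant-velocity model (state $(x, v_x)$, position-only measurement, $H = [1, 0]$). This is a minimal pure-Python illustration, not the patent's implementation; the process and measurement noise values `q` and `r` are illustrative:

```python
def kf_predict(x, v, P, dt, q=1e-3):
    """Predict step: x' = x + v*dt, v' = v  (F = [[1, dt], [0, 1]]).
    P is the 2x2 covariance stored as a flat tuple (p00, p01, p10, p11);
    the update below is P' = F P F^T plus a small process noise q."""
    x = x + v * dt
    p00, p01, p10, p11 = P
    P = (p00 + dt * (p01 + p10) + dt * dt * p11 + q,
         p01 + dt * p11,
         p10 + dt * p11,
         p11 + q)
    return x, v, P

def kf_update(x, v, P, z, r=0.25):
    """Correct step with a position-only measurement z (H = [1, 0])."""
    p00, p01, p10, p11 = P
    s = p00 + r                    # innovation covariance H P H^T + R
    k0, k1 = p00 / s, p10 / s      # Kalman gain K' = P H^T / s
    y = z - x                      # innovation z_k - H x_k
    x, v = x + k0 * y, v + k1 * y
    P = (p00 - k0 * p00, p01 - k0 * p01,   # P' = (I - K' H) P
         p10 - k1 * p00, p11 - k1 * p01)
    return x, v, P
```

Feeding it position fixes of a walker moving at a steady 1 m/s lets the filter infer the unobserved velocity; the full 2-D state $(x, y, v_x, v_y)$ of the text is just this filter applied to each axis.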
The positioning targets are then matched. Using the predicted coordinates of the inspection personnel and their distances to the specific targets, the detected specific targets are matched against the pre-stored specific targets. The set of distances from the personnel to the specific targets obtained by monocular ranging is $V = \{d_1, d_2, d_3, \dots\}$, where each $d_i$ is the result of normalizing the coordinates onto a common horizontal plane, and the set of distances calculated from the personnel's predicted coordinates and the stored target coordinate information is $E = \{D_1, D_2, D_3, \dots\}$. A cost matrix $M$ with $m_{ij} = |d_i - D_j|$ is constructed, and the Hungarian algorithm is then used to find the optimal combined assignment.
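The assignment step can be sketched as follows. For the small target sets involved here, a brute-force search over permutations finds the same optimum the Hungarian algorithm would; a real implementation would use e.g. `scipy.optimize.linear_sum_assignment` for larger sets. The function name and return shape are illustrative:

```python
from itertools import permutations

def match_targets(measured, predicted):
    """Minimise total cost m_ij = |d_i - D_j| over one-to-one assignments.

    measured:  distances d_i from monocular ranging
    predicted: distances D_j computed from the predicted position and the
               stored target coordinates
    Returns (assignment as list of (i, j) pairs, total cost).
    """
    n = len(measured)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(predicted)), n):
        cost = sum(abs(measured[i] - predicted[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(enumerate(best_perm)), best_cost
```

With two measured distances close to the stored ones in swapped order, the matcher recovers the crossed pairing.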
At each positioning, three targets $T_1$, $T_2$, $T_3$ are randomly selected from the image, with calculated distances to the AR glasses of $D_1$, $D_2$, $D_3$ respectively. The circle centred at $T_1$ with radius $D_1$ and the circle centred at $T_2$ with radius $D_2$ intersect at points $A$ and $B$; connecting $T_1T_2$, $AT_1$, $AT_2$, and $AB$, line $AB$ intersects $T_1T_2$ at point $C$. The circle centres and radii are known, and the distance between $T_1$ and $T_2$ is known information of the machine room; the three-point positioning calculation is shown in fig. 2. Let

$$d = \sqrt{(x_{T_2}-x_{T_1})^2 + (y_{T_2}-y_{T_1})^2}$$

denote the distance between $T_1$ and $T_2$. Solving similarly for the other quantities gives

$$|T_1C| = \frac{D_1^2 - D_2^2 + d^2}{2d}, \qquad |AC| = \sqrt{D_1^2 - |T_1C|^2}.$$

The coordinates of the intersection points are:

$$x_{A,B} = x_{T_1} + \frac{|T_1C|}{d}\,(x_{T_2}-x_{T_1}) \mp \frac{|AC|}{d}\,(y_{T_2}-y_{T_1})$$
$$y_{A,B} = y_{T_1} + \frac{|T_1C|}{d}\,(y_{T_2}-y_{T_1}) \pm \frac{|AC|}{d}\,(x_{T_2}-x_{T_1})$$

where $(x_{T_1}, y_{T_1})$ and $(x_{T_2}, y_{T_2})$ are the horizontal and vertical coordinates of targets $T_1$ and $T_2$. The three coordinates obtained from the three pairs within the set of three targets are averaged to give the positioning result for that set.
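The two-circle intersection used in the three-point positioning can be computed directly. A sketch under the same notation ($T_1$, $T_2$ as centres, $D_1$, $D_2$ as radii), returning both candidate points $A$ and $B$; degenerate cases (non-intersecting circles) are clamped rather than handled fully:

```python
import math

def circle_intersections(t1, d1, t2, d2):
    """Intersection points of circles centred at t1, t2 with radii d1, d2.

    a = |T1C| is the distance from T1 to the chord AB along T1->T2;
    h = |AC| is the half-chord perpendicular offset.
    """
    dx, dy = t2[0] - t1[0], t2[1] - t1[1]
    d = math.hypot(dx, dy)
    a = (d1 ** 2 - d2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(d1 ** 2 - a ** 2, 0.0))   # clamp numerical noise
    cx, cy = t1[0] + a * dx / d, t1[1] + a * dy / d
    return ((cx - h * dy / d, cy + h * dx / d),
            (cx + h * dy / d, cy - h * dx / d))
```

Each of the three target pairs yields such an intersection near the walker; averaging the three picked points gives the positioning result described above.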
In this embodiment, a model capable of identifying various targets in the information machine room scene is obtained by training on that scene. Target detection is performed on images captured by the AR glasses and the features used for positioning are extracted; the information machine room is modelled spatially, and the extracted feature sequences used for positioning are mapped into the machine room space to obtain the personnel's position information. Navigation is then performed according to the preset travelling route and the inspection personnel's position, and the positioning result is corrected according to the historical track and real-time information.
The method first performs target detection on specific targets in the current scene, then extracts the feature points of the target regions and calculates the distances from the specific targets to the inspection personnel; combining these with the machine room's prior information yields the coordinates of the specific targets and the distances from the inspection personnel to them; finally, the current position of the inspection personnel is calculated by three-point positioning and Kalman filtering. Experiments show that this visually positioned machine room navigation method achieves higher positioning accuracy than other positioning methods and can meet the real-time positioning requirements of machine room navigation.
The positioning and navigation method for information machine room scenes requires no positioning equipment to be deployed in the scene and is highly universal; its accuracy is high and is not affected by occlusion from obstacles; and its computational load is small, meeting the requirements of real-time positioning.

Claims (8)

1. A positioning and navigation method for information machine room scenes is characterized by comprising the following steps: the system is provided with a background server and AR glasses for inspection personnel to use, wherein the background server stores a navigation map and information room space information;
setting a preset travelling route of the inspection personnel according to the navigation map;
when the inspection personnel move, acquiring real-time image information in front of the inspection personnel through the AR glasses;
calculating the current real-time position of the inspection personnel through an image positioning algorithm;
and if the position of the inspection personnel deviates from the preset travelling route, prompting the inspection personnel.
2. The positioning and navigation method for information machine room scenes according to claim 1, characterized in that: the current travelling direction of the inspection personnel is calculated through a motion prediction algorithm, and the inspection personnel are prompted if their travelling direction deviates from that of the preset travelling route.
3. The positioning and navigation method for information machine room scenes as claimed in claim 1, wherein the image positioning algorithm comprises the following steps:
A) performing digital image preprocessing on the real-time image information collected by the AR glasses;
B) detecting a plurality of specific target objects of the information machine room scene in the image information preprocessed in step A);
C) detecting feature points on the specific target objects detected in step B) and solving for the feature points of the specific target objects;
D) performing monocular camera ranging on the feature points of the specific target objects obtained in step C) and calculating the distances from the inspection personnel to the specific target objects;
E) reading the historical motion track data of the inspection personnel from the background server;
F) applying Kalman filtering to the historical coordinate data in the historical motion track data obtained in step E) and solving for an estimated coordinate of the current position of the inspection personnel;
G) resolving the coordinates of the specific target objects from the distances obtained in step D), the estimated coordinate obtained in step F), and the machine room spatial information stored in the background server;
H) resolving the current position of the inspection personnel from the distances obtained in step D) and the specific target object coordinates obtained in step G), and storing the current position information in the background server.
4. The positioning and navigation method for information room scenes of claim 3, characterized in that: in the step A), the digital image preprocessing comprises the following steps:
B1) the noise of the image is eliminated by bilateral smoothing;
B2) and carrying out distortion correction on the image.
5. The positioning and navigation method for information room scenes of claim 3, characterized in that: in step B), a YOLOv4 network model is used to detect target objects in the information machine room scene, and the specific detection target objects are searched for in the preprocessed image information according to the predetermined inspection route.
6. The positioning and navigation method for information room scenes of claim 5, characterized in that: the input image is first cropped to increase the proportion of effective area; all cropped images are then detected, with Class[0] of the defined detection target object list treated as background; after detection, for the multiple detected targets, the target candidate box of each class and its score are selected and stored per class.
7. The positioning and navigation method for information room scenes of claim 3, characterized in that: in step C), after the accurate distances between the camera lens of the inspection personnel's AR glasses and the three target objects are calculated, the coordinates of the inspection personnel's location are solved using the three-point distances.
8. The positioning and navigation method for information room scenes according to claim 2, characterized in that: the motion prediction algorithm is used for predicting the motion trail of the inspection personnel by using Kalman filtering and historical coordinate information, the Kalman filtering is divided into two steps of prediction and correction, the prediction is to estimate the current time state based on the last time state, and the correction is to synthesize the estimation state and the observation state of the current time to estimate the optimal state.
CN202111420185.8A 2021-11-26 2021-11-26 Positioning and navigation method for information machine room scene Pending CN114636422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111420185.8A CN114636422A (en) 2021-11-26 2021-11-26 Positioning and navigation method for information machine room scene


Publications (1)

Publication Number Publication Date
CN114636422A true CN114636422A (en) 2022-06-17

Family

ID=81946477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111420185.8A Pending CN114636422A (en) 2021-11-26 2021-11-26 Positioning and navigation method for information machine room scene

Country Status (1)

Country Link
CN (1) CN114636422A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117213468A (en) * 2023-11-02 2023-12-12 北京亮亮视野科技有限公司 Method and device for inspecting outside of airplane and electronic equipment
CN117213468B (en) * 2023-11-02 2024-04-05 北京亮亮视野科技有限公司 Method and device for inspecting outside of airplane and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination