CN117671505A - Real-time visual SLAM method based on affine information under dynamic environment - Google Patents


Info

Publication number
CN117671505A
CN117671505A CN202311710990.3A
Authority
CN
China
Prior art keywords
points
real
affine
frame
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311710990.3A
Other languages
Chinese (zh)
Inventor
陶发展
周遥
付主木
宋书中
唐小林
冀保峰
陈远哲
王俊
孙力帆
王楠
张中才
朱龙龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Science and Technology
Original Assignee
Henan University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Science and Technology filed Critical Henan University of Science and Technology
Priority to CN202311710990.3A priority Critical patent/CN117671505A/en
Publication of CN117671505A publication Critical patent/CN117671505A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

A real-time visual SLAM method based on affine information in a dynamic environment comprises the following steps: S1, constructing an affine consistency constraint based on the affine relations among matching points, according to the output characteristics of an image acquisition sensor; S2, analyzing the real-time images acquired by the image acquisition sensor frame by frame, and generating an actual matching point set based on the current frame; S3, detecting outliers in the actual matching point set through the affine consistency constraint to obtain the outlier ratio, and setting a semantic segmentation prior value according to it; S4, if the semantic segmentation prior value corresponding to the previous frame reaches a preset recognition threshold, taking the current frame as a key frame, and extracting dynamic objects from the key frame with a real-time semantic segmentation model; S5, updating the map based on the information of the outliers and the dynamic objects; S6, performing real-time mapping based on the key frame selection result and the dynamic objects. The invention addresses the low positioning speed, low positioning accuracy, and poor mapping quality of prior-art vSLAM in dynamic environments.

Description

Real-time visual SLAM method based on affine information under dynamic environment
Technical Field
The invention relates to the technical field of simultaneous localization and mapping, in particular to a real-time visual SLAM method in a dynamic environment based on affine information.
Background
Simultaneous localization and mapping (SLAM) is a key technology for the unmanned navigation of autonomous robots: it estimates the pose while constructing a 3D dense point cloud map of an unknown environment, and is therefore widely applied in autonomous driving, unmanned aerial vehicles (Unmanned Aerial Vehicle, UAV), augmented reality (Augmented Reality, AR), autonomous underwater vehicles (Autonomous Underwater Vehicles, AUV), and other fields.
Visual SLAM (vSLAM) systems are an important field of SLAM research owing to their single sensor, low cost, and light weight. However, a static environment is a prerequisite for most visual simultaneous localization and mapping systems, and this strong assumption limits the practical application of most existing SLAM systems.
When a moving object enters the camera's field of view, dynamic matching points directly disturb camera localization, and the noise blocks formed by the moving object pollute the constructed map. To ensure the long-term application of vSLAM in real scenes, overcoming the negative effects of moving objects (e.g., pedestrians, vehicles) on positioning accuracy and mapping quality in dynamic environments has become a research focus. In vSLAM systems, this limitation is usually broken either by introducing learning-based methods to segment dynamic objects or by geometry-based methods to sparsely detect dynamic features. However, geometry-based methods lack the ability to construct effective maps, while learning-based methods tend to have poor real-time performance.
Disclosure of Invention
In order to solve the defects of low positioning speed, low positioning precision and poor mapping effect of the vSLAM in the dynamic environment in the prior art, the invention provides a real-time visual SLAM method based on affine information in the dynamic environment.
In order to achieve the above purpose, the invention adopts the following specific scheme: a real-time visual SLAM method in a dynamic environment based on affine information comprises the following steps:
S1, constructing an affine consistency constraint based on the affine relations among matching points, according to the output characteristics of an image acquisition sensor;
S2, analyzing the real-time images acquired by the image acquisition sensor frame by frame, and generating an actual matching point set based on the current frame;
S3, detecting outliers in the actual matching point set through the affine consistency constraint to obtain the outlier ratio, and setting a semantic segmentation prior value according to it;
S4, if the semantic segmentation prior value corresponding to the previous frame reaches a preset recognition threshold, taking the current frame as a key frame, and extracting dynamic objects from the key frame with a real-time semantic segmentation model;
S5, updating the map based on the information of the outliers and the dynamic objects;
S6, performing real-time mapping based on the key frame selection result and the dynamic objects.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: the specific method of S1 comprises the following steps:
s11, converting an output image of an image acquisition sensor into a gray level image, and extracting at least two feature point sets, wherein each feature point set comprises a plurality of feature points;
s12, performing feature matching on all feature points, and obtaining a plurality of basic matching point sets containing a plurality of pairs of matching points by using a rotation consistency constraint condition;
s13, constructing affine consistency constraint conditions according to constraint relations in the basic matching point sets.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: in S12, the specific method for obtaining the basic matching point set includes:
s121, taking the characteristic points as points to be judged, and calculating the pixel distances between the characteristic points and the points to be judged in other characteristic point sets, wherein the calculating method comprises the following steps:
dis = (p_x - q_x)^2 + (p_y - q_y)^2, wherein dis is the squared pixel distance between pixel points p and q, and the subscripts x and y denote the abscissa and ordinate of a pixel, respectively;
s122, screening other characteristic points according to the pixel distance, wherein the specific method comprises the following steps:
dis_points_ini = set{(i, dis_i) | f(dis_i) = 1, i = 1, ..., n}, wherein set() represents a set, n is the number of elements in the set, dis_i is the distance of the other feature point i from the point to be judged, and f() is a screening function with f(dis_i) = 1 if dis_i ≤ ε and f(dis_i) = 0 otherwise;
S123, combining the points to be judged and the screened first U points which are closer to the points to be judged into a basic matching point set;
s124, repeating S121 to S123 to obtain a plurality of basic matching point sets.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: in S123, u=k×n m Wherein N is m K is a scale parameter for the number of pixels from the feature point set.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: the affine consistency constraint is expressed as: when the number N of one-to-one matches in the U pair matching points in the basic matching point set c Satisfy N c When > (U-2), a pair of matching points is judged to be a correct match.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: in S3, the anomaly ratio of the anomaly point is calculated by r=n o /N m Wherein N is o For the number of outliers detected, and the outlier duty cycle is taken as the semantic segmentation prior value.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: in S4, setting a real-time semantic segmentation model as a Yolov5S-seg model, wherein the semantic segmentation method of the Yolov5S-seg model based on affine information comprises the following steps:
where set () represents a set, ini_mask is an initial mask matrix divided by a YOLOv5s-seg model, YOLOv5s-seg () is a real-time inference function of the YOLOv5s-seg model, frame is a current frame of an input, fin_mask is a remaining mask matrix commonly determined via affine information, k is the number of initial divided regions, i.e., the number of objects initially recognized, g () is a function of judging whether the current object is dynamic by combining affine information, and there is
Wherein N is l_o And N l_m The number of outliers and matching points falling in the object to be judged, respectively.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: in S5, the method for updating the map based on the information of the outlier and the dynamic object includes screening the actual matching point set, and removing the outlier and the dynamic object from the map, and the specific method includes:
tracking_matching_points are map points for updating a map, matching_points are actual matching point sets, O A Is an abnormal point in the actual matching point set, D S Is a dynamic object identified, h () represents a filter function of map points, indicating that the latter is deleted from the former and the remainder is retained.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: in S6, when a new key frame is determined, map points are added, and the specific method is as follows:
wherein create indicates that a map point is newly added, and drop indicates that it is not added.
As a further optimization of the real-time visual SLAM method under the affine information-based dynamic environment described above: in S6, the method for real-time mapping is as follows:
mapping_thread(key_frame, depth_frame);
wherein mapping_thread is the mapping function, key_frame is the RGB image of the key frame, and depth_frame is the current depth frame.
The beneficial effects are that: the invention takes a visual sensor as the input device; the sensor may be a monocular camera, a binocular camera, an RGB-D camera, or other common equipment, giving a wide application range. Further, an affine consistency constraint based on the affine principle is designed to detect outliers in the matching point set; by detecting outliers efficiently, the influence of dynamic features on system tracking is reduced, improving the positioning accuracy and robustness of the visual SLAM system in dynamic environments. On this basis, a semantic method based on an outlier prior is designed to improve the real-time performance and tracking accuracy of the system in two respects: first, a prior value for the semantic method is derived from the ratio of outliers in the matching set, so that the semantic method is invoked sparsely, preserving feature points beneficial to system tracking and reducing the time the semantic method consumes; second, the current frame is segmented with a real-time semantic segmentation model to obtain all object masks, which are combined with the outlier information obtained from the affine constraint to build an effective semantic method based on the ratio of outliers to all matching points within each mask region, thereby detecting dynamic objects. Based on the outlier and dynamic-object detection results, an improved map updating method ensures that no dynamic features remain in the map used for long-term tracking, further improving tracking accuracy. Based on the key frame selection result and the semantic segmentation result, a real-time mapping method is designed for the RGB-D sensor, exploiting the ease with which it acquires depth maps, to achieve effective mapping with an RGB-D sensor in a dynamic environment.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a graph of affine relationships between a pair of mismatching points under improved affine consistency constraints;
fig. 3 is a schematic diagram of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 to 3, a real-time visual SLAM method in a dynamic environment based on affine information includes S1 to S6.
S1, constructing affine consistency constraint conditions based on affine relations among matching points according to output characteristics of the image acquisition sensor. The specific method of S1 includes S11 to S13.
S11, converting the output image of the image acquisition sensor into a gray-level image and extracting at least two feature point sets, each comprising a plurality of feature points. Feature extraction may use the ORB (Oriented FAST and Rotated BRIEF) method, which is common in the art and is not described in detail herein.
And S12, performing feature matching on all feature points, and obtaining a plurality of basic matching point sets containing a plurality of pairs of matching points by using a rotation consistency constraint condition. In S12, a specific method of obtaining the basic matching point set includes S121 to S124.
S121, taking the characteristic points as points to be judged, and calculating the pixel distances between the characteristic points and the points to be judged in other characteristic point sets, wherein the calculating method comprises the following steps:
dis = (p_x - q_x)^2 + (p_y - q_y)^2, wherein dis is the squared pixel distance between pixels p and q, and the subscripts x and y denote the abscissa and ordinate of a pixel in the image, respectively.
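As a minimal sketch (the function name is illustrative, not from the patent), the distance measure above can be computed as:

```python
def pixel_dist(p, q):
    """Squared pixel distance dis = (p_x - q_x)^2 + (p_y - q_y)^2.

    The square root is deliberately omitted, as in the patent's formula:
    only the relative ordering of distances matters for screening.
    """
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
```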
S122, screening other characteristic points according to the pixel distance, wherein the specific method comprises the following steps:
dis_points_ini = set{(i, dis_i) | f(dis_i) = 1, i = 1, ..., n}, wherein set() represents a set; dis_points_ini is an initial set used to judge whether a certain feature point is matched correctly, each element of which contains two parameters, the index information of a feature point and its distance from the point to be judged; n is the number of elements in the set; dis_i is the distance of the other feature point i from the point to be judged; and f() is a screening function with f(dis_i) = 1 if dis_i ≤ ε and f(dis_i) = 0 otherwise. In the invention, ε is set to 400, because feature points closer to the point to be judged contribute more than distant ones to improving the effect of the affine consistency constraint.
S123, combining the point to be judged and the screened first U points closest to it into a basic matching point set. Specifically, sort_indices = sort(dis_points_ini, dis, U), wherein sort_indices is the index information of the ordered feature points, used to judge how many surrounding feature points match the point to be judged one-to-one, i.e., satisfy the affine theorem; sort() is a sorting function that returns the index information of the feature points; and the '.' operation accesses a parameter of an object. In S123, U = k × N_m, wherein N_m is the number of points in the feature point set and k is a scale parameter. In this embodiment, k is 0.01.
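The screening and sorting of S121 to S123 can be sketched as follows. The function and parameter names are illustrative, and treating ε as an inclusive threshold on the squared distance is an assumption; the patent only states that f() screens by distance with ε = 400:

```python
def basic_matching_set(point, others, k=0.01, eps=400):
    """Sketch of S121-S123: screen candidate feature points by squared
    pixel distance to the point to be judged (f(): keep dis_i <= eps),
    sort them by distance, and keep the first U = k * N_m nearest points
    together with the point itself as a basic matching point set."""
    n_m = len(others)
    u = max(1, int(k * n_m))  # U = k * N_m; k = 0.01 in the embodiment

    def dis(q):  # squared pixel distance to the point to be judged
        return (point[0] - q[0]) ** 2 + (point[1] - q[1]) ** 2

    screened = sorted((q for q in others if dis(q) <= eps), key=dis)
    return [point] + screened[:u]
```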
S124, repeating S121 to S123 to obtain a plurality of basic matching point sets.
S13, constructing the affine consistency constraint Scope-Affine according to the constraint relation in the basic matching point sets. The affine consistency constraint is expressed as: when the number of one-to-one matches N_c among the U pairs of matching points in the basic matching point set satisfies N_c > (U - 2), the pair of matching points is judged to be a correct match.
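The acceptance rule above reduces to a one-line check (a sketch; the names are illustrative):

```python
def is_correct_match(n_c, u):
    """Affine consistency constraint: a pair of matching points is judged
    a correct match when the number of one-to-one matches N_c among the
    U pairs in its basic matching point set satisfies N_c > U - 2."""
    return n_c > u - 2
```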
S2, analyzing the real-time image acquired by the image acquisition sensor frame by frame, and generating an actual matching point set based on the current frame.
S3, detecting outliers in the actual matching point set through the affine consistency constraint to obtain the outlier ratio, and setting the semantic segmentation prior value according to it. In S3, the outlier ratio is calculated as r = N_o / N_m, wherein N_o is the number of detected outliers, and the outlier ratio is taken as the semantic segmentation prior value. In this way, dynamic features can be detected efficiently, the time spent by the semantic method in the system is saved, the tracking accuracy of the system is improved appropriately, and the features beneficial to system tracking are retained.
S4, if the semantic segmentation prior value corresponding to the previous frame reaches the preset recognition threshold, taking the current frame as a key frame and extracting dynamic objects from it with the real-time semantic segmentation model. Specifically, the semantic segmentation prior value is first judged as follows:
Scope-Affine and Semantic represent the Scope-Affine method and the semantic method, respectively: when the outlier ratio r ≥ ζ, the semantic method acts on the current frame; when r < ζ, only Scope-Affine acts on the system tracking process. As ζ increases, more frames rely solely on the outliers detected by Scope-Affine, so Scope-Affine is required to detect outliers better. Enlarging the constraint range of Scope-Affine, i.e., increasing ε or U, is an effective remedy; however, a larger constraint range forces dynamic features into the detection process of static features, so more static features are also detected as outliers. For comprehensive consideration, ζ is therefore set to 0.6.
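The gating described above can be sketched as follows; the function name is illustrative, and only the threshold ζ = 0.6 and the ratio r = N_o / N_m come from the patent:

```python
ZETA = 0.6  # recognition threshold chosen in the embodiment

def use_semantic_method(n_o, n_m, zeta=ZETA):
    """Sketch of the S3/S4 gating: the outlier ratio r = N_o / N_m of the
    previous frame serves as the semantic segmentation prior value; the
    semantic method runs on the current frame only when r >= zeta,
    otherwise tracking relies on Scope-Affine alone."""
    r = n_o / n_m
    return r >= zeta
```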
In S4, the real-time semantic segmentation model is set as the YOLOv5s-seg model, and the affine-information-based semantic segmentation method of the YOLOv5s-seg model is:
ini_mask = YOLOv5s-seg(frame); fin_mask = set{ini_mask_i | g(ini_mask_i) = 1, i = 1, ..., k}, wherein set() represents a set; ini_mask is the initial mask matrix segmented by the YOLOv5s-seg model and includes all object masks; YOLOv5s-seg() is the real-time inference function of the YOLOv5s-seg model, used to segment the input image; frame is the input current frame; fin_mask is the remaining mask matrix jointly determined with the affine information, i.e., the mask matrix belonging to dynamic objects; k is the number of initially segmented regions, i.e., the number of initially recognized objects; and g() is a function judging whether the current object is dynamic by combining the affine information, with g(ini_mask_i) = 1 if N_l_o / N_l_m is greater than a preset threshold and 0 otherwise,
wherein N_l_o and N_l_m are the numbers of outliers and matching points falling in the object to be judged, i.e., in the mask to be judged, respectively.
For a segmented object, if the ratio of outliers in the segmented region to all matching points in the region is greater than another threshold, the object is considered dynamic; based on the dynamic feature detection result, a final feature matching point set is obtained.
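The decision rule g() can be sketched as follows. Approximating each mask as a bounding box and using τ = 0.5 are assumptions made for illustration; the patent only speaks of "another threshold" and does not fix its value:

```python
def dynamic_masks(ini_masks, outliers, matches, tau=0.5):
    """Sketch of g(): keep an initial mask as dynamic when the ratio of
    outliers (N_l_o) to matching points (N_l_m) falling inside it exceeds
    a threshold tau. Masks are approximated as boxes (x0, y0, x1, y1)."""
    def inside(p, box):
        x0, y0, x1, y1 = box
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    fin_masks = []
    for box in ini_masks:
        n_l_m = sum(inside(p, box) for p in matches)   # matching points in mask
        n_l_o = sum(inside(p, box) for p in outliers)  # outliers in mask
        if n_l_m > 0 and n_l_o / n_l_m > tau:
            fin_masks.append(box)
    return fin_masks
```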
Further, after the dynamic objects are determined, the dynamic feature detection method is further optimized: a point p is dropped if mask(p.x, p.y) belongs to fin_mask, and retained otherwise, wherein mask() is the indexing method of the object mask matrix, which is composed of all fin_masks; p.x and p.y are the abscissa and ordinate, in the image matrix, of the later (in time series) point of a pair of matched feature points; drop indicates that the point is recognized as belonging to a dynamic object, and otherwise the point is retained and used for subsequent tracking.
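This final screen can be sketched as follows; representing a matched pair as (earlier_point, later_point) and masks as bounding boxes are assumptions for illustration:

```python
def screen_tracked_points(matched_pairs, fin_masks):
    """Sketch of the optimized dynamic feature detection: for each pair of
    matched feature points, test the later point p (in time series) against
    the dynamic-object masks; drop the pair when (p.x, p.y) falls inside
    one, otherwise retain it for subsequent tracking."""
    def in_dynamic(p):
        return any(x0 <= p[0] <= x1 and y0 <= p[1] <= y1
                   for (x0, y0, x1, y1) in fin_masks)

    return [pair for pair in matched_pairs if not in_dynamic(pair[1])]
```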
S5, updating the map based on the information of the outliers and dynamic objects. In S5, the map is updated by screening the actual matching point set and removing the outliers and dynamic objects from the map, specifically:
tracking_matching_points = h(matching_points, O_A ∪ D_S), wherein tracking_matching_points are the map points used to update the map, matching_points is the actual matching point set, O_A is the set of outliers in the actual matching point set, D_S is the set of identified dynamic objects, and h() is a screening function of map points indicating that the latter is deleted from the former and the remainder is retained.
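The screening function h() amounts to set subtraction; a minimal sketch (names are illustrative, points assumed hashable):

```python
def update_tracking_points(matching_points, o_a, d_s):
    """Sketch of h(): delete the outlier set O_A and the dynamic-object
    point set D_S from the actual matching point set, and retain the
    remainder as the map points used to update the map."""
    removed = set(o_a) | set(d_s)
    return [p for p in matching_points if p not in removed]
```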
S6, real-time mapping is performed based on the selection result of the key frames and the dynamic objects. In S6, when a new key frame is determined, map points are added, and the specific method is as follows:
wherein create indicates that a map point is newly added, and drop indicates that it is not added.
In S6, the method for real-time mapping is as follows:
mapping_thread(key_frame, depth_frame);
wherein mapping_thread is the mapping function, key_frame is the RGB image of the key frame, and depth_frame is the current depth frame.
The invention takes a visual sensor as the input device; the sensor may be a monocular camera, a binocular camera, an RGB-D camera, or other common equipment. An affine consistency constraint based on the affine principle is designed to detect outliers in the matching point set; by detecting outliers efficiently, the influence of dynamic features on system tracking is reduced, improving the positioning accuracy and robustness of the visual SLAM system in dynamic environments. On this basis, a semantic method based on an outlier prior improves the real-time performance and tracking accuracy of the system in two respects: first, a prior value for the semantic method is derived from the ratio of outliers in the matching set, so that the semantic method is invoked sparsely, preserving feature points beneficial to system tracking and reducing the time the semantic method consumes; second, the current frame is segmented with a real-time semantic segmentation model to obtain all object masks, which are combined with the outlier information obtained from the affine constraint to build an effective semantic method based on the ratio of outliers to all matching points within each mask region, thereby detecting dynamic objects. Based on the outlier and dynamic-object detection results, an improved map updating method ensures that no dynamic features remain in the map used for long-term tracking, further improving tracking accuracy. Based on the key frame selection result and the semantic segmentation result, a real-time mapping method is designed for the RGB-D sensor, exploiting the ease with which it acquires depth maps, to achieve effective mapping with an RGB-D sensor in a dynamic environment.
In summary, the invention breaks the limitation that the static-environment assumption of existing vSLAM systems places on long-term application in real scenes, and improves the tracking real-time performance, tracking accuracy, and mapping quality of the system as a whole.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. The real-time visual SLAM method based on affine information in a dynamic environment is characterized by comprising the following steps of:
S1, constructing an affine consistency constraint based on the affine relations among matching points, according to the output characteristics of an image acquisition sensor;
S2, analyzing the real-time images acquired by the image acquisition sensor frame by frame, and generating an actual matching point set based on the current frame;
S3, detecting outliers in the actual matching point set through the affine consistency constraint to obtain the outlier ratio, and setting a semantic segmentation prior value according to it;
S4, if the semantic segmentation prior value corresponding to the previous frame reaches a preset recognition threshold, taking the current frame as a key frame, and extracting dynamic objects from the key frame with a real-time semantic segmentation model;
S5, updating the map based on the information of the outliers and the dynamic objects;
S6, performing real-time mapping based on the key frame selection result and the dynamic objects.
2. The real-time visual SLAM method in a dynamic environment based on affine information of claim 1, wherein the specific method of S1 comprises:
s11, converting an output image of an image acquisition sensor into a gray level image, and extracting at least two feature point sets, wherein each feature point set comprises a plurality of feature points;
s12, performing feature matching on all feature points, and obtaining a plurality of basic matching point sets containing a plurality of pairs of matching points by using a rotation consistency constraint condition;
s13, constructing affine consistency constraint conditions according to constraint relations in the basic matching point sets.
3. The real-time visual SLAM method in dynamic environment based on affine information of claim 2, wherein in S12, the specific method for obtaining the basic matching point set comprises:
s121, taking the characteristic points as points to be judged, and calculating the pixel distances between the characteristic points and the points to be judged in other characteristic point sets, wherein the calculating method comprises the following steps:
dis = (p_x - q_x)^2 + (p_y - q_y)^2, wherein dis is the squared pixel distance between pixel points p and q, and the subscripts x and y denote the abscissa and ordinate of a pixel, respectively;
s122, screening other characteristic points according to the pixel distance, wherein the specific method comprises the following steps:
dis_points_ini = set{(i, dis_i) | f(dis_i) = 1, i = 1, ..., n}, wherein set() represents a set, n is the number of elements in the set, dis_i is the distance of the other feature point i from the point to be judged, and f() is a screening function with f(dis_i) = 1 if dis_i ≤ ε and f(dis_i) = 0 otherwise;
S123, combining the points to be judged and the screened first U points which are closer to the points to be judged into a basic matching point set;
s124, repeating S121 to S123 to obtain a plurality of basic matching point sets.
4. The real-time visual SLAM method in a dynamic environment based on affine information of claim 3, wherein in S123, U = k × N_m, wherein N_m is the number of points in the feature point set and k is a scale parameter.
5. The real-time visual SLAM method in a dynamic environment based on affine information of claim 4, wherein the affine consistency constraint is expressed as: when the number of one-to-one matches N_c among the U pairs of matching points in the basic matching point set satisfies N_c > (U - 2), the pair of matching points is judged to be a correct match.
6. The real-time visual SLAM method in a dynamic environment based on affine information of claim 4, wherein in S3, the outlier ratio is calculated as r = N_o / N_m, wherein N_o is the number of detected outliers, and the outlier ratio is taken as the semantic segmentation prior value.
7. The real-time visual SLAM method in a dynamic environment based on affine information of claim 1, wherein in S4, the real-time semantic segmentation model is set as the YOLOv5s-seg model, and the affine-information-based semantic segmentation method of the YOLOv5s-seg model is:
ini_mask = YOLOv5s-seg(frame); fin_mask = set{ini_mask_i | g(ini_mask_i) = 1, i = 1, ..., k}, wherein set() represents a set, ini_mask is the initial mask matrix segmented by the YOLOv5s-seg model, YOLOv5s-seg() is the real-time inference function of the YOLOv5s-seg model, frame is the input current frame, fin_mask is the remaining mask matrix jointly determined with the affine information, k is the number of initially segmented regions, i.e., the number of initially recognized objects, and g() is a function judging whether the current object is dynamic by combining the affine information, with g(ini_mask_i) = 1 if N_l_o / N_l_m is greater than a preset threshold and 0 otherwise,
wherein N_l_o and N_l_m are the numbers of outliers and matching points falling in the object to be judged, respectively.
8. The real-time visual SLAM method in a dynamic environment based on affine information of claim 7, wherein in S5, the map is updated based on the information of the outliers and dynamic objects by screening the actual matching point set and removing the outliers and dynamic objects from the map, specifically:
tracking_matching_points = h(matching_points, O_A ∪ D_S), wherein tracking_matching_points are the map points used to update the map, matching_points is the actual matching point set, O_A is the set of outliers in the actual matching point set, D_S is the set of identified dynamic objects, and h() is a screening function of map points indicating that the latter is deleted from the former and the remainder is retained.
9. The real-time visual SLAM method in a dynamic environment based on affine information of claim 8, wherein in S6, map points are added when a new key frame is determined, wherein creation denotes adding the point as a new map point and drop denotes not adding it.
10. The real-time visual SLAM method in a dynamic environment based on affine information of claim 9, wherein in S6, real-time mapping is performed as:

mapping_thread(key_frame, depth_frame);

where mapping_thread is the mapping function, key_frame is the RGB image of the key frame, and depth_frame is the current depth frame.
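Claim 10 only names the mapping call; a minimal sketch of what such a mapping_thread could do for an RGB-D key frame is to back-project valid depth pixels into 3D points. The pinhole intrinsics fx, fy, cx, cy are placeholder assumptions, not values from the patent:

```python
def mapping_thread(key_frame, depth_frame, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project every valid depth pixel of the key frame into a 3D point
    in the camera frame, pairing it with its RGB color (assumed intrinsics)."""
    points, colors = [], []
    for v, row in enumerate(depth_frame):
        for u, z in enumerate(row):
            if z <= 0:  # invalid depth: no candidate map point
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
            colors.append(key_frame[v][u])
    return points, colors

# One valid depth pixel yields one candidate map point
pts, cols = mapping_thread([[(255, 0, 0)]], [[2.0]])
print(len(pts))  # 1
```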
CN202311710990.3A 2023-12-13 2023-12-13 Real-time visual SLAM method based on affine information under dynamic environment Pending CN117671505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311710990.3A CN117671505A (en) 2023-12-13 2023-12-13 Real-time visual SLAM method based on affine information under dynamic environment

Publications (1)

Publication Number Publication Date
CN117671505A true CN117671505A (en) 2024-03-08

Family

ID=90076841



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination