CN111322993A - Visual positioning method and device - Google Patents

Visual positioning method and device

Info

Publication number
CN111322993A
Authority
CN
China
Prior art keywords
visual
repositioning
odometer
result
direct
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811521793.6A
Other languages
Chinese (zh)
Other versions
CN111322993B (en)
Inventor
龙学雄
易雨亭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikrobot Technology Co Ltd filed Critical Hangzhou Hikrobot Technology Co Ltd
Priority to CN201811521793.6A priority Critical patent/CN111322993B/en
Publication of CN111322993A publication Critical patent/CN111322993A/en
Application granted granted Critical
Publication of CN111322993B publication Critical patent/CN111322993B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a visual positioning method applied to a direct method odometer. The method comprises: inputting each frame of an image data stream into the direct method odometer for front-end processing and selecting key frames; triggering a keyframe-based visual repositioning according to the motion state of the odometer; fusing the visual repositioning result as a constraint into the objective function of the direct method odometer; and solving the camera pose by minimizing the objective function. The method combines the direct method odometer's advantages of high accuracy and suitability for low-texture areas with the feature point method's advantage of easy map construction for repositioning and elimination of accumulated errors, effectively reducing the accumulated error of the direct method odometer.

Description

Visual positioning method and device
Technical Field
The present invention relates to the field of computer vision, and in particular, to a visual positioning method and device in computer vision.
Background
Visual positioning is the process of acquiring a stable and accurate camera pose from visual information and a pre-built visual map; for example, a robot acquires its current camera pose by matching image information against feature map information. The camera pose is described in six dimensions: position (three-dimensional coordinates) and orientation (three-dimensional attitude).
Visual Odometry (VO) is one of the common methods for visual localization. It aims to estimate the motion of a camera from captured images; specifically, it incrementally acquires the camera pose from image information, determining the orientation and pose of a robot (camera) by analyzing a series of image sequences, i.e., estimating the change of the robot's position and attitude over time. The basic process of visual odometry is as follows: obtain a video stream (mainly grayscale images) and record the images obtained at times t and t+1 as I_t and I_{t+1}; obtain the internal parameters of the camera through camera calibration; take the acquired images and the camera intrinsics as the input of the visual odometer; and, after processing by the algorithm model, output the position and attitude of the camera corresponding to each frame.
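This input/output relationship can be illustrated with a minimal sketch (Python; the calibration values and the stand-in motion estimator below are hypothetical placeholders, not the patented algorithm):

```python
import numpy as np

# Illustrative VO loop: grayscale frames I_t plus calibrated intrinsics K go
# in, and a camera pose is produced for every frame by chaining relative
# motions. The intrinsics values are example numbers only.
K = np.array([[525.0,   0.0, 319.5],   # fx,  0, cx
              [  0.0, 525.0, 239.5],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def estimate_relative_motion(prev_img, cur_img, K):
    # Stand-in for the algorithm model (direct or feature-point method);
    # a real implementation would minimize photometric or reprojection error.
    return np.eye(4)

image_stream = [np.random.rand(480, 640) for _ in range(5)]  # fake gray frames

pose = np.eye(4)                        # pose of the first camera in the world
trajectory = [pose]
for prev_img, cur_img in zip(image_stream, image_stream[1:]):
    T_rel = estimate_relative_motion(prev_img, cur_img, K)
    pose = pose @ T_rel                 # incrementally accumulate the motion
    trajectory.append(pose)
```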
According to the processing manner of the algorithm model, visual odometry divides into the direct method and the feature point method. The direct method odometer uses pixel points directly, calculating the camera pose and the positions of image points by minimizing the photometric error of selected pixels, where the photometric error is the minimized objective function and is usually determined by the error between images.
For convenience of explaining the principle of the direct method, the DSO (Direct Sparse Odometry) algorithm is taken as an example below. The direct method puts data association and pose estimation into one unified nonlinear optimization problem, which makes it a relatively complex optimization problem to solve. Each three-dimensional point, taken from some host frame, is projected into another target frame after being scaled by a depth value, thereby establishing a projection residual. As long as the residual is within reasonable bounds, the two pixels can be considered projections of the same point. From the data-association point of view, there are no fixed one-to-one pairings such as a1-b1, a2-b2 in this process; cases such as a1-b1, a2-b1, a3-b1 may occur. The goal is to try to project each point into all frames and compute its residual in each frame, without caring about one-to-one correspondence between points.
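The host-to-target projection described above can be sketched as follows (a pinhole-model illustration under the usual DSO conventions; the function and its arguments are assumptions of this sketch, not the patent's code):

```python
import numpy as np

def photometric_residual(host_img, target_img, K, T_target_host, u_host, rho):
    """Project pixel u_host = (x, y) of the host frame, with inverse depth
    rho, into the target frame and return the brightness difference there.
    T_target_host is the 4x4 pose of the host frame expressed in the target
    frame. A residual within reasonable bounds suggests both pixels observe
    the same 3-D point."""
    K_inv = np.linalg.inv(K)
    p_host = (1.0 / rho) * (K_inv @ np.array([u_host[0], u_host[1], 1.0]))
    p_target = T_target_host[:3, :3] @ p_host + T_target_host[:3, 3]
    u = K @ (p_target / p_target[2])          # reproject into target pixels
    x, y = int(round(u[0])), int(round(u[1]))
    if not (0 <= y < target_img.shape[0] and 0 <= x < target_img.shape[1]):
        return None                           # projection leaves the image
    return float(target_img[y, x]) - float(host_img[u_host[1], u_host[0]])
```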
From the back-end perspective, the direct method visual odometer uses a sliding window consisting of several key frames as its back end. This window exists throughout the VO process and comes with a set of methods to manage the addition of new data and the removal of old data. Specifically, the window typically holds 5 to 7 key frames. The front-end tracking part judges, according to certain conditions, whether a new frame can be inserted into the back end as a new key frame. Meanwhile, if the back end finds that the number of key frames exceeds the window size, it selects one frame to remove by a specific method. The removed frame is not necessarily the oldest frame on the timeline; the choice involves some subtleties.
In addition to the key frames and image points in this window, the back end maintains optimization-related structures. For example, it attempts to project the image points of each previous key frame into the new key frame, forming residual terms. At the same time, immature points are extracted in the new key frame in the expectation that they will evolve into normal image points. In practice, due to motion and occlusion, some residual terms are regarded as outliers and eventually removed, and some immature points never evolve into normal image points and are eventually removed as well. All the residuals added together form the optimization problem to be solved, whose objective is to minimize the photometric error.
Thus, the interior of the sliding window constitutes a nonlinear least-squares problem, represented in the form of a factor graph (or graph optimization) as shown in FIG. 1. The state of each key frame is eight-dimensional: the six-degree-of-freedom motion pose plus two parameters describing photometry; the state variable of each image point is one-dimensional, namely the inverse depth of the point in its host frame. Thus each residual term (or energy term E) associates two key frames and one inverse depth. In fact, a global set of camera intrinsic parameters also takes part in the optimization, but it is not shown in the figure.
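The state layout described here can be summarized schematically (a sketch of the parameterization only, not the patent's implementation):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KeyframeState:
    # Eight dimensions per key frame: the 6-DoF motion pose plus two
    # photometric (affine brightness) parameters.
    T_world_cam: np.ndarray = field(default_factory=lambda: np.eye(4))
    a: float = 0.0            # photometric gain parameter
    b: float = 0.0            # photometric bias parameter

@dataclass
class PointState:
    host_frame: int           # index of the host (dominant) key frame
    u: int                    # pixel coordinates in the host frame
    v: int
    inv_depth: float          # the single optimized variable per image point
```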
Referring to fig. 2, fig. 2 shows the VO flow of the direct method visual odometer. Each time the image data stream delivers a frame, the image information is processed according to the illustrated flow. The VO process can be briefly summarized from fig. 2 as follows:
for a non-key frame, only its pose is calculated, and the image of this frame is used to update the depth estimate of each immature point;
the back end handles optimization of the key-frame part only. Setting aside some memory-maintenance operations, the main processing for each key frame is: adding new residual terms, removing erroneous residual terms, and extracting new immature points.
It can be seen from the processing and principle of the direct method visual odometry that the method works well in low-texture scenes; because features do not need to be extracted and matched, a very high frame rate can be achieved; and because all image information is utilized, the resulting camera pose estimate is more accurate than with the feature point method. However, when tracking with a conventional direct method odometer, the accumulated error grows larger and larger because no closed-loop detection is performed.
Disclosure of Invention
The invention provides a visual positioning method for reducing the accumulated error of a direct method odometer.
The visual positioning method provided by the invention comprises the following steps:
a visual positioning method is applied to a direct method odometer and comprises the following steps,
inputting each frame of the image data stream from a camera into a direct method odometer for front-end processing, and selecting key frames;
triggering a keyframe-based visual repositioning according to the motion state of the direct method odometer;
fusing the visual repositioning result as a constraint into the objective function of the direct method odometer to obtain a fused objective function;
and solving the camera pose by minimizing the fused objective function.
The triggering of the keyframe-based visual repositioning according to the motion state of the direct method odometer includes: judging whether the accumulated number of current key frames reaches a first threshold; if so, triggering the visual repositioning and clearing the accumulated key-frame count; otherwise, inhibiting the triggering.
The fusing of the visual repositioning result as a constraint into the objective function of the direct method odometer includes adding the visual repositioning result as a constraint to the constraints of the direct method odometer's back-end optimization.
The fusing of the visual repositioning result as a constraint into the objective function of the direct method odometer includes weighting the repositioning result constraint in the constructed objective function, the weight being dynamically adjusted according to the confidence of the visual repositioning result.
The weight is the result of a confidence estimation of the visual repositioning result, the confidence estimate being determined by the angular velocity, the linear velocity, the number of feature points at the time of visual repositioning, and the difference between the visual repositioning result and the positioning result of the direct method odometer.
The fusing of the visual repositioning result as a constraint into the objective function of the direct method odometer includes:
performing local graph optimization on all key frames between adjacent visual repositionings to obtain inter-key-frame camera poses as constraints, and adding them into the back-end optimization window of the direct method odometer; wherein the constraint is: in the error-free case, the inter-key-frame camera pose obtained after local graph optimization of the visual repositioning result equals the inter-key-frame camera pose obtained by direct method odometry;
for any image point, constructing a fused error function E_new according to the following formula:
E_new = E_photo + w·E_relocalization
In another aspect, the present application provides a visual positioning device comprising a processor having direct method odometry functionality, wherein the processor:
inputting each frame of the image data stream from a camera for direct method odometer front-end processing, and selecting key frames;
triggering a keyframe-based visual repositioning according to the motion state of the direct method odometer;
fusing the visual repositioning result as a constraint into the objective function of the direct method odometer to obtain a fused objective function;
and solving the camera pose by minimizing the fused objective function.
In the embodiment of the invention, during operation of the direct method odometer, visual relocation is triggered when needed, and the constraint of the visual relocation result is fused with the photometric error of the direct method odometer to construct the objective function, thereby eliminating the accumulated error of the direct method odometer. The visual relocation uses feature-point-method repositioning, combining the direct method odometer's advantages of high accuracy and suitability for low-texture areas with the feature point method's advantage of easy map construction for repositioning and elimination of accumulated errors.
Drawings
FIG. 1 is a graph-optimization schematic of the nonlinear least-squares problem formed inside the sliding window at the back end of a conventional direct method odometer.
FIG. 2 is the VO flow of a conventional direct method visual odometer.
Fig. 3 is a general schematic diagram of the visual positioning method according to the embodiment.
FIG. 4 is a flow chart of the fusion of the direct method odometer and relocation according to an embodiment of the present invention.
FIG. 5 is an optimization factor graph of the fusion relocation result according to the embodiment of the present invention.
FIG. 6 shows the form of the factor graph (or graph optimization) of the nonlinear least-squares problem formed inside the sliding window when the relocation result is fused with the direct method odometer, according to an embodiment of the present invention.
Fig. 7 is a schematic flowchart of an offline image creating process and a process for implementing visual repositioning based on a feature point method according to an embodiment of the present invention.
Fig. 8 is a schematic view of a visual positioning apparatus according to an embodiment of the invention.
Detailed Description
First, some terms in the present application will be explained.
Key frame: an image frame at which the camera meets certain motion conditions or at which the scene changes. Key frames are selected for the related calculations in order to reduce the overall computation, so that not every frame needs to undergo complex calculation.
Mature point: a point whose depth has high confidence, i.e. small depth variance.
Immature point: a point whose depth has low confidence, i.e. large depth variance.
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The embodiment of the invention fuses the direct method odometer with visual relocation: in the process of visual positioning by the direct method odometer, visual relocation is triggered according to the motion state of the odometer, for example when the accumulated number of key frames reaches a first threshold, and the accumulated error of the direct method odometer's visual positioning is eliminated by fusing the visual relocation results.
Referring to fig. 3, fig. 3 is a general schematic diagram of the visual positioning method according to the embodiment. The whole is divided into two parts. The first part is the offline construction of a feature map: the environment is traversed and the feature map is built with the feature point method, to be loaded during relocation. The second part is the visual odometer part, whose main body is a direct method odometer comprising a direct-method front end and a direct-method back end.
During operation of the direct method odometer, the accumulated error grows gradually; it is eliminated by periodically relocating and fusing the relocation results. Each frame of the image data stream from the camera is input to the direct method odometer, and the front end of the direct method odometer processes the input frame and performs front-end tracking. Visual relocation is triggered according to the motion state of the odometer: for example, the number of key frames selected during front-end tracking is accumulated, and when the accumulated number reaches a first threshold, visual relocation is triggered. The visual relocation matches the image feature points of the current frame against the offline-built feature map and solves the matched feature points to obtain the relocation result for the current frame. A confidence calculation is performed on the relocation result, and the calculated confidence serves as the weight for fusing the relocation result into the direct method odometer. Local graph optimization is performed over all key frames between adjacent relocations; the local graph optimization result, weighted by that confidence, is added as a constraint to the back-end optimization window of the direct method odometer, and the back end of the direct method odometer performs back-end optimization based on the fused result of the current relocation. Referring to fig. 4, fig. 4 illustrates the flow of fusing the direct method odometer and relocation, taking the DSO algorithm as an example. In the direct method, the process of obtaining an initial camera pose is generally called the direct-method front end, and the process of iterative optimization based on that initial pose is called the direct-method back end. In fig. 4, the front-end tracking and key-frame acquisition steps belong to the direct-method front end, while point/frame management, back-end optimization, triggering of visual relocation and the like belong to the direct-method back end.
Step 401: preprocess the acquired image, including image undistortion, photometric correction, and the like. If a binocular vision system is used, binocular rectification is also included.
Step 402: judge whether initialization has succeeded; if not, initialize, obtaining the initial camera pose. Specifically, the motion between the first and second image frames is estimated, and the initial camera pose is determined by monocular or binocular initialization. Once initialization succeeds, step 403 is performed.
Step 403: after initialization, perform inter-frame tracking with the direct method. During actual positioning, inter-frame camera motion is estimated by extracting gradient points of pixel intensity and minimizing the photometric error between the reference frame and the current frame, solving the camera pose to obtain its initial value.
Step 404: select key frames based on the image frames from the inter-frame tracking of step 403. If a frame cannot be selected as a key frame, i.e., it is treated as a non-key frame because some condition is not met, the non-key frame does not enter the back end for processing; in that case step 405 is executed, updating the depth and/or variance of the immature points so that their depths become more accurate and their variances smaller, and the flow returns to step 401. Otherwise step 406 is executed. Step 406: because the accumulated error keeps growing while the odometer runs, it is cleared by performing visual relocation periodically. Considering the real-time requirements of the odometer, visual relocation is not performed for every frame but periodically, e.g., a relocation is triggered once every 5 or every 10 key frames. Because, while the odometer runs, certain conditions decide whether a new frame can be inserted into the back end as a new key frame, the key frames track the motion state of the odometer; triggering visual relocation by key-frame count therefore achieves, in one respect, the effect of triggering visual relocation according to the motion state of the odometer, without limiting the relocation frequency.
In a specific implementation, for example, it is judged whether the accumulated number of current key frames reaches a first threshold. When it does, visual repositioning is required: the visual repositioning of step 407 is triggered and executed, and the accumulated key-frame count is cleared at the same time so as to count key frames for triggering the next visual repositioning. When the visual repositioning finishes, step 409 is executed.
When the accumulated number of key frames has not reached the first threshold, triggering of visual relocation is inhibited and step 408 is executed: update the immature points and mark the frames to be deleted; add the new key frame into the back-end optimization; project old image points into the new key frame to generate residual terms; and activate immature points, converting them into image points. This processing can be regarded as the point/frame management process. After point/frame management finishes, step 410 is executed.
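The triggering rule of steps 406 to 408 amounts to a simple counter; a minimal sketch follows (the threshold value is an arbitrary example):

```python
FIRST_THRESHOLD = 5           # e.g. relocalize once every 5 key frames

keyframe_count = 0

def on_new_keyframe():
    """Called whenever the front end accepts a new key frame; returns True
    when visual repositioning should be triggered (step 407), False when
    triggering is inhibited and point/frame management runs (step 408)."""
    global keyframe_count
    keyframe_count += 1
    if keyframe_count >= FIRST_THRESHOLD:
        keyframe_count = 0    # clear the accumulated count for the next cycle
        return True
    return False
```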
Step 409: compared with the back-end graph optimization performed without relocation, the key frames between adjacent relocations are only part of the frames involved, so the graph optimization performed here is local graph optimization. All key frames between adjacent relocations include those selected by the direct-method front end; some of them may still be in the optimization window while others have already been marginalized.
Local graph optimization is performed based on all key frames in the back-end optimization window between adjacent relocations (for example, two adjacent relocations); the camera poses corresponding to all key frames between the adjacent relocations, i.e. the camera poses obtained after local graph optimization, are determined and added as constraints into the back-end optimization window. Specifically:
E_relocalization = Σ_{i=1..n} Σ_{j=1..n} || log(T_d,ij⁻¹ · T_r,ij) ||²
where the visual repositioning result E_relocalization is the sum of the residuals formed, during visual repositioning, between the local graph optimization results of all key frames in the back-end optimization window and the positioning results of the direct method odometer; i and j are key frame indices, n is the total number of key frames between adjacent repositionings, T_r,ij is the inter-key-frame camera pose obtained after local graph optimization of the repositioning result, and T_d,ij is the inter-key-frame camera pose obtained by direct method odometry. In the error-free case the two are equal; this equality is the constraint added by the repositioning result.
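Since the patent specifies the residual only through the constraint that T_r,ij equals T_d,ij in the error-free case, the sketch below uses a Frobenius-norm pose difference as one illustrative choice of residual:

```python
import numpy as np

def relocalization_energy(T_r, T_d):
    """Sum the residuals between relocation-optimized inter-keyframe poses
    T_r[i][j] and direct-odometry inter-keyframe poses T_d[i][j] (n x n
    nested lists of 4x4 matrices). In the error-free case each delta is
    the identity and the energy is zero."""
    n = len(T_r)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            delta = np.linalg.inv(T_d[i][j]) @ T_r[i][j]   # ideally identity
            energy += np.linalg.norm(delta - np.eye(4), ord="fro") ** 2
    return energy
```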
The local graph optimization process can be embedded before the back-end optimization of the direct method odometer, i.e. executed just before back-end optimization; it can also run independently of back-end optimization, outside the direct odometer flow. In either embodiment, once the relocation result is obtained it is added as a constraint to the constraints of the direct method odometer.
Step 410: perform back-end optimization. When there is no repositioning result, the objective function is the photometric error, which is minimized; when a repositioning result exists, the objective function is the fused error of the photometric error and the visual repositioning result, and the fused error is minimized.
The constraints of the visual repositioning result are fused into the direct method odometer to construct the optimization objective function. In the embodiment of the invention, the fusion is not a simple weighted average but a tight-coupling process: the relocation result is added as a constraint to the constraints of the direct method odometer. The optimization factor graph is shown in fig. 5, which depicts the factor graph after fusing the relocation result.
For an arbitrary image point, the photometric error E_photo is the sum of the residuals of that image point over all key frames in the back-end optimization window of the direct method odometer, and the visual repositioning result
E_relocalization = Σ_{i=1..n} Σ_{j=1..n} || log(T_d,ij⁻¹ · T_r,ij) ||²
is the sum of the residuals formed, during visual repositioning, between the local graph optimization results of all key frames in the back-end optimization window and the positioning result of the direct method odometer.
Thus, before the repositioning results are fused, the original optimization problem of the direct method odometer is to minimize the photometric error E_photo; the fusion process adds the repositioning result constraint E_relocalization, and the final optimization problem after fusion is to minimize the fused error E_new. That is, the objective function E_new is constructed as follows, and the camera pose is solved by minimizing this objective function, thereby optimizing the camera poses of all key frames between two adjacent relocations:
E_new = E_photo + w·E_relocalization
where w is the confidence of the repositioning result and serves as the weight in the back-end window optimization; when no repositioning is triggered, the constructed objective function contains no repositioning result constraint.
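The case split described above can be written compactly; in the sketch below, passing w as None stands in for "no repositioning was triggered" (an assumption of this illustration):

```python
def fused_energy(E_photo, E_relocalization, w=None):
    """E_new = E_photo + w * E_relocalization; without a repositioning
    result the objective reduces to the photometric error alone."""
    if w is None:
        return E_photo
    return E_photo + w * E_relocalization
```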
Referring to fig. 6, fig. 6 shows the form of the factor graph (or graph optimization) of the nonlinear least-squares problem constructed inside the sliding window when the relocation result is fused with the direct method odometer. In the graph, the photometric residual E_p (E_photo) is the original optimization target of the direct method odometer, while the relocation residual E_r (E_relocalization) is the residual between the relocated key frame and the feature-map relocation result, which can eliminate the accumulated error of the system. The global state is still the eight-dimensional state of each key frame, i.e. the six-degree-of-freedom camera pose plus the two photometric parameters.
Step 411: after back-end optimization, the back end performs processing such as removing outliers and extracting new points in the key frames as mature points.
Through the processing of the above steps, the repositioning result is added directly as a constraint to the original optimization problem of the direct method odometer, and the error term of the repositioning result is added to the optimization framework of the direct method odometer in a tightly coupled way, making the algorithm's result better and more robust and improving both overall robustness and global consistency.
Referring to fig. 7, fig. 7 is a schematic flowchart of an offline image creating process and a process for implementing visual repositioning based on a feature point method according to an embodiment of the present invention.
Step 701: when visual repositioning is triggered, perform image preprocessing on the current frame of the image data stream and extract feature points; the extracted features serve as first feature points. The image preprocessing includes image undistortion, Gaussian blurring, construction of a Gaussian pyramid, feature point extraction, descriptor extraction, and the like; for a binocular vision system, binocular rectification is also performed.
Step 702: match the first feature points against the loaded feature map, constructing the data association between the current frame image and the feature map, i.e., matching feature points in the feature map to the feature points of the current frame image.
Step 703: solve the repositioning result from the matched feature points in the feature map. That is, a PnP solution is computed from the matched feature points, yielding the repositioning result of the current frame, namely the camera pose.
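As one example of the PnP step, OpenCV's RANSAC-based solver could be used; the patent does not prescribe a particular solver, and the helper below is a hypothetical sketch:

```python
import cv2
import numpy as np

def solve_relocalization(map_points_3d, image_points_2d, K):
    """PnP on the matched feature points: 3-D feature-map points against
    their 2-D detections in the current frame (at least 4 matches needed).
    Returns the 4x4 camera pose of the current frame, or None on failure."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(map_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        K, distCoeffs=None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)            # rotation vector -> rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T
```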
Step 704: perform a confidence calculation on the repositioning result. Owing to the very many interfering factors in an environment, the current scene is not necessarily suitable for relocation (e.g., white walls, identical corridors), which can make the relocation result unreliable; fusing a wrong relocation result into the odometer would bias it. Therefore a confidence calculation is performed on the repositioning result to ensure the reliability of the fused result. The reliability of the current frame's repositioning result is estimated from factors such as the difference between the relocated inter-frame motion estimate and the direct method odometer's inter-frame motion estimate, the number of feature points, the angular velocity, and the linear velocity, and that reliability is used as the weight in the back-end window optimization. The confidence of the repositioning result is represented by the weight w of the repositioning constraint term, which is related to the repositioning state and the motion state of the odometer: when the angular velocity and linear velocity are large, w is smaller; the differences ΔT and Δθ between the repositioning result and the odometer result also enter the weight, with w larger the closer the two results are. In addition, the overall weight w is related to the number of feature points at the time of relocation: the larger the number n of feature points, the more reliable the positioning result is considered. Based on this, the relocation constraint term weight w may be:
w=f(ω,ν,ΔT,Δθ,n)
one of the specific embodiments may be:
w = α·exp(β5·n - (β1·ω + β2·ν + β3·||ΔT|| + β4·||Δθ||))
where ω is the angular velocity, ν the linear velocity, n the number of feature points at the time of visual repositioning, Δθ the angular difference between the repositioning result and the odometer front-end result, ΔT the displacement difference between the repositioning result and the odometer front-end result, and β1, β2, β3, β4, β5 and α the coefficients of the corresponding terms.
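A direct transcription of this formula (the coefficient values below are placeholders; the patent leaves α and β1 to β5 as tunable coefficients of the corresponding terms):

```python
import math

def relocalization_weight(omega, v, dT_norm, dtheta_norm, n,
                          alpha=1.0, betas=(1.0, 1.0, 1.0, 1.0, 0.1)):
    """w = alpha * exp(beta5*n - (beta1*omega + beta2*v
                                  + beta3*||dT|| + beta4*||dtheta||)).
    Larger velocities or larger disagreement with the odometer shrink w;
    more matched feature points grow it."""
    b1, b2, b3, b4, b5 = betas
    return alpha * math.exp(b5 * n
                            - (b1 * omega + b2 * v
                               + b3 * dT_norm + b4 * dtheta_norm))
```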
In the above visual repositioning process, this embodiment adopts feature-point-method repositioning, which differs from visual odometer positioning: in visual odometer positioning, the positioning error grows gradually through accumulation, whereas visual repositioning has no accumulated error, because the repositioning process is unrelated to the camera poses of historical frames and the camera pose is obtained solely by matching the current image information with the feature map. Although various interfering factors cause the relocation result to have large errors in some scenes, this embodiment is based on a temporally globally consistent map, estimates the confidence of the relocation result from the motion state of the odometer and the relocation parameters, and fuses the relocation result with the direct method odometer's positioning result based on that confidence, reducing the influence of large-error relocation results on the odometer. Therefore, after tightly coupling the feature-point-method relocation result with the direct method odometer result, the positioning algorithm combines the advantage of feature-point-method relocation, which eliminates accumulated error, with that of the direct method odometer, which works in areas lacking texture and features, achieving a globally consistent positioning result in a large environment.
The feature map loaded in step 702 is constructed offline in advance: an environment map is built by traversing the environment with a simultaneous localization and mapping (SLAM) method such as ORB-SLAM, accumulated errors are eliminated by closed-loop detection, and a globally consistent feature map is constructed offline. The point of offline mapping is that the algorithm parameter settings need not consider real-time performance, only map accuracy.
Referring again to fig. 7, the part above the dotted line in fig. 7 is a schematic flow chart of generating the feature map in this embodiment.
Step 801: collect environmental data for offline construction of the environment map. The environment reachable by the robot in its working state is traversed according to the robot's working posture, and the visual features in the scene are built into a feature map. The visual features used are not limited to ORB (Oriented FAST and Rotated BRIEF), SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), BRIEF (Binary Robust Independent Elementary Features), and the like.
Step 802: image preprocessing. Necessary preprocessing is applied to the images, including image undistortion, Gaussian blurring, construction of a Gaussian pyramid, feature point extraction, descriptor extraction, and the like. If a binocular vision system is used, binocular rectification is also included.
Step 803: inter-frame tracking, i.e., estimating the relative motion between frames to obtain the camera pose. Feature points are matched via the feature points and descriptors, the data association between frames is constructed, and key frames are selected according to the estimated inter-frame motion and image state so that subsequent camera pose optimization can be performed on the key frames. This process is a typical feature-point-method odometer.
Step 804: optimize the camera poses corresponding to the key frames within a certain range before the current frame. Specifically, through the common feature points observed among the key frames, reprojection error constraints are formed between all common feature points and every frame in the range; the reprojection error of the feature points is minimized, and the camera pose optimization is realized through this minimization.
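The reprojection-error term minimized in this step can be sketched as a generic pinhole-model residual (not the patent's exact formulation):

```python
import numpy as np

def reprojection_residual(K, T_world_cam, point_world, observed_uv):
    """Residual between an observed pixel and the projection of a common
    feature point; step 804 minimizes the sum of squares of such residuals
    over all common points and all frames in the range."""
    T_cam_world = np.linalg.inv(T_world_cam)
    p_cam = T_cam_world[:3, :3] @ np.asarray(point_world, float) + T_cam_world[:3, 3]
    u = K @ (p_cam / p_cam[2])
    return u[:2] - np.asarray(observed_uv, dtype=float)
```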
Step 805: closed-loop detection and optimization. Because the accumulated error of the camera pose can become very large, the global consistency of the map must be improved through closed-loop detection, thereby improving the accuracy of the whole feature map and eliminating the accumulated error. In closed-loop detection, similarity evaluation is carried out with a bag-of-words model of image feature points and descriptors to detect the similarity of image scenes; data association between closed-loop frames is constructed from frames with similar scenes (called closed-loop frames for convenience of description), and the camera pose relationship between the current frame and a closed-loop frame forms a constraint that is independent of historical frame information, i.e., unaffected by accumulated error.
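As one illustration of the similarity evaluation (real systems typically use a vocabulary tree such as DBoW2 with TF-IDF weighting; the plain cosine similarity of word histograms below is an assumption of this sketch):

```python
import numpy as np

def bow_similarity(hist_a, hist_b):
    """Cosine similarity between the bag-of-words histograms of two frames;
    frames whose score exceeds a threshold become closed-loop candidates."""
    a, b = np.asarray(hist_a, float), np.asarray(hist_b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```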
Through the process, the feature map is obtained.
Offline mapping aims to provide reference data for visual repositioning; its advantage over online mapping is that the parameter settings can prioritize map accuracy as far as possible without concern for real-time performance. A feature map is built because a map usable for relocation cannot be constructed by the direct method odometer: the relocation process needs robust data association, which can only be achieved through robust features, and the features used are not limited to ORB, SIFT, SURF, BRIEF, and the like.
Referring to fig. 8, fig. 8 is a schematic view of a visual positioning apparatus according to an embodiment of the present invention. The apparatus includes a processor having direct method odometry functionality, wherein the processor:
inputting each frame of the image data stream into a direct method odometer for front-end processing, and selecting key frames; triggering a keyframe-based visual repositioning when repositioning is required;
fusing the visual repositioning result with the direct method odometer to construct an objective function; and solving the camera pose by minimizing the objective function.
The device also comprises a memory for storing the offline-constructed feature map, so that map matching can be performed during visual relocation.
The device can be applied to equipment carrying a camera, such as mobile robots, unmanned aerial vehicles, and intelligent terminals.
The direct method odometer in the embodiment of the invention is not limited to LSD-SLAM (Large-Scale Direct Monocular SLAM) or DSO (Direct Sparse Odometry), and is not limited to monocular, binocular, or multi-camera vision systems.
It should be appreciated that the examples described herein may include various components and features. Some of these components and features may be removed and/or modified without departing from the scope of the apparatus, methods, and non-transitory computer-readable media. It should also be appreciated that specific details are given in the description to provide a thorough understanding of the examples; however, the examples may be practiced without limitation to these specific details. In other instances, well-known methods and structures may not be described in detail so as not to obscure the description of the examples unnecessarily. Additionally, examples may be used in conjunction with each other.
Reference in the specification to "an embodiment" or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least one embodiment, but not necessarily in other embodiments. The various instances of the phrase "in one embodiment" or similar phrases in various places in the specification are not necessarily all referring to the same example. As used herein, a component is a combination of hardware and software that executes on hardware to provide a given functionality.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (23)

1. A visual positioning method, comprising,
inputting each frame of the image data stream from a camera into a direct method odometer for front-end processing, and selecting key frames;
triggering a keyframe-based visual repositioning according to the motion state of the direct method odometer;
fusing the visual repositioning result as a constraint into the objective function of the direct method odometer to obtain a fused objective function;
and solving the camera pose by minimizing the fused objective function.
2. The visual positioning method of claim 1, wherein triggering a keyframe-based visual repositioning according to the motion state of the direct method odometer comprises: judging whether the accumulated number of current key frames reaches a first threshold; if so, triggering the visual repositioning and clearing the accumulated key-frame count; otherwise, inhibiting the triggering.
3. The visual positioning method of claim 1, wherein fusing the visual repositioning result as a constraint into the objective function of the direct method odometer comprises adding the visual repositioning result as a constraint to the constraints of the direct method odometer's back-end optimization.
4. The visual positioning method of claim 1, wherein fusing the visual repositioning result as a constraint into the objective function of the direct method odometer comprises, in constructing the objective function fused with the repositioning result constraint, weighting the repositioning result constraint, the weight being dynamically adjusted according to the confidence of the visual repositioning result.
5. The visual positioning method of claim 4, wherein the weight is a result of a confidence estimation of a visual repositioning result,
the result of the confidence estimation being determined by the angular velocity, the linear velocity, the number of feature points at the time of visual repositioning, and the difference between the visual repositioning result and the positioning result of the direct method odometer.
6. The visual positioning method of claim 5, wherein the confidence estimate is calculated according to the following equation:
w = α·exp(β5·n - (β1·ω + β2·ν + β3·||ΔT|| + β4·||Δθ||))
where ω is the angular velocity, ν the linear velocity, n the number of feature points at the time of visual repositioning, Δθ the angular difference between the repositioning result and the odometer front-end result, ΔT the displacement difference between the repositioning result and the odometer front-end result, and β1, β2, β3, β4, β5 and α the coefficients of the corresponding terms.
7. The visual positioning method of any one of claims 4 to 6, wherein fusing the visual repositioning result as a constraint into the objective function of the direct method odometer includes,
performing local graph optimization on all key frames between adjacent visual repositionings to obtain the inter-key-frame camera poses between adjacent visual repositionings;
adding the obtained inter-key-frame camera poses between adjacent visual repositionings as constraints into the back-end optimization window of the direct method odometer; wherein the constraint is: in the error-free case, the inter-key-frame camera pose obtained after local graph optimization of the visual repositioning result equals the inter-key-frame camera pose obtained by direct method odometry;
for any image point, constructing a fused error function E_new as the objective function according to the following formula:
E_new = E_photo + w·E_relocalization
where E_photo is the sum of the residuals of the image point over all key frames in the back-end optimization window of the direct method odometer, E_relocalization is the sum of the residuals formed, during visual repositioning, between the local graph optimization results of all key frames in the back-end optimization window and the positioning result of the direct method odometer, and w is the confidence estimation result of the visual repositioning result.
8. The visual positioning method of claim 7, wherein all keyframes between adjacent visual relocations are all keyframes between two adjacent visual relocations.
9. The visual positioning method of claim 1, wherein the visual repositioning is a feature point method repositioning;
the feature point method relocation includes,
when visual repositioning is triggered, performing image preprocessing on the current frame of the image data stream and extracting feature points of the current frame to obtain first feature points;
matching the first feature points with an offline-constructed feature map to obtain second feature points in the feature map that match the first feature points,
and solving the visual repositioning result according to the second feature points.
10. The visual positioning method of claim 9, wherein the construction of the offline-constructed feature map comprises,
acquiring environmental image data, and preprocessing the acquired image data;
performing inter-frame tracking to obtain a camera pose;
for key frames within a certain range before the current frame, forming reprojection error constraints between all common feature points and each frame in the range, and minimizing the reprojection errors of the feature points to obtain optimized camera poses;
detecting the similarity of image scenes, constructing data association between closed-loop frames according to closed-loop frames with similar image scenes, and forming constraints from the camera pose relationship between the current frame and the closed-loop frames, to obtain the feature map.
11. The visual positioning method of claim 10, wherein the preprocessing comprises image undistortion, binocular rectification for a binocular vision system, Gaussian blurring, constructing a Gaussian pyramid, and extracting feature points and descriptors.
12. A visual positioning device comprising a processor having direct method odometry functionality, wherein the processor is configured to:
perform direct method odometer front-end processing on each frame of the image data stream from a camera, and select key frames;
trigger, according to the motion state of the direct method odometer, a keyframe-based visual repositioning;
fuse the visual repositioning result as a constraint into the objective function of the direct method odometer to obtain a fused objective function;
and solve the camera pose by minimizing the fused objective function.
13. The apparatus of claim 12, wherein said triggering a keyframe-based visual repositioning according to the motion state of the direct method odometer comprises: judging whether the accumulated number of current key frames reaches a first threshold; if so, triggering the visual repositioning and clearing the accumulated key-frame count; otherwise, inhibiting the triggering.
14. The apparatus of claim 12, wherein fusing the visual repositioning result as a constraint into the objective function of the direct method odometer comprises adding the visual repositioning result as a constraint to the constraints of the direct method odometer's back-end optimization.
15. The apparatus of claim 12, wherein fusing the visual repositioning result as a constraint into the objective function of the direct method odometer comprises, in constructing the objective function fused with the repositioning result constraint, weighting the repositioning result constraint, the weight being dynamically adjusted according to the confidence of the visual repositioning result.
16. The apparatus of claim 15, wherein the weight is the result of a confidence estimation of the visual repositioning result, the confidence estimate being determined by the angular velocity, the linear velocity, the number of feature points at the time of visual repositioning, and the difference between the visual repositioning result and the direct method odometry positioning result.
17. The apparatus of claim 16, wherein the confidence estimate is calculated according to the following equation:
w = α·exp(β5·n - (β1·ω + β2·ν + β3·||ΔT|| + β4·||Δθ||))
where ω is the angular velocity, ν the linear velocity, n the number of feature points at the time of visual repositioning, Δθ the angular difference between the repositioning result and the odometer front-end result, ΔT the displacement difference between the repositioning result and the odometer front-end result, and β1, β2, β3, β4, β5 and α the coefficients of the corresponding terms.
18. The apparatus of any one of claims 15 to 17, wherein said fusing the visual repositioning result as a constraint into the objective function of the direct method odometer comprises,
performing local graph optimization on all key frames between adjacent visual repositionings to obtain the inter-key-frame camera poses between adjacent visual repositionings;
adding the inter-key-frame camera poses between adjacent visual repositionings as constraints into the back-end optimization window of the direct method odometer;
wherein the constraint is: in the error-free case, the inter-key-frame camera pose obtained after local graph optimization of the visual repositioning result equals the inter-key-frame camera pose obtained by direct method odometry;
for any image point, constructing a fused error function E_new according to the following formula:
E_new = E_photo + w·E_relocalization
where E_photo is the sum of the residuals of the image point over all key frames in the back-end optimization window of the direct method odometer, E_relocalization is the sum of the residuals formed, during visual repositioning, between the local graph optimization results of all key frames in the back-end optimization window and the positioning result of the direct method odometer, and w is the confidence estimation result of the visual repositioning result.
19. The apparatus of claim 18, in which all keyframes between adjacent visual relocations are all keyframes between two adjacent visual relocations.
20. The apparatus of claim 12, in which the visual repositioning is a feature point method repositioning,
the feature point method relocation includes,
when visual repositioning is triggered, performing image preprocessing on the current frame of the image data stream and extracting feature points of the current frame to obtain first feature points;
matching the first feature points with an offline-constructed feature map to obtain second feature points in the feature map that match the first feature points,
and solving the visual repositioning result according to the second feature points.
21. The apparatus of claim 20, further comprising a memory storing a feature map constructed offline,
the construction of the feature map includes,
acquiring environmental image data, and preprocessing the acquired image data;
performing inter-frame tracking to obtain a camera pose;
for key frames within a certain range before the current frame, forming reprojection error constraints between all common feature points and each frame in the range, and minimizing the reprojection errors of the feature points to obtain optimized camera poses;
detecting the similarity of image scenes, constructing data association between closed-loop frames according to closed-loop frames with similar image scenes, and forming constraints from the camera pose relationship between the current frame and the closed-loop frames, to obtain the feature map.
22. The apparatus of claim 20 or 21, wherein the preprocessing comprises image undistortion, binocular rectification for a binocular vision system, Gaussian blurring, constructing Gaussian pyramids, and extracting feature points and descriptors.
23. A non-transitory computer readable storage medium comprising direct method odometry instructions, further comprising instructions which, when executed by a processor of a device, cause the processor to implement the visual positioning method of any one of claims 1 to 11.
CN201811521793.6A 2018-12-13 2018-12-13 Visual positioning method and device Active CN111322993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811521793.6A CN111322993B (en) 2018-12-13 2018-12-13 Visual positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811521793.6A CN111322993B (en) 2018-12-13 2018-12-13 Visual positioning method and device

Publications (2)

Publication Number Publication Date
CN111322993A true CN111322993A (en) 2020-06-23
CN111322993B CN111322993B (en) 2022-03-04

Family

ID=71170481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811521793.6A Active CN111322993B (en) 2018-12-13 2018-12-13 Visual positioning method and device

Country Status (1)

Country Link
CN (1) CN111322993B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111623783A (en) * 2020-06-30 2020-09-04 杭州海康机器人技术有限公司 Initial positioning method, visual navigation equipment and warehousing system
CN111780763A (en) * 2020-06-30 2020-10-16 杭州海康机器人技术有限公司 Visual positioning method and device based on visual map
CN112099509A (en) * 2020-09-24 2020-12-18 杭州海康机器人技术有限公司 Map optimization method and device and robot
CN112734850A (en) * 2021-01-22 2021-04-30 北京华捷艾米科技有限公司 Cooperative SLAM method and device, computer equipment and storage medium
CN113674351A (en) * 2021-07-27 2021-11-19 追觅创新科技(苏州)有限公司 Robot and drawing establishing method thereof
WO2022002150A1 (en) * 2020-06-30 2022-01-06 杭州海康机器人技术有限公司 Method and device for constructing visual point cloud map


Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0213938A2 * 1985-08-30 1987-03-11 Texas Instruments Incorporated Failsafe brake for a multi-wheel vehicle with motor controlled steering
KR20120046974A * 2010-11-03 2012-05-11 Samsung Electronics Co., Ltd. Moving robot and simultaneous localization and map-building method thereof
EP3159125A1 * 2014-06-17 2017-04-26 Yujin Robot Co., Ltd. Device for recognizing position of mobile robot by using direct tracking, and method therefor
CN204123858U * 2014-08-06 2015-01-28 Wen Binrong Teaching aid for demonstrating the cam profile layout principle and its process
EP3252714A1 * 2016-06-03 2017-12-06 Univrses AB Camera selection in positional tracking
CN107966136A * 2016-10-19 2018-04-27 Hangzhou Hikrobot Technology Co., Ltd. Slave UAV position display method, apparatus and system based on master UAV vision
CN106885574A * 2017-02-15 2017-06-23 Peking University Shenzhen Graduate School Monocular vision robot simultaneous localization and mapping method based on a re-tracking strategy
WO2018156991A1 * 2017-02-24 2018-08-30 CyPhy Works, Inc. Control systems for unmanned aerial vehicles
CN107025668A * 2017-03-30 2017-08-08 South China University of Technology Design method of a visual odometer based on a depth camera
CN107356252A * 2017-06-02 2017-11-17 Qingdao Krund Robot Co., Ltd. Indoor robot positioning method fusing a visual odometer and a physical odometer
CN107796397A * 2017-09-14 2018-03-13 Hangzhou Jiazhi Technology Co., Ltd. Robot binocular vision localization method, device and storage medium
CN107680133A * 2017-09-15 2018-02-09 Chongqing University of Posts and Telecommunications Mobile robot visual SLAM method based on an improved loop closure detection algorithm
CN108038139A * 2017-11-10 2018-05-15 VisionNav Robotics (Shenzhen) Co., Ltd. Map construction method and device, robot localization method and device, computer equipment and storage medium
CN108010081A * 2017-12-01 2018-05-08 Sun Yat-sen University RGB-D visual odometry method based on Census transform and local map optimization
CN108253963A * 2017-12-20 2018-07-06 Guangxi Normal University Robot active disturbance rejection localization method and positioning system based on multi-sensor fusion
CN108416808A * 2018-02-24 2018-08-17 Banma Network Technology Co., Ltd. Vehicle relocalization method and device
CN108615247A * 2018-04-27 2018-10-02 Shenzhen Tencent Computer Systems Co., Ltd. Relocalization method, device, equipment and storage medium for camera pose tracking
CN108986037A * 2018-05-25 2018-12-11 Chongqing University Monocular visual odometer localization method and positioning system based on semi-direct method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XING ZHENG ET AL.: "Photometric Patch-based Visual-Inertial Odometry", 2017 IEEE International Conference on Robotics and Automation *
YUAN MENG ET AL.: "Monocular Visual Odometry with Point-Line Feature Fusion", Laser & Optoelectronics Progress *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111623783A * 2020-06-30 2020-09-04 Hangzhou Hikrobot Technology Co., Ltd. Initial positioning method, visual navigation equipment and warehousing system
CN111780763A * 2020-06-30 2020-10-16 Hangzhou Hikrobot Technology Co., Ltd. Visual positioning method and device based on visual map
WO2022002150A1 * 2020-06-30 2022-01-06 Hangzhou Hikrobot Technology Co., Ltd. Method and device for constructing visual point cloud map
CN112099509A * 2020-09-24 2020-12-18 Hangzhou Hikrobot Technology Co., Ltd. Map optimization method and device, and robot
CN112099509B * 2020-09-24 2024-05-28 Hangzhou Hikrobot Co., Ltd. Map optimization method and device, and robot
CN112734850A * 2021-01-22 2021-04-30 Beijing HJIMI Technology Co., Ltd. Cooperative SLAM method and device, computer equipment and storage medium
CN113674351A * 2021-07-27 2021-11-19 Dreame Innovation Technology (Suzhou) Co., Ltd. Robot and mapping method thereof
WO2023005377A1 * 2021-07-27 2023-02-02 Dreame Innovation Technology (Suzhou) Co., Ltd. Map building method for robot, and robot
CN113674351B * 2021-07-27 2023-08-08 Dreame Innovation Technology (Suzhou) Co., Ltd. Robot and mapping method thereof

Also Published As

Publication number Publication date
CN111322993B (en) 2022-03-04

Similar Documents

Publication Title
CN111322993B (en) Visual positioning method and device
CN108986037B (en) Monocular vision odometer positioning method and positioning system based on semi-direct method
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN107563313B (en) Multi-target pedestrian detection and tracking method based on deep learning
JP6760114B2 (en) Information processing equipment, data management equipment, data management systems, methods, and programs
CN108955718B (en) Visual odometer and positioning method thereof, robot and storage medium
CN111210477B (en) Method and system for positioning moving object
CN110796010B (en) Video image stabilizing method combining optical flow method and Kalman filtering
CN108960211B (en) Multi-target human body posture detection method and system
US9071829B2 (en) Method and system for fusing data arising from image sensors and from motion or position sensors
JP4849464B2 (en) Computerized method of tracking objects in a frame sequence
CN108846854B (en) Vehicle tracking method based on motion prediction and multi-feature fusion
US7239718B2 (en) Apparatus and method for high-speed marker-free motion capture
EP2179398A1 (en) Estimating objects proper motion using optical flow, kinematics and depth information
KR101885839B1 (en) System and Method for Keypoint Selection for Object Tracking
US20170262992A1 (en) Image analysis system and method
KR20210141668A (en) Detection, 3D reconstruction and tracking of multiple orthopedic objects moving relative to each other
KR101901487B1 (en) Real-Time Object Tracking System and Method for Lower Performance Video Devices
CN110570474B (en) Pose estimation method and system of depth camera
CN112950696A (en) Navigation map generation method and generation device and electronic equipment
Xiao et al. An enhanced adaptive coupled-layer LGTracker++
JP6922348B2 (en) Information processing equipment, methods, and programs
KR101756698B1 (en) Apparatus for object detection on the road and method thereof
CN111829522B (en) Simultaneous localization and mapping method, computer equipment and device
CN115511970B (en) Visual positioning method for autonomous parking

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 310051 Room 304, B/F, Building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou Hikvision Robot Co., Ltd.

Address before: 310053 Floor 5, Building 1, Building 2, No. 700 Dongliu Road, Binjiang District, Hangzhou, Zhejiang Province

Patentee before: HANGZHOU HIKROBOT TECHNOLOGY Co., Ltd.