CN208289901U - Vision-enhanced positioning device and robot - Google Patents

Vision-enhanced positioning device and robot

Info

Publication number
CN208289901U
CN208289901U (application CN201820827363.6U)
Authority
CN
China
Prior art keywords
image
positioning device
camera
data
module
Prior art date
Legal status
Withdrawn - After Issue
Application number
CN201820827363.6U
Other languages
Chinese (zh)
Inventor
赖钦伟
Current Assignee
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN201820827363.6U
Application granted
Publication of CN208289901U
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The utility model discloses a vision-enhanced positioning device and a robot. The positioning device is a movable visual positioning device comprising: an image capture module, including a forward-tilted camera and a backward-tilted binocular camera mounted at different positions on the device, to enhance its visual sensing; an image processing module, including an image preprocessing submodule and a feature matching submodule, for processing the captured image data; an inertial data acquisition and processing module, for sensing in real time the rotation angle, acceleration and translational velocity of the inertial sensors; and a fusion positioning module, which fuses the environmental information gathered by the sensor modules to achieve reliable and robust autonomous positioning. Compared with the prior art, image data is fused with inertial data through feature matching, and landmarks are updated using relative position relationships, making feature identification and matching more accurate.

Description

Vision-enhanced positioning device and robot
Technical field
The utility model relates to positioning devices, and in particular to a vision-enhanced positioning device and a robot.
Background technique
A basic technology for robot intelligence is the ability to localize and navigate autonomously, and indoor navigation is a key component of it. Current indoor navigation technologies include inertial-sensor navigation, laser navigation, visual navigation and radio navigation, each with its own advantages and drawbacks. Inertial-sensor navigation uses gyroscopes, odometers and the like to localize; it is cheap but suffers from long-term drift. Laser navigation is accurate, but it is relatively expensive and sensor lifetime is also a concern. Traditional visual navigation is computationally complex and demands high processor performance, so its power consumption and price are comparatively high. Radio navigation requires multiple fixed radio transmitters, which is inconvenient, and its price is likewise relatively high. Fusing several technologies to achieve low cost and high precision is therefore a development direction for robot navigation.
In existing vision-based sweeping robots, the camera is placed at the front of the machine and usually protrudes slightly to obtain a good viewing angle. However, this makes it easy for the camera lens to be struck by hard-to-detect objects and scratched. Moreover, many sensors are typically mounted at the front of the machine; for example, many machines carry a bumper bar and a cylindrical 360-degree infrared receiver, which easily occlude the camera and force its mounting angle to be increased.
Utility model content
A vision-enhanced positioning device, which is a movable visual positioning device, comprising an image capture module, an image processing module, an inertial data acquisition module and a fusion positioning module;
The image capture module includes a forward-tilted camera, for detecting and identifying objects in the device's direction of forward travel, and a backward-tilted camera, for capturing images of the surroundings to achieve positioning;
The image processing module includes an image preprocessing submodule and a feature matching submodule for processing the image data captured by the image capture module. The image preprocessing submodule converts the data captured by the backward-tilted camera into grayscale images; the feature matching submodule extracts feature data from the images preprocessed by the image preprocessing submodule and matches it against the landmark-image-associated features in a landmark database, the database being one built into the image processing module that contains the image feature points of given landmark-associated regions;
The inertial data acquisition and processing module consists of a set of inertial measurement units and senses in real time the rotation angle, acceleration and translational velocity of the inertial sensors;
The fusion positioning module fuses, according to the feature matching results in the image processing module, the image data captured by the forward-tilted camera with the inertial data gathered by the inertial data acquisition and processing module, and then corrects the current position information with the fusion result.
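The division of labor among the four modules above can be sketched in outline as follows. This is an illustrative Python sketch, not part of the utility model; all class and method names are assumptions:

```python
class PositioningDevice:
    """Outline of the four-module pipeline: preprocess, match, fuse."""

    def __init__(self, landmark_db):
        self.landmark_db = landmark_db      # built-in landmark database
        self.pose = (0.0, 0.0, 0.0)         # x, y, heading estimate

    def preprocess(self, rgb_pixels):
        """Image preprocessing submodule: convert an RGB frame to grayscale."""
        return [round(0.299 * r + 0.587 * g + 0.114 * b)
                for (r, g, b) in rgb_pixels]

    def match_features(self, features):
        """Feature matching submodule: keep features found in the database."""
        return [f for f in features if f in self.landmark_db]

    def fuse(self, matches, imu_delta):
        """Fusion positioning module: always integrate the inertial increment;
        report whether a landmark match was available to correct the pose."""
        dx, dy, dtheta = imu_delta
        x, y, th = self.pose
        self.pose = (x + dx, y + dy, th + dtheta)
        return bool(matches)
```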
Further, the forward-tilted camera is mounted in a forward-opening recessed and/or protruding structure in the front half of the device's top surface.
Further, the backward-tilted camera is a binocular camera whose lenses have identical imaging parameters; its two cameras are mounted side by side in a backward-opening recessed and/or protruding structure at the rear of the device's top surface.
Further, the optical axes of the forward-tilted camera and the backward-tilted camera each form an angle with the device's top surface in the range of 0 to 80 degrees, and the two angles are kept equal.
Further, in the fusion positioning module, when feature matching in the image processing module succeeds, the coordinates of the landmark in the map are obtained and combined with the device's coordinates relative to the landmark to compute the device's coordinates in the map, which are then corrected and updated with the inertial data;
When feature matching in the image processing module fails, the rigid connection relationship between the inertial sensors and the forward-tilted camera is derived from the accumulated inertial data; then, combining the relative pose between the target image obtained by the backward-tilted camera from the left-right binocular disparity and the landmark-image-associated features in the landmark database, a new landmark is computed and stored in the landmark database, completing the creation of a new landmark;
Here there is a mapping association from the inertial sensors to the forward-tilted camera, and from the forward-tilted camera to the grayscale image features or landmark-image-associated features, and the features can be obtained by extraction from the grayscale images; the rigid connection relationship is the positional relationship established by the pose changes of the inertial data corresponding to two adjacent image frames captured by the forward-tilted camera.
A robot, the robot being a mobile robot fitted with the positioning device described above.
The utility model provides a forward-tilted camera for detecting and identifying objects in the device's direction of forward travel and for establishing the rigid connection relationship between the inertial sensors and the forward-tilted camera so that the inertial data can be fused; it also provides a backward-tilted binocular camera for capturing images of the surroundings to achieve binocular positioning from their geometric relationship. Compared with the prior art, the cameras mounted in different orientations capture image data for obstacle detection and for the localization process respectively, improving the precision of landmark detection; at the same time the cameras divide the work and cooperate to achieve positioning, lightening the memory and computation load, shortening the feature search time and improving navigation efficiency.
Description of the drawings
Fig. 1 is a block diagram of the vision-enhanced positioning device provided by an embodiment of the utility model;
Fig. 2 is a flowchart of the vision-enhanced localization method provided by an embodiment of the utility model;
Fig. 3 is a system architecture diagram of the vision-enhanced robot provided by an embodiment of the utility model (the binocular camera is mounted in the protruding structure on the device's top surface).
Specific embodiments
Specific embodiments of the utility model are further described below with reference to the accompanying drawings:
The vision-enhanced positioning device in the embodiments of the utility model is implemented in the form of a robot, including mobile robots such as sweeping robots and AGVs. In the following, the device is assumed to be mounted on a sweeping robot. Those skilled in the art will understand, however, that beyond its particular use in mobile robots, the construction according to the embodiments of the utility model can also be extended to mobile terminals.
The utility model provides a vision-enhanced positioning device, which is a movable visual positioning device. As shown in Fig. 1, it includes an image capture module, an image processing module, an inertial data acquisition module and a fusion positioning module. The image capture module includes a forward-tilted camera, for detecting and identifying objects in the device's direction of forward travel, and a backward-tilted camera, for capturing images of the surroundings to achieve positioning. As the device advances, the forward-tilted camera is placed at the front of the device, usually protruding slightly and held at a preset angle to obtain a good viewing angle. Because this embodiment is fitted with a bumper bar and a cylindrical 360-degree infrared receiver, which easily occlude the camera, the camera must be kept at a preset angle to retain a good viewing angle for object detection; the forward-tilted camera is therefore not used for navigation assistance, but instead performs target recognition and analysis of objects in the device's direction of forward travel. The backward-tilted camera is placed at the rear of the device and performs localization and navigation by capturing identifiable landmarks. When the device turns around, the image data captured by the backward-tilted camera allows the forward-tilted camera to reuse, on the return path, the features and/or landmarks employed for navigation.
Preferably, as shown in Fig. 3, the forward-tilted camera 108 is mounted in a forward-opening recessed structure in the front half of the device's top surface and detects and identifies objects in the device's direction of forward travel; specifically, this camera only performs object recognition and obstacle detection and is not used for localization or navigation. The backward-tilted camera is mounted in a backward-opening protruding structure at the rear of the device's top surface. Placing this camera at the rear keeps it from being struck or occluded, making it particularly suitable for capturing images of the surroundings for precise positioning. Specifically, the optical axes of the forward-tilted camera 108 and the backward-tilted camera each form an angle of roughly 45 degrees with the device's top surface, which enlarges the device's effective field of view and prevents undesirable imaging problems, such as reflection and/or refraction of the light carrying the camera's effective imaging features; this makes the arrangement particularly suitable for localization and mapping in indoor environments.
Further, the forward-tilted camera 108 is mounted in the forward-opening recessed and/or protruding structure in the front half of the device's top surface. In the prior art such a camera is normally used for navigation and localization, but in this embodiment it only performs object recognition and obstacle detection, because a front-mounted camera may have its lens occluded while the device drives forward, which hinders real-time navigation; after being occluded, it can still perform target recognition by identifying the features of the partially blocked image. Although the forward-opening recessed and/or protruding structure is easily occluded, it provides the camera with a specific viewing angle, improving the precision of the feature angles in the captured images.
Further, the backward-tilted camera is a binocular camera whose lenses have identical imaging parameters. As shown in Fig. 3, the binocular camera comprises a left camera 106L and a right camera 106R mounted side by side in the backward-opening protruding structure at the rear of the device's top surface. This reduces lens occlusion for the left camera 106L and the right camera 106R while the device drives forward, and prevents undesirable imaging problems, such as reflection and/or refraction of the light carrying the camera's effective imaging features, making the arrangement particularly suitable for real-time localization and navigation in indoor environments.
Specifically, as shown in Fig. 3, the optical axes of the forward-tilted camera 108 and of the corresponding left camera 106L and right camera 106R of the backward-tilted camera each form an angle with the device's top surface in the range of 0 to 80 degrees, and the tilt angles are all the same acute angle ɑ, typically set to 45 degrees, so as to guarantee a good approximation of the true imaging characteristics and improve the precision of landmark feature detection.
As shown in Fig. 1, the image processing module includes an image preprocessing submodule and a feature matching submodule for processing the image data captured by the image capture module. The image preprocessing submodule receives the image data captured by the image capture module in order to establish repeatedly identifiable, unique landmarks in the surroundings, and converts the color image data captured by the backward-tilted camera into grayscale images, completing the image preprocessing. The feature matching submodule then extracts feature data from the images preprocessed by the image preprocessing submodule and matches it against the landmark-image-associated features in the landmark database.
The landmark database is built into the image processing module and contains the image feature points of given landmark-associated regions. It holds information about many previously observed landmarks, which the device can use to perform navigation and localization. A landmark can be regarded as a set of features with a specific two-dimensional structure. Any of a variety of features can be used to identify a landmark; when the device is configured as a household cleaning robot, a landmark may be (but is not limited to) a set of features identified from the two-dimensional structure of the corner of a picture frame. Such features are based on the static geometry of the room and, although they exhibit some variation in illumination and scale, they are generally easier to distinguish and identify as landmarks than objects in the lower regions of the environment that are frequently displaced (such as chairs, waste bins and pets).
As shown in Fig. 1, the inertial data acquisition and processing module consists of a set of inertial measurement units and senses in real time the rotation angle, acceleration and translational velocity of the inertial sensors. The module collects inertial data through the inertial sensors, then performs calibration and filtering before passing the data to the fusion positioning module. The raw processing of the inertial data includes masking of maximum and minimum values, elimination of static drift, and Kalman filtering of the data. The inertial sensors include odometers, gyroscopes, accelerometers and other sensors used for inertial navigation. In subsequent processing, the data they gather is combined with the optical flow observed between consecutive images to acquire and track landmark images and to determine the distance travelled, yielding an optical-flow odometry system, a sensor combination suitable for the specific image matching required.
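The raw inertial data processing described above (masking of maximum/minimum values, static drift elimination, Kalman filtering) can be sketched as follows. This is an illustrative sketch; the function name and the noise parameters are assumed values, not taken from the patent:

```python
def preprocess_inertial(samples, lo, hi, drift):
    """Clean a stream of raw inertial readings in three stages."""
    # 1. Shield maximum/minimum values: clamp readings to the valid range.
    clamped = [min(max(s, lo), hi) for s in samples]
    # 2. Eliminate static drift measured while the device is at rest.
    debiased = [s - drift for s in clamped]
    # 3. Scalar Kalman filter with a constant-value model.
    x, p = debiased[0], 1.0          # state estimate and its variance
    q, r = 1e-3, 0.25                # process / measurement noise (assumed)
    out = []
    for z in debiased:
        p += q                        # predict: variance grows
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update with measurement z
        p *= (1.0 - k)
        out.append(x)
    return out
```

The filter damps the outlier spike that the clamping stage has already limited, so a single bad sample moves the estimate only slightly.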
As shown in Fig. 1, the fusion positioning module fuses, according to the feature matching results in the image processing module, the image data captured by the forward-tilted camera with the inertial data gathered by the inertial data acquisition and processing module, and then corrects the current position information with the fusion result. Based on the images captured by the forward-tilted camera and the backward-tilted camera, combined with the travel distance obtained from the inertial sensors, the module matches the newly acquired image information against the corresponding landmark images stored in the landmark database, and then performs data fusion to achieve positioning.
Specifically, in the fusion positioning module, when feature matching in the image processing module succeeds, the coordinates of the landmark in the map are obtained and combined with the device's coordinates relative to the landmark to compute the device's coordinates in the map, which are then corrected and updated with the inertial data. When feature matching in the image processing module fails, the rigid connection relationship between the inertial sensors and the forward-tilted camera is derived from the accumulated inertial data; then, combining the relative pose between the target image obtained by the backward-tilted camera from the left-right binocular disparity and the landmark-image-associated features in the landmark database, a new landmark is computed and stored in the landmark database, completing its creation. The rigid connection relationship is the positional relationship established by the pose changes of the inertial data corresponding to two adjacent image frames captured by the forward-tilted camera. There is a mapping association from the inertial sensors to the forward-tilted camera, and from the forward-tilted camera to the grayscale image features or landmark-image-associated features, and features can be obtained by extraction from the grayscale images. Through the rigid connection relationship, the inertial sensor data between two consecutive image frames is iterated to obtain a predicted value of the device's current position, so that the search region for feature matching is smaller and matching is faster.
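The prediction step just described, iterating the inertial data recorded between two consecutive frames to obtain a predicted pose, can be sketched as simple planar dead reckoning. This is illustrative only; the function name and the unicycle motion model are assumptions:

```python
import math

def predict_pose(pose, imu_increments):
    """Iterate inertial increments (speed, angular rate, interval)
    recorded between two frames to predict the current pose."""
    x, y, theta = pose
    for v, omega, dt in imu_increments:
        theta += omega * dt                 # accumulate rotation
        x += v * math.cos(theta) * dt       # accumulate translation
        y += v * math.sin(theta) * dt
    return x, y, theta
```

The predicted pose centres the search region for the next round of feature matching, which is what makes the matching faster.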
In this embodiment, the images of the surroundings captured by the forward-tilted camera 108 and by the left camera 106L and right camera 106R of the backward-tilted camera are all supplied to the image processing module. The forward-tilted camera 108 can be used to detect a set of features associated with a landmark, though not for the device's navigation; these features can in turn be matched against the landmark images in the landmark database while the device moves forward towards the landmark. When the device changes direction, the backward-tilted camera tracks the same set of features; at that point the backward-tilted camera can be used for navigation while the device is controlled to move away from the landmark. The device can use both the forward-tilted camera 108 and the backward-tilted camera to capture images of the surroundings simultaneously, so that a larger portion of the surroundings is captured in less time than with a single camera; and because the two kinds of cameras complement each other, they jointly assist feature match recognition and save memory and computation resources.
Based on the same inventive concept, the embodiments of the utility model also provide a vision-enhanced localization method. Since the hardware this method uses to solve the localization problem is the vision-enhanced positioning device described above, the embodiment of the method can refer to the application of that device. In a specific implementation, the vision-enhanced localization method provided by the embodiment of the utility model, as shown in Fig. 2, comprises the following steps:
Step 1: the two lenses of the backward-tilted camera each capture the same landmark in the actual scene, and the image pair acquired by the binocular camera is analyzed and identified. When the left image and the right image both contain the corresponding landmark, an image region centred on the feature point in the left image is extracted as the template, and the coordinates of its central pixel are determined. Another image region centred on the feature point in the right image is extracted as the search window; within this window, image regions of the same pixel size as the template are extracted and matched against the template under a disparity-gradient constraint, which serves as the purification algorithm for matched feature points and reduces mismatched corner features. Concretely, within the search window of the right image, the template is translated pixel by pixel from left to right and matched at each position; the smallest value among the obtained corner-feature matching values is chosen, and the image region corresponding to that smallest matching value is taken as the target region.
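The sliding template match in Step 1 can be sketched in one dimension using a sum-of-absolute-differences cost. This is illustrative only; the disparity-gradient purification step is omitted, and the names are assumptions:

```python
def match_template(left_row, right_row, center, half):
    """Extract a template of width 2*half+1 around `center` in the left
    row, slide it across the right row, and return the column with the
    smallest sum-of-absolute-differences cost (the target position)."""
    template = left_row[center - half: center + half + 1]
    best_col, best_cost = None, float("inf")
    for c in range(half, len(right_row) - half):
        window = right_row[c - half: c + half + 1]
        cost = sum(abs(a - b) for a, b in zip(template, window))
        if cost < best_cost:
            best_col, best_cost = c, cost
    return best_col, best_cost
```

A real implementation matches 2-D regions, but the left-to-right sweep and the minimum-cost selection are the same.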
Further, the device's coordinates relative to the landmark are computed from the geometric relationship between the imaging features of the target region and the landmark;
At the same time, the descriptors generated from the target region are feature-matched against the descriptors of the landmark-image-associated features stored in the landmark database. Specifically, the image of the target region identified from the binocular shots is Gaussian-filtered to remove noise, and then converted to grayscale. Feature points are extracted from the grayscale image to generate image features. The gray value of each extracted feature point is compared with the gray values at 256 positions in the neighborhood of that feature point, and the result is recorded in binary: 0 indicates that the feature point's gray value is less than the gray value at one of the 256 neighborhood positions, and 1 indicates that it is greater. The results are stored in a 256-dimensional vector that serves as the descriptor of the feature point. The neighborhood of a feature point is the disk centred on that point with radius r, where r is determined from the gray level of the target-region image.
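The 256-bit binary descriptor just described can be sketched as follows. This is illustrative; the sampling pattern inside the radius-r disk is abstracted into a prepared list of 256 neighborhood gray values, and the function name is an assumption:

```python
def binary_descriptor(center_gray, neighborhood_grays):
    """Build the 256-dimensional binary descriptor of a feature point:
    bit i is 1 when the feature point's gray value exceeds the gray at
    the i-th sampled neighborhood position, 0 otherwise."""
    assert len(neighborhood_grays) == 256, "256 disk samples expected"
    return [1 if center_gray > g else 0 for g in neighborhood_grays]
```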
Step 2: the descriptors of the grayscale image features of the target region are matched. The matching objects are the landmark-image-associated features stored in the landmark database; the landmark images are converted to grayscale in the same way to obtain the descriptors of the corresponding feature points, and the descriptors generated from the target region are feature-matched against the descriptors of the landmark-image-associated features stored in the landmark database.
Step 3: determine whether the grayscale image features of the target region match the landmark-image-associated features in the landmark database. If they do, the coordinates of the landmark in the map are obtained and combined with the device's coordinates relative to the landmark to compute the device's coordinates in the map; the predicted coordinates of the device's current position obtained through the inertial sensors are then used to correct and update the current position coordinates, yielding accurate current position coordinates and completing the real-time localization of the device.
Otherwise, based on the rigid connection relationship between the inertial sensors and the forward-tilted camera, the inertial data is fused to predict the feature-point coordinates in the landmark image of the current frame; then, combining the relative pose between the feature points of the target region and the landmark-image-associated feature points in the landmark database, a new landmark is computed and stored in the landmark database, completing its creation. Here the inertial data has already undergone calibration and filtering and includes angular velocity, acceleration and distance information; the rigid connection relationship is the positional relationship established by the pose changes of the inertial data corresponding to two adjacent image frames captured by the forward-tilted camera.
In one embodiment, the feature matching in Step 2 proceeds as follows: under the current frame image, the Hamming distance between the descriptors of the target-region features and the corresponding descriptors of the landmark-image-associated features in the landmark database is computed. If the Hamming distance is below a preset threshold, the image processed by the binocular camera and the corresponding landmark-image-associated features in the landmark database are highly similar, and the match is considered successful. Specifically, in this embodiment feature matching is performed by computing the Hamming distance between feature-point descriptors; extensive experiments show that the Hamming distance between descriptors of feature points that fail to match is around 128, while the distance between descriptors of successfully matched feature points is far smaller than 128. That is, a descriptor of an image feature of the target region and a descriptor in an image template of the database whose feature codings differ at no fewer than 128 bit positions are certainly not a pair; the feature point on one image whose feature coding shares the largest number of identical bit elements with a feature point on the other image can be paired with it. The preset threshold corresponds to the numerical relationship of the relative pose between the feature points of the target region and the landmark-image-associated feature points in the landmark database, and is set to 128 in this embodiment. The relative pose depends on one or more of the identifiable two-dimensional spatial features associated with the landmark in the images captured by the backward-tilted camera, and it also changes with the various sensor configurations of the device.
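The Hamming-distance test with the threshold of 128 can be sketched as follows (illustrative names; the descriptors are the 256-element binary vectors described in Step 1):

```python
def hamming(d1, d2):
    """Hamming distance: number of bit positions where two
    equal-length binary descriptors differ."""
    return sum(a != b for a, b in zip(d1, d2))

def is_match(d1, d2, threshold=128):
    """Declare a successful match when the distance falls below the
    preset threshold (128 in the embodiment described above)."""
    return hamming(d1, d2) < threshold
```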
In one embodiment, the fusion of the inertial data in Step 3 proceeds as follows: when feature matching fails, the rigid connection relationship between the inertial sensors and the forward-tilted camera is derived from the pose changes of the inertial data corresponding to two adjacent image frames captured by the forward-tilted camera. With the intrinsic parameters of the forward-tilted camera known, the feature-point coordinates of the current frame predicted by the inertial sensors are computed from this rigid connection relationship; the predicted feature-point coordinates are then compared with the feature-point coordinates of the current frame captured by the forward-tilted camera, and the feature-point coordinates of the current image captured by the forward-tilted camera are updated and corrected.
Specifically, the inertial sensors provide the prediction (motion) model, the forward-tilted camera provides the observation model, and the rigid-body connection between them is the parameter to be estimated. Between two consecutive frames, the accumulated value of the inertial data is computed, including the translation produced by velocity and acceleration and the rotation produced by angular velocity. Because there are mapping associations from the inertial sensors to the forward-tilted camera, from the camera to the image, and from the image to the features, and feature points can be obtained by extraction from the image, an optimization equation is constructed according to the principle that the same feature point images uniquely, and is solved iteratively with the pose provided by the inertial sensors as the initial value. The covariance information of the prediction and the observation is then used for information fusion to obtain the optimal estimate in the least-squares sense, which is updated and corrected to give the device's precise coordinates at the current position.
While the forward-tilted camera shoots two consecutive frames, the inertial data are recorded and accumulated to obtain the pose change recorded by the inertial sensor between the two frames; using the fixed rotation and translation relationship from the inertial sensor to the forward-tilted camera, this is converted into the pose change of the forward-tilted camera, and the coordinates in the current frame of the previous frame's features are then obtained from the intrinsic matrix of the forward-tilted camera. When the feature matching fails, the aforementioned conversion is applied: the current position coordinates obtained from the inertial sensor predict the coordinates of the current image, which are compared with the feature point coordinates in the current image features; the feature point coordinates in the current image features are updated and corrected, and stored back into the landmark database as the new landmark created at the current position. When the feature matching succeeds, the current position coordinates computed by the imaging-feature geometric relationship from the grayscale image feature coordinates preprocessed by the image processing module are compared with the current position coordinates obtained by the inertial sensor, i.e. the observation model corrects the prediction model, so as to realize the correction and update of the current position coordinates obtained from the feature points. Thus, when the positioning device finds no known landmark matched in the input image, it can optionally attempt to create a new landmark.
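The frame-to-frame prediction described above can be sketched as follows, under assumed coordinate conventions (stated in the docstring); all names, matrices and values are illustrative, not taken from the patent.

```python
import numpy as np

def predict_feature_pixel(K, R_ci, t_ci, dR_imu, dt_imu, p_prev):
    """Predict the pixel of a feature in the current frame from the
    accumulated inertial pose change between two consecutive frames.

    Assumed conventions: p_cam = R_ci @ p_imu + t_ci maps IMU-frame points
    into the camera frame (the fixed rigid mount), and (dR_imu, dt_imu)
    maps a point's previous-IMU-frame coordinates into the current IMU
    frame. p_prev is the feature's 3-D position in the previous camera
    frame; K is the camera intrinsic (internal reference) matrix.
    """
    # Conjugate the IMU motion by the rigid mount to get the camera motion.
    dR_cam = R_ci @ dR_imu @ R_ci.T
    dt_cam = R_ci @ dt_imu + t_ci - dR_cam @ t_ci
    # Feature position in the current camera frame, then pinhole projection.
    p_curr = dR_cam @ p_prev + dt_cam
    uvw = K @ p_curr
    return uvw[:2] / uvw[2]

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R_ci, t_ci = np.eye(3), np.zeros(3)                     # camera aligned with IMU
dR_imu, dt_imu = np.eye(3), np.array([0.1, 0.0, 0.0])   # pure 10 cm slide
uv = predict_feature_pixel(K, R_ci, t_ci, dR_imu, dt_imu,
                           np.array([0.0, 0.0, 2.0]))   # point 2 m ahead
```

The predicted pixel `uv` is what gets compared against the extracted feature point coordinates when matching fails.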
In one implementation of the utility model, in step 3, the imaging-feature geometric relationship is established based on the positional correspondence between the images acquired by the backward-tilted camera of the predetermined position at its lens tilt angle α (as shown in Fig. 3) and the inertial data collected by the inertial sensor sensing the landmark. The backward-tilted camera is modelled with the traditional pinhole model, and its intrinsic parameters are known; combined with triangulation of the distance and position of features on the landmarks photographed while the positioning device advances, a similar-triangles geometric relationship is constructed, from which the two-dimensional coordinates of the corresponding corner points on the landmark in the backward-tilted camera coordinate system can be calculated.
Specifically, the parallax is obtained by subtracting the coordinate of the central pixel point in the left image from the coordinate of the central pixel point in the right image; substituting the parallax into the binocular ranging formula gives the distance from the binocular camera to the point in the actual scene. The binocular ranging formula is estimated as follows:

Z = f × T / x

where T is the binocular camera spacing, f is the focal length of the binocular camera, x is the parallax, and Z is the distance from the landmark to the binocular camera.
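A minimal sketch of the binocular ranging formula; the function name and the numeric values are illustrative assumptions.

```python
def binocular_distance(f_px, spacing_m, u_left, u_right):
    """Distance by the binocular ranging formula Z = f * T / x.

    f_px: focal length f in pixels; spacing_m: camera spacing T (the
    baseline) in metres; u_left / u_right: horizontal pixel coordinate of
    the matched central pixel point in each image. The sign of the pixel
    difference depends on the rig's axis convention, so the magnitude is
    used here.
    """
    disparity = abs(u_right - u_left)  # x: parallax in pixels
    if disparity == 0:
        raise ValueError("zero parallax: point at infinity or bad match")
    return f_px * spacing_m / disparity

# Example: 700 px focal length, 6 cm camera spacing, 42 px parallax.
Z = binocular_distance(700.0, 0.06, 400.0, 358.0)  # 1.0 m
```

Note the inverse relationship: halving the parallax doubles the estimated distance, which is why distant landmarks are ranged less precisely.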
As a robot embodiment of the utility model, Fig. 3 provides a structural diagram of a sweeping robot, which can serve as a concrete product structure of the vision-enhanced positioning device provided in the utility model; for ease of explanation, only the parts relevant to the embodiment are shown. The image processing module and the fusion locating module of the positioning device are built into the signal processing board 102. The image capture module includes cameras 106; the backward-tilted camera is a binocular camera whose left and right cameras, camera 106L and camera 106R, are mounted side by side at a backward-protruding structure at the tail of the body 101, so that the backward-tilted camera stays away from the collision detection sensor 105 and avoids being struck by hard-to-detect objects. The optical axes of camera 106R and camera 106L form a certain tilt angle α with the top surface of the positioning device, giving the binocular camera a better observation orientation.
The image capture module further includes a forward-tilted camera 108, mounted at a backward-recessed structure in the front half of the body 101; the optical axis of the forward-tilted camera 108 forms a certain tilt angle α with the top surface of the positioning device, so that the forward-tilted camera 108 stays away from the collision detection sensor 105 and thus has a better field of view. The inertial data acquisition module includes the collision detection sensor 105 and senses the body 101 as it is driven by the moving wheels 104 and the universal wheel 107. The data acquired by the inertial data acquisition module, camera 106R and camera 106L are fused using the relative pose and rigid connection relationship to correct the position coordinates, and navigation movement is then executed; the landmark database can also be updated as the basis for building a navigation map. Finally, the man-machine interface 103 outputs the accurate coordinate values of the sweeping robot's current position calculated by the signal processing board.
The above embodiments are intended only to disclose the utility model fully, not to limit it; any replacement of equivalent technical features, based on the creative purport of the utility model and requiring no creative work, shall be regarded as falling within the scope disclosed by this application.

Claims (6)

1. A vision-enhanced positioning device, the positioning device being a movable vision positioning device, characterized by comprising an image capture module, an image processing module, an inertial data acquisition module and a fusion locating module;
the image capture module includes a forward-tilted camera, for detecting and identifying objects in the forward driving direction of the positioning device, and further includes a backward-tilted camera, for capturing images of the surroundings to achieve positioning;
the image processing module includes an image preprocessing submodule and a feature matching submodule, for processing the image data acquired by the image capture module; the image preprocessing submodule converts the data acquired by the backward-tilted camera into a grayscale image, and the feature matching submodule extracts feature data from the image preprocessed by the image preprocessing submodule and matches it against the landmark image associated features in a landmark database; the database is a landmark database built into the image processing module, containing the image feature points of the associated regions of given landmarks;
the inertial data acquisition processing module consists of a series of inertial data measuring units and senses in real time the rotation angle information, acceleration information and translational velocity information of the inertial sensor;
the fusion locating module fuses, according to the feature matching result in the image processing module, the image data acquired by the forward-tilted camera with the inertial data acquired by the inertial data acquisition processing module, and then corrects the current position information with the data fusion result.
2. The positioning device according to claim 1, characterized in that the forward-tilted camera is located at a forward-opening recessed and/or protruding structure in the front half of the top surface of the positioning device.
3. The positioning device according to claim 1, characterized in that the backward-tilted camera is a binocular camera with identical imaging parameters, the two cameras of which are mounted side by side at a backward-opening recessed and/or protruding structure at the tail of the top surface of the positioning device.
4. The positioning device according to any one of claims 1 to 3, characterized in that the tilt angles formed between the top surface of the positioning device and the optical axes of the forward-tilted camera and the backward-tilted camera both lie within 0-80 degrees, and the two angles are equal.
5. The positioning device according to claim 1, characterized in that, in the fusion locating module, when the feature matching in the image processing module succeeds, the coordinates of the landmark in the map are obtained, and combined with the coordinates of the positioning device relative to the landmark, the coordinates of the positioning device in the map can be calculated and then updated and corrected using the inertial data;
when the feature matching in the image processing module fails, the rigid connection relationship between the inertial sensor and the forward-tilted camera is found from the accumulated value of the inertial data, and combined with the relative pose, obtained by the backward-tilted camera from the left-right binocular parallax, between the target image and the landmark image associated features in the landmark database, a new landmark is calculated and stored in the landmark database, completing the creation of the new landmark;
wherein mapping associations exist from the inertial sensor to the forward-tilted camera and from the forward-tilted camera to the grayscale image features or landmark image associated features, while the features can be obtained by extraction from the grayscale image; the rigid connection relationship is the positional relationship established by the pose change corresponding to the inertial data between two adjacent frames acquired by the forward-tilted camera.
6. A robot, characterized in that the robot is a mobile robot equipped with the positioning device according to any one of claims 1 to 5.
CN201820827363.6U 2018-05-31 2018-05-31 A kind of positioning device and robot enhancing vision Withdrawn - After Issue CN208289901U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201820827363.6U CN208289901U (en) 2018-05-31 2018-05-31 A kind of positioning device and robot enhancing vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201820827363.6U CN208289901U (en) 2018-05-31 2018-05-31 A kind of positioning device and robot enhancing vision

Publications (1)

Publication Number Publication Date
CN208289901U true CN208289901U (en) 2018-12-28

Family

ID=64722339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201820827363.6U Withdrawn - After Issue CN208289901U (en) 2018-05-31 2018-05-31 A kind of positioning device and robot enhancing vision

Country Status (1)

Country Link
CN (1) CN208289901U (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108481327A (en) * 2018-05-31 2018-09-04 珠海市微半导体有限公司 A kind of positioning device, localization method and the robot of enhancing vision
CN108481327B (en) * 2018-05-31 2023-11-07 珠海一微半导体股份有限公司 Positioning device, positioning method and robot for enhancing vision
CN111904334A (en) * 2020-07-27 2020-11-10 轻客小觅机器人科技(成都)有限公司 Fisheye binocular stereoscopic vision navigation system and sweeping robot
CN113535877A (en) * 2021-07-16 2021-10-22 上海高仙自动化科技发展有限公司 Intelligent robot map updating method, device, equipment, medium and chip
CN113932808A (en) * 2021-11-02 2022-01-14 湖南格兰博智能科技有限责任公司 Algorithm suitable for fusion correction of vision and gyroscope of vision navigation floor sweeping robot
CN113932808B (en) * 2021-11-02 2024-04-02 湖南格兰博智能科技有限责任公司 Visual and gyroscope fusion correction algorithm applicable to visual navigation floor sweeping robot

Similar Documents

Publication Publication Date Title
CN108481327A (en) A kind of positioning device, localization method and the robot of enhancing vision
CN208289901U (en) A kind of positioning device and robot enhancing vision
Tardif et al. Monocular visual odometry in urban environments using an omnidirectional camera
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
Veľas et al. Calibration of rgb camera with velodyne lidar
CN110275540A (en) Semantic navigation method and its system for sweeping robot
CN112785702A (en) SLAM method based on tight coupling of 2D laser radar and binocular camera
KR101784183B1 (en) APPARATUS FOR RECOGNIZING LOCATION MOBILE ROBOT USING KEY POINT BASED ON ADoG AND METHOD THEREOF
GB2554481A (en) Autonomous route determination
JP6782903B2 (en) Self-motion estimation system, control method and program of self-motion estimation system
CN112734852A (en) Robot mapping method and device and computing equipment
CN108544494A (en) A kind of positioning device, method and robot based on inertia and visual signature
Sappa et al. An efficient approach to onboard stereo vision system pose estimation
CN208323361U (en) A kind of positioning device and robot based on deep vision
CN103680291A (en) Method for realizing simultaneous locating and mapping based on ceiling vision
KR101100827B1 (en) A method of recognizing self-localization for a road-driving robot
EP3710985A1 (en) Detecting static parts of a scene
Maier et al. Vision-based humanoid navigation using self-supervised obstacle detection
Wang et al. Monocular visual SLAM algorithm for autonomous vessel sailing in harbor area
Hakeem et al. Estimating geospatial trajectory of a moving camera
WO2019113859A1 (en) Machine vision-based virtual wall construction method and device, map construction method, and portable electronic device
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN212044739U (en) Positioning device and robot based on inertial data and visual characteristics
CN116804553A (en) Odometer system and method based on event camera/IMU/natural road sign
CN116380079A (en) Underwater SLAM method for fusing front-view sonar and ORB-SLAM3

Legal Events

Date Code Title Description
GR01 Patent grant
AV01 Patent right actively abandoned

Granted publication date: 20181228

Effective date of abandoning: 20231107
