CN202702247U - Rapid and accurate positioning system used for indoor mobile robot - Google Patents

Rapid and accurate positioning system used for indoor mobile robot

Info

Publication number
CN202702247U
CN202702247U (application CN201220375716U)
Authority
CN
China
Prior art keywords
point
label
image
infrared
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201220375716
Other languages
Chinese (zh)
Inventor
周风余
田国会
王然
闫云章
韩旭
台述鹏
王小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN 201220375716 priority Critical patent/CN202702247U/en
Application granted granted Critical
Publication of CN202702247U publication Critical patent/CN202702247U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The utility model relates to a rapid and accurate positioning system for an indoor mobile robot. The system comprises a positioning sensor installed on the robot and a plurality of passive, infrared-reflective tags attached to the ceiling of the working region. The positioning sensor comprises an image processing chip connected to a storage module, a complementary metal oxide semiconductor (CMOS) camera, a data interface, a power supply and an infrared emission module; the infrared emission module comprises a plurality of infrared tubes arranged around the CMOS camera and divided into several groups. Each passive tag carries a number of mark points of two types. The first type are direction points, which determine the directions of the coordinate axes; so that the direction is uniquely determined, direction points are placed at only three of the four corners of each tag. The second type are coding points, i.e. the remaining mark points, which determine the identification (ID) number of the tag. Infrared-reflective material is pasted on all the direction points, and on all or part of the remaining coding points according to the coding requirements.

Description

Rapid and accurate positioning system for an indoor mobile robot
Technical field
The utility model relates to a rapid and accurate positioning system for indoor mobile robots, and belongs to the fields of detection technology, image processing and robot navigation.
Background technology
Indoor positioning means estimating the pose of an object under an indoor environment: through analysis and calculation, a more accurate pose estimate is obtained from inputs such as a priori environment map information, the current pose estimate and the observation data of sensors. For intelligent service robots working in indoor environments such as homes, hospitals and offices, accurate localization is the prerequisite for robot navigation and an important guarantee for completing service tasks.
Indoor positioning sensors can be divided into absolute positioning sensors and relative positioning sensors according to the localization technology used. Absolute positioning mainly uses technologies such as navigation beacons and active or passive markers; relative positioning determines the current location of an object by measuring its distance and direction relative to an initial position, and is also called dead reckoning. The methods currently used for indoor robot localization mainly include methods based on RFID, methods based on wireless sensor networks, and methods based on odometry and inertial navigation modules.
RFID-based methods generally read the RFID data first for coarse localization, then use ultrasonic sensors for range measurement to obtain the positioning information. This method requires all kinds of possibilities to be considered carefully when placing the RFID tags; it is inconvenient to use, its precision is low, and it is only suitable for simple environments and occasions where the precision requirement is not high.
Methods based on wireless sensor networks, such as Wi-Fi and ZigBee technologies, use signal strength for positioning. They require a wireless sensor network to be built, so the cost is high; moreover, the wireless signal is easily disturbed and the precision is poor.
Methods based on inertial navigation modules use devices such as gyroscopes, accelerometers and magnetometers, combined with an odometer, to record the heading, speed and acceleration of the object in real time, and compute the coordinates relative to the initial position by accumulating distance. This method suffers from accumulated error and drift; when the running time is long or the road conditions are poor, the precision is hard to guarantee.
The invention patent No. 201110260388.5 uses infrared light-emitting diodes to make dot-matrix landmarks attached to the indoor ceiling; a wide-angle infrared camera fixed on the mobile robot photographs the infrared landmarks, and the robot's onboard computer analyzes the images to calculate the robot pose in real time. This method has certain limitations. First, its dot-matrix landmarks are active tags: each tag is a circuit board that needs a power supply, so the cost is high and installation and use are inconvenient. Second, its image processing uses an onboard industrial computer, which is bulky and expensive, and the method cannot be used at all on small and medium-sized robots that have no industrial computer.
Summary of the invention
The purpose of the utility model is precisely to address the above problems by providing a rapid and accurate positioning system for indoor mobile robots, together with its method of work. The system consists of a positioning sensor installed on the robot and a plurality of passive infrared-reflective tags stuck on the ceiling of the working region. The infrared emission module of the sensor emits infrared light that illuminates the tags on the ceiling; the miniature CMOS camera on the sensor captures the tag spot image, and a TMS320DM642 DSP chip processes the image to obtain position information such as the X coordinate, Y coordinate, heading angle and height of the sensor relative to the tag, realizing accurate positioning. Infrared light is used in order to effectively avoid the influence of visible light on the sensor and to improve the precision and robustness of the positioning.
For achieving the above object, the utility model adopts following technical scheme:
A rapid and accurate positioning system for an indoor mobile robot comprises a positioning sensor installed on the robot and a plurality of passive infrared-reflective tags stuck on the ceiling of the working region. The positioning sensor comprises an image processing chip connected respectively to a storage module, a CMOS camera, a data interface, a power supply and an infrared emission module; the infrared emission module comprises a plurality of infrared tubes arranged around the CMOS camera and divided into several groups. Each passive tag is an identification tag with 15 mark point positions where infrared-reflective material can be pasted. The mark points are divided into two types. The first type are direction points, which determine the directions of the coordinate axes; so that the direction is uniquely determined, direction points may occupy only three of the four corners of the tag, reflective material must not be pasted on the remaining corner, and every tag must contain these three direction points. The second type are coding points, i.e. the remaining mark points; each coding point represents one binary bit, and the combination of coding points determines the ID number of the tag. Infrared-reflective material is pasted on the direction points, and on all or part of the remaining coding points according to the coding requirements.
The data interface is a UART interface, and the storage module comprises SDRAM, FLASH and EEPROM.
There are 12 infrared tubes, divided into 3 groups of 4. All 3 groups are switched on when measurement begins; after a result is recorded one group is switched off immediately and the measurement is repeated, and if the precision is not affected another group is switched off in turn. The purpose is to use the minimum number of infrared tubes that leaves the precision unaffected, saving energy and reducing heat generation.
The work steps of this positioning system are:
1) as required, paste an appropriate number of tags on the ceiling of the robot's working region; the distance between tags must be greater than 1.5 meters, and the distance between the positioning sensor and a tag must be in the range of 1 meter to 2.5 meters;
2) install the positioning sensor on the robot, power it on and initialize it;
3) read the storage module and complete the configuration;
4) judge whether detection should begin; if not, continue to wait for a detection command; if so, go to step 5);
5) the CMOS camera receives the infrared spot image reflected back by the tags, and the image is preprocessed; then detect whether a valid tag exists in the field of view; if not, go to step 7); if more than one valid tag is in the field of view, select the optimum tag among them for reflective point identification; the three direction points in the tag and the tag coordinate system XOY are thereby determined, which yields the X and Y coordinates and heading angle of the robot under this tag coordinate system, the height between the positioning sensor and the ceiling carrying the tag, and at the same time the ID information of the tag;
6) upload the result to the robot's host computer for display and control;
7) the host computer determines whether to stop detection; if not, return to step 5); if so, end this detection process.
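A minimal control-flow sketch of the work steps above may help fix the loop structure. All names here (the sensor object and its helper methods, find_best_label, solve_pose) are hypothetical stand-ins for illustration, not the actual firmware interface:

```python
def positioning_loop(sensor):
    """Sketch of work steps 2)-7); every sensor method is an assumed helper."""
    sensor.power_on_and_init()                    # step 2)
    config = sensor.read_config_from_eeprom()     # step 3)
    while not sensor.detect_command_received():   # step 4): wait for command
        pass
    while True:
        frame = sensor.capture_infrared_image()   # step 5): tag spot image
        label = find_best_label(frame)            # optimum tag, if any in view
        if label is not None:
            # x, y, heading angle, height and tag ID under the tag frame XOY
            pose = solve_pose(frame, label, config)
            sensor.uart_send(pose)                # step 6): upload to the host
        if sensor.host_requests_stop():           # step 7)
            break
```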
In step 2), before the positioning sensor is put into use, the miniature CMOS camera must be calibrated to obtain its intrinsic and distortion parameters:
The calibration uses an algorithm based on a 2D planar calibration board, which assumes that the camera intrinsics stay constant throughout calibration: no matter from which angle the camera shoots the planar template, the intrinsic parameters are unchanged and only the external parameters vary. The basic steps are:
(1) print a standard chessboard as the calibration board and attach it to a rigid plane;
(2) shoot many calibration board images from different angles; more images make the calibration more accurate;
(3) detect the feature points on the calibration board and determine their image coordinates and actual coordinates;
(4) use the linear model to calculate the internal parameters of the camera;
(5) use the distortion model to optimize the camera intrinsics and obtain all parameters; once the intrinsics are determined the images can be rectified, which prepares for subsequent calculations.
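As a concrete illustration of steps (1)-(5), the sketch below uses OpenCV's chessboard-based calibration, which implements the same planar-template method; the board size (9 × 6 inner corners) and the image folder are assumptions for the example:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                      # assumed inner-corner count of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib/*.png"):            # step (2): many board shots
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)   # step (3)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Steps (4)-(5): linear intrinsics estimate plus distortion refinement.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
# K holds the intrinsics (focal lengths, principal point); dist holds the
# distortion coefficients used afterwards to rectify each captured frame.
```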
In step 5), the process of identifying valid tags and the optimum tag in the infrared image is: preprocess the acquired infrared image by first applying Gaussian smoothing, then choosing a suitable threshold for binarization to obtain a binary image; extract the contours in the binary image and remove overly large and overly small contours to reduce the interference of noise.
The positioning sensor locates the positions of the tags in the infrared image by a method combining the horizontal projection and the vertical projection of the binary image obtained above, and determines the current optimum tag with the nearest-neighbor rule.
Given a straight line, the binary image is divided into several parts by a cluster of equidistant lines perpendicular to that line, and the number of pixels with value 1 in each part is the projection of the binary image onto that part of the line. When the given line is horizontal or vertical, counting the pixels with value 1 in each column or each row of the binary image yields its horizontal and vertical projections; in some applications the projection can serve as a feature for object recognition. Figure 14a shows the binary image obtained after processing, and Figures 14b and 14c show the vertical and horizontal projections of the binary image respectively. The concrete classification proceeds in the following steps:
(1) traverse the obtained vertical projection image from left to right to obtain the spacing distances d₁, d₂, …, dₙ between adjacent projection pixel clusters; likewise traverse the horizontal projection image from top to bottom to obtain the spacing distances d′₁, d′₂, …, d′ₙ of its adjacent projection pixel clusters. Because the tags are spaced far apart, the projection pixel clusters of different tags are also separated by larger distances, so the spacing distance can serve as the basis for distinguishing tags.
(2) determine the projection region corresponding to each tag on the projection images. Take seven times the minimum value d of the cluster spacing distances of the vertical projection image and seven times the minimum value d′ of those of the horizontal projection image, i.e. 7d and 7d′, as the thresholds for nearest-neighbor classification. The concrete classification method: traverse the vertical projection image from left to right to find the first projection pixel cluster, and accumulate the distances from the remaining clusters to this cluster; clusters at distance less than 7d belong to the projection region of the same tag. Then take the next cluster at distance greater than 7d as the reference and continue the traversal to the right; clusters within 7d of it form the projection region of the second tag. Successive traversal yields the different regions of the different tags in the vertical projection image. For the horizontal projection image, traverse from top to bottom with 7d′ as the distance reference; the same method yields the corresponding regions of the tags on the horizontal projection image. As shown in Figures 14b and 14c, four projection regions A, B, C and D are obtained.
(3) find the tag areas on the original image: draw straight lines through the edges of each projection region determined in (2); the lines from the vertical projection regions and the lines from the horizontal projection regions intersect on the original image to give different rectangular areas, which are the areas where tags may exist. Figure 14d shows the four rectangular areas obtained from the intersecting lines; among them are unreasonable areas, i.e. two rectangular areas contain no tag. The next step introduces how to remove the unreasonable areas and obtain the valid tags.
(4) remove the unreasonable areas and obtain the valid tags. Step (3) yields the areas where tags may be located, but the invalid areas must be removed to obtain the valid tag areas. Two kinds of interference are mainly excluded: intersection rectangles that contain no tag, and tags at the image border that are not captured completely.
(5) step (3) gave the possible projection regions, and the edge lines of those regions determine the coordinate range of each possible tag area in the image. Whether a tag exists in an area is judged by the presence of reflective points in the detected area: if reflective points exist, a tag exists; if not, no tag exists. Because the probability of false detection is higher near the image border, tags close to the border must be discarded; whether a tag is at the image border is judged very simply from the coordinates of its area. What remains after eliminating the interference are the valid tags.
(6) if there is more than one valid tag in the image, the optimum tag must be selected. Figure 14d shows the image areas where tags were obtained. Solve for the two rectangle center coordinates a(x_a, y_a) and b(x_b, y_b) and the image center point coordinates o(x_o, y_o), and calculate the distances oa and ob:
$d_{oa} = \sqrt{(x_o - x_a)^2 + (y_o - y_a)^2}$

$d_{ob} = \sqrt{(x_o - x_b)^2 + (y_o - y_b)^2}$
The tag with the smaller distance is taken as the valid tag under this precondition; when there are more than two tags, the judgment is similar to the above.
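A compact sketch of the projection-based segmentation and the nearest-to-center choice is given below, assuming a 0/1 NumPy array as the binary spot image; the validity checks of steps (4)-(5) are omitted for brevity:

```python
import numpy as np

def label_regions(binary):
    """Split the binary spot image into candidate tag rectangles using the
    horizontal and vertical projections and the 7x-minimum-gap rule."""
    def clusters(proj):
        idx = np.flatnonzero(proj)
        gaps = np.diff(idx)
        gaps = gaps[gaps > 1]
        threshold = 7 * gaps.min() if gaps.size else 1    # 7d or 7d'
        splits = np.flatnonzero(np.diff(idx) > threshold)
        return np.split(idx, splits + 1)

    cols = clusters(binary.sum(axis=0))   # vertical projection clusters
    rows = clusters(binary.sum(axis=1))   # horizontal projection clusters
    # Step (3): intersect the row bands and column bands into rectangles.
    return [(r[0], r[-1], c[0], c[-1]) for r in rows for c in cols]

def best_label(regions, shape):
    """Step (6): keep the rectangle whose center is nearest the image center."""
    oy, ox = shape[0] / 2.0, shape[1] / 2.0
    return min(regions, key=lambda r: ((r[0] + r[1]) / 2 - oy) ** 2
                                    + ((r[2] + r[3]) / 2 - ox) ** 2)
```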
The tag reflective point identification process in step 5) is:
(1) first define a gradient: the gradient of a pixel along a certain direction is defined as the difference between the gray value of this pixel and that of the next pixel in that direction. Search from left to right and top to bottom; if at some pixel the rightward gradient is greater than a set threshold ε₁, this point is considered to be a point inside a mark region;
(2) taking this point as the starting point, search for the maximum gray value in its eight-neighborhood, finally finding the maximum gray value point in the mark region;
(3) taking the center point as the starting point, search in the four directions up, down, left and right; when the gray value of some pixel is less than a set value and its gradient is less than a set threshold, that point is considered a boundary point of the mark region;
(4) taking the points diagonally adjacent to the center as starting points, search horizontally and vertically until the mark region boundary points are found; the others are searched by analogy;
(5) the detected regions are not necessarily all mark regions, and interference regions must be removed: first calculate the mean value of all pixels in a mark region and exclude regions whose mean is too low, then exclude regions whose size and boundary length do not meet the thresholds;
All tag reflective points are determined through the above steps. A gray-level histogram is built, the valley between the gray peaks of the mark regions and the background region is chosen as a threshold, and this threshold is subtracted from the gray value of every image point to obtain a new image. When determining the center point, the intensity-weighted centroid method is selected: with the gray value as weight, the mean of the pixel coordinates of all points of a mark is computed, by the following formula:
$x_0 = \frac{\sum_{(i,j)\in S} i\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}, \qquad y_0 = \frac{\sum_{(i,j)\in S} j\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}$
where x₀ and y₀ are the pixel coordinates of the computed center point, (i, j) denotes a pixel in the image, i and j are its coordinate values on the x axis and y axis respectively, and $w_{i,j}$ is the gray value at pixel (i, j).
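The centroid formula above translates directly into a few lines of NumPy; the function below is a sketch operating on one marker's pixel mask:

```python
import numpy as np

def weighted_centroid(img, mark_mask):
    """Intensity-weighted centroid of one mark region: the gray values w_ij
    act as weights on the pixel coordinates, as in the formula above."""
    ys, xs = np.nonzero(mark_mask)
    w = img[ys, xs].astype(np.float64)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()   # (x0, y0)
```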
In step 5), the process of determining the three direction points in a tag is: calculate the lengths of the segments between every pair of mark points in the tag and choose the longest; extract the two mark points corresponding to the longest segment and label them A and B; calculate the midpoint of segment AB, then calculate the distances from every other point in the tag (except A and B) to this midpoint, choose the longest, and extract the corresponding mark point, labelled O. Then A, O and B are the three direction points.
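The direction-point rule (A and B span the longest pairwise segment; O is the point farthest from their midpoint) can be sketched as follows, taking the list of marker centroids as input:

```python
import itertools
import numpy as np

def direction_points(centers):
    """Return the direction points A, O, B from the mark point centroids."""
    pts = np.asarray(centers, dtype=np.float64)
    ia, ib = max(itertools.combinations(range(len(pts)), 2),
                 key=lambda p: np.linalg.norm(pts[p[0]] - pts[p[1]]))
    mid = (pts[ia] + pts[ib]) / 2                 # midpoint of the longest AB
    rest = [i for i in range(len(pts)) if i not in (ia, ib)]
    io = max(rest, key=lambda i: np.linalg.norm(pts[i] - mid))
    return pts[ia], pts[io], pts[ib]              # A, O, B
```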
The tag coordinate system is determined as follows: the point O is the origin of the tag coordinate system. Let the pixel coordinate of O in the image coordinate system be (x₁, y₁); choose either of A and B and rotate it 90 degrees clockwise about O. Let the pixel coordinate of the chosen point be (x₂, y₂) and that of the remaining point be (x₃, y₃); the pixel coordinate (x, y) of the corresponding point P after rotation is computed as:
$x = (x_2 - x_1)\cos\alpha - (y_2 - y_1)\sin\alpha + x_1$
$y = (y_2 - y_1)\cos\alpha + (x_2 - x_1)\sin\alpha + y_1$
where α is the rotation angle.
Then judge the sign of δ:

$\delta = (x - x_1)(x_3 - x_1) + (y - y_1)(y_3 - y_1)$

If δ > 0 the angle between the two vectors is acute, the chosen point (x₂, y₂) corresponds to the X axis and the point (x₃, y₃) corresponds to the Y axis; if δ < 0 the angle between the two vectors is obtuse, the chosen point (x₂, y₂) corresponds to the Y axis and (x₃, y₃) corresponds to the X axis. The correspondence thus determines the tag coordinate system XOY.
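A sketch of this axis test follows; note that the sense of a "clockwise" rotation depends on the image axis convention (here the Y axis points down, matching the image coordinate system used in this description):

```python
import numpy as np

def assign_axes(A, O, B):
    """Rotate A by 90 degrees clockwise about O, then use the sign of the
    dot product delta with OB to assign the X and Y axes (see above)."""
    ax, ay = A - O
    rotated = np.array([ay, -ax])       # OA rotated 90 deg clockwise about O
    delta = rotated @ (B - O)
    if delta > 0:                       # acute angle: rotated OA aligns with OB
        return "OA is X, OB is Y"
    return "OA is Y, OB is X"
```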
In step 5), the position acquisition process is: in the image obtained by the positioning sensor, the point R is the projected position of the positioning sensor in the image, namely the image center. The AOB coordinate system is set up in the image according to the rule for determining the tag coordinate system, with OA as the X-axis direction and OB as the Y-axis direction.
To solve for the coordinates of the positioning sensor under this tag, the method of two affine transformations is used; an affine transformation is a transformation between two plane coordinate systems.
The first is the transformation between the AOB coordinate system and the image coordinate system. The coordinate values of the three points A, O, B in the image coordinate system are determined from the centers of the extracted mark points, and their coordinate values in the AOB coordinate system are determined at the same time from the distances between the points; substituting the three coordinate pairs into the affine transformation formula determines the affine matrix and the translation matrix;
Suppose the affine matrix obtained is $\begin{pmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{pmatrix}$ and the translation matrix is $\begin{pmatrix} k_1 \\ k_2 \end{pmatrix}$, and that the image coordinate of R is (u₀, v₀); then the coordinate $\begin{pmatrix} x_r \\ y_r \end{pmatrix}$ of R in the AOB coordinate system is obtained as:

$\begin{pmatrix} x_r \\ y_r \end{pmatrix} = \begin{pmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{pmatrix} \begin{pmatrix} u_0 \\ v_0 \end{pmatrix} + \begin{pmatrix} k_1 \\ k_2 \end{pmatrix}$
Next, the relation between the AOB coordinate system and the tag coordinate system is found. Select the three direction points of the tag: since the tag size and the actual distances between the mark points are known, the coordinates of each point in the tag coordinate system can be obtained, and the coordinates of the three direction points in the AOB coordinate system can be obtained at the same time. Suppose the affine matrix obtained is $\begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}$; since the coordinates of the point O in the image and of the origin of the tag coordinate system are both (0, 0), the translation matrix is $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$. The coordinate of R in the AOB coordinate system, obtained in the image, is $\begin{pmatrix} x_r \\ y_r \end{pmatrix}$, and the transformed coordinate is $\begin{pmatrix} x'_r \\ y'_r \end{pmatrix}$; then:

$\begin{pmatrix} x'_r \\ y'_r \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} \begin{pmatrix} x_r \\ y_r \end{pmatrix}$
This determines the coordinate (x′_r, y′_r) of the locator under this tag. From (x′_r, y′_r) the angles between the locator and the X axis and the Y axis in the tag coordinate system can be obtained, and the heading angle is determined from them.
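The two affine transformations can each be solved from three point correspondences; the sketch below does this with a least-squares solve (exact for three non-collinear points). The point sets (pixel coordinates of A, O, B; their AOB-frame coordinates; their tag-frame coordinates) are inputs provided by the preceding steps:

```python
import numpy as np

def affine_from_points(src, dst):
    """Affine matrix and translation mapping three source points to three
    destination points; unique when the points are non-collinear."""
    A = np.hstack([np.asarray(src, float), np.ones((3, 1))])   # rows [x y 1]
    sol, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return sol[:2].T, sol[2]           # 2x2 affine matrix, translation vector

def locate(R_px, aob_px, aob_pts, tag_pts):
    """Map the image center R through image -> AOB -> tag coordinates."""
    N, k = affine_from_points(aob_px, aob_pts)    # first transformation
    r_aob = N @ np.asarray(R_px, float) + k
    M, _ = affine_from_points(aob_pts, tag_pts)   # second; translation is ~0
    return M @ r_aob                              # (x'_r, y'_r) under the tag
```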
In step 5), the process of determining the height between the positioning sensor and the ceiling carrying the tag is: according to the projection principle of the camera, let L be the length of the actual object and l its length on the image plane after projection, f the focal length and Z the distance between the object and the camera; then:
$\frac{l}{L} = \frac{f}{Z}$
so

$Z = \frac{fL}{l}$
where f is obtained from the camera calibration intrinsics, and L and l are obtained from the tag and the image respectively; the height between the positioning sensor and the ceiling carrying the tag is thus determined;
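The height computation is a one-liner; the sample numbers in the comment are illustrative only:

```python
def height_from_tag(f_px, L_m, l_px):
    """Similar-triangles height: Z = f*L/l, with f in pixels from calibration,
    L the real distance between two tag marks (meters) and l the distance
    between their image projections (pixels)."""
    return f_px * L_m / l_px

# e.g. a 0.15 m tag edge imaged over 90 px with f = 600 px gives Z = 1.0 m
```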
Tag ID identification: consider an arbitrary placement of a tag under the image coordinate system. The origin O′ of the image coordinate system X′O′Y′ is the top-left pixel of the image, with the X axis horizontal to the right and the Y axis vertical downward. The three points O, A, B form the coordinate system AOB of the tag in the image, with pixel coordinates (x_o, y_o), (x_a, y_a) and (x_b, y_b) respectively, where O is the origin of the tag coordinate system, OA points along the x axis and OB along the y axis. The points M and N are the two equal-division points between O and B; the point N is drawn as an empty circle, indicating that the tag carries no reflective material at that position. The reflective points other than the direction points are the coding points of the tag, used to determine its ID information; the positions of the coding points are determined by the following steps.
(1) Determining the positions of the coding points on the line OB
Determine the vector $\vec{OB} = (x_b - x_o, y_b - y_o)$, and compute the angle θ between $\vec{OB}$ and the vector from O to each of the remaining points other than O, A and B. For another vector, say $\vec{OF} = (x_f - x_o, y_f - y_o)$, the angle θ between $\vec{OF}$ and $\vec{OB}$ is:

$\theta = \arccos\left(\frac{(x_b - x_o)(x_f - x_o) + (y_b - y_o)(y_f - y_o)}{\sqrt{(x_b - x_o)^2 + (y_b - y_o)^2}\,\sqrt{(x_f - x_o)^2 + (y_f - y_o)^2}}\right)$
Choose a decision threshold: when the angle θ is less than this threshold, the point is considered to lie on the line OB; at this point the point M is judged to be on the line OB.
The exact position of M is determined from the relation between the vector lengths. Let $\vec{OM} = (x_m - x_o, y_m - y_o)$; its length is:

$|\vec{OM}| = \sqrt{(x_m - x_o)^2 + (y_m - y_o)^2}$

The ratio of the lengths of $\vec{OM}$ and $\vec{OB}$ then determines the exact position of the point M.
The point N carries no reflective material, but its coordinates are needed when distinguishing the other reflective points, so they must still be obtained. Set $\vec{ON} = (x_n - x_o, y_n - y_o)$ with $\vec{ON} = \frac{2}{3}\vec{OB}$; then:

$x_n - x_o = \frac{2}{3}(x_b - x_o), \qquad y_n - y_o = \frac{2}{3}(y_b - y_o)$
This yields the coordinates (x_n, y_n) of N; if the point M carries no reflective material either, its coordinates are determined by the same method.
(2) Determining the positions of the coding points on the line OA
Compute $\vec{OA}$ and, following the method in (1), compute the angle between $\vec{OA}$ and each of the remaining vectors; the point D can thus be judged to be on the line OA, and its exact position is determined from the vector length relation.
(3) Determining the positions of the remaining coding points
The positions of the remaining reflective points are determined similarly; only the choice of vectors differs. Take the reflective point positions on the line L2 as an example: for the tag shown in Figure 13, take the point M as the vector starting point and form the four vectors whose end points are the points whose positions are not yet determined, such as $\vec{ME}$. Compute the angle between each of these vectors and the reference vector along L2; by thresholding, the point E is judged to be a point on L2, and the relation between the vector lengths then gives the position of E. The points on the lines L3 and L4 and their positions are obtained in the same way.
(4) Determining the tag ID
The different positions of the coding points on the tag correspond to the different bits of a binary number. Traversing the 12 points along the x-axis direction of the coordinate system formed by the tag, they correspond in turn to bit 1, bit 2, …, bit 12 of the binary number; the reflective point positions obtained above uniquely determine a binary number, which is precisely the ID of this tag.
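Once each of the 12 coding positions is classified as reflective or empty, the ID assembles as a 12-bit number; a sketch follows, with the bit endianness an assumption since the description fixes only the traversal order:

```python
def decode_tag_id(coded_bits):
    """coded_bits[i] is True when reflective material sits at coding
    position i, traversed along the tag's x-axis direction."""
    assert len(coded_bits) == 12
    return sum(1 << i for i, lit in enumerate(coded_bits) if lit)

# e.g. material at positions 0, 2 and 3 gives ID 0b1101 = 13
```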
The utility model relates to a sophisticated sensor capable of quick and accurate indoor positioning of mobile robots, and to its method of work. A mobile robot carrying this sensor can realize accurate indoor positioning, providing the basic conditions for correct robot navigation; the system has very high application value and economic benefit.
Description of drawings
Fig. 1 is the overall system block diagram of the positioning sensor;
Fig. 2 is a layout schematic diagram of the infrared emitting tubes;
Fig. 3 is a schematic diagram of the tag;
Fig. 4 shows the image coordinate system;
Fig. 5 shows the camera coordinate system and the world coordinate system;
Fig. 6 shows a mark point image and its gray value distribution;
Fig. 7 is a center point search diagram;
Fig. 8 is a boundary search diagram;
Fig. 9 is a schematic diagram of direction point identification;
Fig. 10 is a schematic diagram of determining the tag coordinate axes;
Fig. 11 is a schematic diagram of determining the position information;
Fig. 12 is a diagram of the projection principle;
Fig. 13 is the distribution map of a tag under the image coordinate system;
Figs. 14a-d show the tag infrared image processing;
Fig. 15 is the sensor work flow chart.
Wherein: 1. image processing chip; 2. CMOS camera; 3. UART interface; 4. SDRAM; 5. FLASH; 6. EEPROM; 7. infrared tube; 8. power supply.
The specific embodiment
The utility model is described further below in conjunction with the accompanying drawings and embodiments.
In Fig. 1, the sensor comprises an image processing chip 1 connected respectively to the storage module, the CMOS camera 2, the data interface, the power supply 8 and the infrared emission module; the infrared emission module comprises a plurality of infrared tubes 7 arranged around the CMOS camera 2 and divided into several groups. The data interface is the UART interface 3, and the storage module consists of SDRAM 4, FLASH 5 and EEPROM 6. There are 12 infrared tubes, divided into 3 groups of 4; all 3 groups are switched on when measurement begins, one group is switched off immediately after a result is recorded and the measurement repeated, and if the precision is not affected another group is switched off in turn, the purpose being to use the minimum number of infrared tubes 7 that leaves the precision unaffected.
1 Tag spot image processing
The image processing chip uses the TMS320DM642 DSP processor, whose main frequency ranges from 480 MHz to 720 MHz; at a main frequency of 600 MHz its processing capability reaches 4800 MIPS. The DM642 provides 3 dedicated configurable video interfaces, VideoPort (VP), which greatly facilitate the acquisition and processing of video data. Fig. 1 shows the overall block diagram of the positioning sensor: the TMS320DM642 extends SDRAM, FLASH and EEPROM through the EMIF interface, where the EEPROM is used to store and read the sensor configuration information; the image sensor data converted by the AD stage is read through the Video Port; after the TMS320DM642 runs the algorithms on the image data, the positioning information (spatial coordinate values and heading angle) is obtained and sent to the robot controller through the UART serial data port.
2 Infrared emission module
The infrared emission module consists of 12 infrared tubes and a control circuit. To obtain the best image quality and positioning accuracy, the optimal layout of the infrared tubes is shown in Fig. 2, where the red parts are the infrared emitting tubes and the black-and-white part is the CMOS camera. The infrared tubes are laid around the CMOS camera; although their number does not affect the positioning accuracy, too few tubes narrow the acquisition range of the positioning sensor, while too many increase the power consumption and heat the sensor. To resolve this contradiction, grouped control of the infrared tubes is adopted in practice: every 4 infrared tubes form one group, giving 3 groups; all 3 groups are switched on when measurement begins, one group is switched off immediately after a result is recorded and the measurement repeated, and if the precision is not affected another group is switched off in turn, the purpose being to use the minimum number of infrared tubes that leaves the precision unaffected, as sketched below.
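A sketch of this grouped power-down strategy follows; the sensor methods are assumed stand-ins, and at least one group is always kept lit:

```python
def trim_ir_groups(sensor, groups=(0, 1, 2)):
    """Switch off tube groups one at a time while accuracy holds."""
    active = list(groups)
    baseline = sensor.measure()            # all 12 tubes lit
    for g in list(active)[:-1]:            # always leave one group on
        sensor.group_off(g)
        if sensor.accuracy_degraded(baseline):
            sensor.group_on(g)             # this group is needed; restore it
            break
        active.remove(g)                   # accuracy held: keep it off
    return active                          # minimal set of lit groups
```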
3 Image sensor
The image sensor uses a CMOS photosensitive array; owing to the size restriction of the sensor, the design integrates the image sensor on the main board.
4 Power supply
The sensor power input is DC 8.4 V to 12 V; the 1.4 V, 3.3 V and 5 V supplies of the three different voltage levels that the system needs are produced by DC-DC regulator chips. Because the infrared emission module consumes the most power, a high-current switching regulator chip, the LM2576, is used.
5 Communication interface
As the data output interface of the sensor, the communication interface must satisfy the interface needs of common robot controller computers. The utility model uses a UART interface; the terminals are a 9-pin D-type serial connector and a 3-pin 2 mm-pitch terminal, outputting RS-232 levels and TTL levels simultaneously to satisfy host computers of either type. In addition, the UART interface can also be connected to a display device for local display of the data.
Fig. 3 is a schematic diagram of the infrared-reflective tag needed by the utility model. Its size is 15 cm × 15 cm, and the diameter of the 15 dots on it is 15 mm. The three white dots in the figure are pasted with infrared-reflective material and are used for sensor positioning; all tags must have these three dots. The remaining 12 dots construct the tag's ID attribute by pasting different numbers of infrared-reflective patches at different positions; the ID number is calculated by adding the values shown in Fig. 3. The attribute information can be set as needed.
The tag recognition principle
1 Camera calibration
The ideal camera model is the pinhole model, but a real pinhole cannot provide enough light for instantaneous exposure, so eyes and cameras alike collect more light with a lens rather than through a single point. Using a lens model, however, deviates from the simple pinhole geometric model and introduces effects such as lens distortion. To obtain an ideal image, the image must be rectified, which requires knowing the distortion coefficients of the camera; meanwhile, to compute the distance between the camera and a tag, the intrinsic focal length f must also be known. So before the sensor is put into use, the camera must be calibrated to obtain the intrinsic and distortion parameters.
First, three coordinate systems must be set up: the image coordinate system, the camera coordinate system and the world coordinate system. A digital image is stored in the computer as an M × N two-dimensional array, and each element of the M-row, N-column image is called a pixel. A rectangular coordinate system UOV is defined on the image; the coordinates (u, v) of each pixel are the row and column numbers of that pixel in the image, and this coordinate system is called the image coordinate system, as shown in Fig. 4. At the same time an image coordinate system X₁O₁Y₁ expressed in physical units must be set up; its origin is the intersection O₁ of the camera optical axis with the image plane, and the X₁ and Y₁ axes are parallel to the U and V axes respectively.
Let the coordinates of the origin O₁ in the U, V coordinate system be (u₀, v₀), and let the physical size of each pixel in the X-axis and Y-axis directions be dx and dy; then the relation between the two coordinate systems for any pixel in the image is:
$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0$
With homogeneous coordinates and matrices this is expressed as:
$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$
The geometric relation of camera imaging is shown in Fig. 5, where O_cX_cY_cZ_c is the camera coordinate system, O_cO₁ is the focal length of the camera, O_c is the camera optical center, the X_c and Y_c axes are parallel to the image X₁ and Y₁ axes respectively, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane.
The world coordinate system is a reference for describing positions in the environment where the camera is installed; the relation between the camera coordinate system and the world coordinate system is described by the external parameters. Because this relation is used only during calibration, the system does not involve the external parameters in actual use. If the homogeneous coordinates of a spatial point p under the world coordinate system and the camera coordinate system are (x_w, y_w, z_w, 1) and (x_c, y_c, z_c, 1) respectively, the following relation holds:
$\begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix} = \begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} = M_2 \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}$
According to the ideal pinhole imaging model, the relation between any point P in space and its projected position p on the image is:
$x = \frac{f x_c}{z_c}, \qquad y = \frac{f y_c}{z_c}$
where (x, y) are the physical image coordinates of the point p and (x_c, y_c, z_c) are its coordinates under the camera coordinate system. Combining and simplifying the formulas above gives:
$z_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} = \begin{pmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} R & T \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}$
where a_x, a_y, u₀, v₀ are related only to the internal structure of the camera and are called the internal parameters.
In fact, real imaging is not ideal: it carries distortion of varying degree, and the ideal imaging point is offset by the distortion. Let the ideal image point be (x_u, y_u) and the distorted image point be (x_d, y_d); the distortion model can be described by:

$x_u = x_d + \delta_x(x_d, y_d), \qquad y_u = y_d + \delta_y(x_d, y_d)$
where δ_x and δ_y are the nonlinear distortion values, which depend on the position of the image point in the image. The first component is radial distortion, whose mathematical model is as follows:
$\delta_x(x_d, y_d) = x_d (k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots), \qquad \delta_y(x_d, y_d) = y_d (k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6 + \cdots)$
where $r_d^2 = x_d^2 + y_d^2$, and k₁, k₂, k₃ are the radial distortion coefficients; taking only the first two radial orders generally meets the requirements.
Decentering distortion arises because the optical axis centers of the lenses in the camera optical system are not strictly collinear; it comprises both radial and tangential components and can be represented by the following model:
$\delta_x(x_d, y_d) = p_1 (3 x_d^2 + y_d^2) + 2 p_2 x_d y_d, \qquad \delta_y(x_d, y_d) = p_2 (3 y_d^2 + x_d^2) + 2 p_1 x_d y_d$
where p₁ and p₂ are the decentering distortion coefficients.
A further kind of distortion, caused by imperfect lens design, manufacture or assembly, can be represented by the following model:
$\delta_x(x_d, y_d) = s_1 (x_d^2 + y_d^2), \qquad \delta_y(x_d, y_d) = s_2 (x_d^2 + y_d^2)$
where s₁ and s₂ are the thin prism distortion coefficients.
Summarizing the above, the complete distortion model is:

$\delta_x(x_d, y_d) = x_d (k_1 r_d^2 + k_2 r_d^4) + p_1 (3 x_d^2 + y_d^2) + 2 p_2 x_d y_d + s_1 (x_d^2 + y_d^2)$
$\delta_y(x_d, y_d) = y_d (k_1 r_d^2 + k_2 r_d^4) + p_2 (3 y_d^2 + x_d^2) + 2 p_1 x_d y_d + s_2 (x_d^2 + y_d^2)$
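The complete model transcribes directly into code; the sketch below applies it to a distorted point to recover the ideal point:

```python
def undistort_point(xd, yd, k1, k2, p1, p2, s1, s2):
    """Complete distortion model above: two radial terms, decentering terms
    and thin prism terms; returns the ideal point (x_u, y_u)."""
    r2 = xd * xd + yd * yd
    radial = k1 * r2 + k2 * r2 * r2
    dx = xd * radial + p1 * (3 * xd * xd + yd * yd) + 2 * p2 * xd * yd + s1 * r2
    dy = yd * radial + p2 * (3 * yd * yd + xd * xd) + 2 * p1 * xd * yd + s2 * r2
    return xd + dx, yd + dy
```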
The camera calibration in this system uses the calibration algorithm based on a 2D planar calibration board proposed by Zhang Zhengyou, which assumes that the camera intrinsics stay constant throughout calibration: no matter from which angle the camera shoots the planar template, the intrinsic parameters are unchanged and only the external parameters vary. The basic steps are:
(1) print a standard chessboard as the calibration board and attach it to a rigid plane.
(2) shoot many calibration board images from different angles; more images make the calibration more accurate.
(3) detect the feature points on the calibration board and determine their image coordinates and actual coordinates.
(4) use the linear model to calculate the internal parameters of the camera.
(5) use the distortion model to optimize the camera intrinsics and obtain all parameters.
Once the intrinsics are determined the images can be rectified, preparing for subsequent calculations.
2 Confirming valid tags and the optimum tag
In step 5), the process of identifying valid tags and the optimum tag in the infrared image is: preprocess the acquired infrared image by first applying Gaussian smoothing, then choosing a suitable threshold for binarization to obtain a binary image; extract the contours in the binary image and remove overly large and overly small contours to reduce the interference of noise.
The positioning sensor locates the positions of the tags in the infrared image by a method combining the horizontal projection and the vertical projection of the binary image obtained above, and determines the current optimum tag with the nearest-neighbor rule.
Given a straight line, the binary image is divided into several parts by a cluster of equidistant lines perpendicular to that line, and the number of pixels with value 1 in each part is the projection of the binary image onto that part of the line. When the given line is horizontal or vertical, counting the pixels with value 1 in each column or each row of the binary image yields its horizontal and vertical projections; in some applications the projection can serve as a feature for object recognition. Figure 14a shows the binary image obtained after processing, and Figures 14b and 14c show the vertical and horizontal projections of the binary image respectively. The concrete classification proceeds in the following steps:
(1) traverse the obtained vertical projection image from left to right to obtain the spacing distances d₁, d₂, …, dₙ between adjacent projection pixel clusters; likewise traverse the horizontal projection image from top to bottom to obtain the spacing distances d′₁, d′₂, …, d′ₙ of its adjacent projection pixel clusters. Because the tags are spaced far apart, the projection pixel clusters of different tags are also separated by larger distances, so the spacing distance can serve as the basis for distinguishing tags.
(2) determine the projection region corresponding to each tag on the projection images. Take seven times the minimum value d of the cluster spacing distances of the vertical projection image and seven times the minimum value d′ of those of the horizontal projection image, i.e. 7d and 7d′, as the thresholds for nearest-neighbor classification. The concrete classification method: traverse the vertical projection image from left to right to find the first projection pixel cluster, and accumulate the distances from the remaining clusters to this cluster; clusters at distance less than 7d belong to the projection region of the same tag. Then take the next cluster at distance greater than 7d as the reference and continue the traversal to the right; clusters within 7d of it form the projection region of the second tag. Successive traversal yields the different regions of the different tags in the vertical projection image. For the horizontal projection image, traverse from top to bottom with 7d′ as the distance reference; the same method yields the corresponding regions of the tags on the horizontal projection image. As shown in Figures 14b and 14c, four projection regions A, B, C and D are obtained.
(3) find the tag areas on the original image: draw straight lines through the edges of each projection region determined in (2); the lines from the vertical projection regions and the lines from the horizontal projection regions intersect on the original image to give different rectangular areas, which are the areas where tags may exist. Figure 14d shows the four rectangular areas obtained from the intersecting lines; among them are unreasonable areas, i.e. two rectangular areas contain no tag. The next step introduces how to remove the unreasonable areas and obtain the valid tags.
(4) remove the unreasonable areas and obtain the valid tags. Step (3) yields the areas where tags may be located, but the invalid areas must be removed to obtain the valid tag areas. Two kinds of interference are mainly excluded: intersection rectangles that contain no tag, and tags at the image border that are not captured completely.
(5) step (3) gave the possible projection regions, and the edge lines of those regions determine the coordinate range of each possible tag area in the image. Whether a tag exists in an area is judged by the presence of reflective points in the detected area: if reflective points exist, a tag exists; if not, no tag exists. Because the probability of false detection is higher near the image border, tags close to the border must be discarded; whether a tag is at the image border is judged very simply from the coordinates of its area. What remains after eliminating the interference are the valid tags.
(6) if there is more than one valid tag in the image, the optimum tag must be selected. Figure 14d shows the image areas where tags were obtained. Solve for the two rectangle center coordinates a(x_a, y_a) and b(x_b, y_b) and the image center point coordinates o(x_o, y_o), and calculate the distances oa and ob:
$d_{oa} = \sqrt{(x_o - x_a)^2 + (y_o - y_a)^2}$

$d_{ob} = \sqrt{(x_o - x_b)^2 + (y_o - y_b)^2}$
The tag with the smaller distance is taken as the valid tag under this precondition; when there are more than two tags, the judgment is similar to the above.
3 Tag reflective point identification
The reflective points are made with retroreflective material, which reflects incident light back toward the light source; under near-axis light source illumination they form a photographic image with sharp gray contrast, an "almost binary" image, which makes them especially suitable as high-accuracy feature points for photogrammetry.
Because of camera imaging and other factors, the mark points actually detected are not perfect circles, so this system uses a pixel-search-based method to identify the mark regions. Taking Fig. 6 as an example, the identification process is as follows:
(1) for convenience of calculation, first define a gradient: the gradient of a pixel along a certain direction is the difference between the gray value of this pixel and that of the next pixel in that direction. Search from left to right and top to bottom; if at some pixel the rightward gradient is greater than a set threshold ε₁, this point is considered to be a point inside a mark region.
(2) taking this point as the starting point, search for the maximum gray value in its eight-neighborhood, finally finding the maximum gray value point in the mark region. If ε₁ in the figure is set to 10, the starting point is (5, 8), i.e. the point whose gray value is 7.
(3) taking the center point as the starting point, search in the four directions up, down, left and right; when the gray value of some pixel is less than a set value and its gradient is less than a set threshold, that point is considered a boundary point of the mark region.
(4) taking the points diagonally adjacent to the center as starting points, search horizontally and vertically until the mark region boundary points are found; the others are searched by analogy, as shown in Fig. 7 and Fig. 8.
(5) the detected regions are not necessarily all mark regions, and interference regions must be removed: first calculate the mean value of all pixels in a mark region and exclude regions whose mean is too low, then exclude regions whose size and boundary length do not meet the thresholds. Because a mark region is circular and its imaging, though possibly somewhat deformed, is not affected too much, a mark region whose x-direction and y-direction dimensions differ greatly should also be excluded.
The above steps essentially determine all the mark points within the field of view, but at imaging time the background is not ideally black, so its gray value is not zero; to make the computed mark centers more accurate, the background interference must be excluded. This system builds a gray-level histogram, chooses the valley between the gray peaks of the mark regions and the background region as a threshold, and subtracts this threshold from the gray value of every image point to obtain a new image.
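A sketch of the valley-threshold subtraction follows; the simple two-peak search (with an assumed minimum peak separation of 30 gray levels) stands in for whatever peak detection the implementation actually uses:

```python
import numpy as np

def subtract_background(img):
    """Gray-level histogram valley threshold, subtracted from every pixel."""
    hist = np.bincount(img.ravel(), minlength=256)
    p1 = int(np.argmax(hist))                          # background peak
    far = np.where(np.abs(np.arange(256) - p1) > 30, hist, 0)
    p2 = int(np.argmax(far))                           # mark region peak
    lo, hi = sorted((p1, p2))
    valley = lo + int(np.argmin(hist[lo:hi + 1]))      # lowest point between
    return np.clip(img.astype(np.int32) - valley, 0, 255).astype(np.uint8)
```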
When determining the center point, the intensity-weighted centroid method is selected: with the gray value as weight, the mean of the pixel coordinates of all points of a mark is computed, by the following formula:
$x_0 = \frac{\sum_{(i,j)\in S} i\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}, \qquad y_0 = \frac{\sum_{(i,j)\in S} j\, w_{i,j}}{\sum_{(i,j)\in S} w_{i,j}}$
where x₀ and y₀ are the pixel coordinates of the computed center point, (i, j) denotes a pixel in the image, i and j are its coordinate values on the x axis and y axis respectively, and $w_{i,j}$ is the gray value at pixel (i, j).
4 Determining the three direction points and the coordinate system
Within a tag the mark points are divided into two types. The first type are the direction points, which determine the directions of the tag coordinate axes; so that the direction is uniquely determined, only three of the four corners of the tag carry mark points. The second type are the coding points, used to determine the number of the tag.
As shown in Fig. 9, the center of each mark point was found in the previous section; the next step is to distinguish direction points from coding points. First calculate the lengths between every pair of mark points in the tag and choose the longest; extract the two mark points corresponding to the longest segment and label them A and B; calculate the midpoint of segment AB, then calculate the distances from every other point in the tag (except A and B) to this midpoint, choose the longest, and extract the corresponding mark point, labelled O. Then A, O and B are the three direction points.
The tag coordinate system is determined as follows: the point O is the origin of the tag coordinate system. Let the pixel coordinate of O in the image coordinate system be (x₁, y₁); choose either of A and B and rotate it 90 degrees clockwise about O. Let the pixel coordinate of the chosen point be (x₂, y₂) and that of the remaining point be (x₃, y₃); the pixel coordinate (x, y) of the corresponding point P after rotation is computed as:
$x = (x_2 - x_1)\cos\alpha - (y_2 - y_1)\sin\alpha + x_1$
$y = (y_2 - y_1)\cos\alpha + (x_2 - x_1)\sin\alpha + y_1$
where α is the rotation angle.
Then judge the sign of δ:

$\delta = (x - x_1)(x_3 - x_1) + (y - y_1)(y_3 - y_1)$

If δ > 0 the angle between the two vectors is acute, the chosen point (x₂, y₂) corresponds to the X axis and the point (x₃, y₃) corresponds to the Y axis; if δ < 0 the angle between the two vectors is obtuse, the chosen point (x₂, y₂) corresponds to the Y axis and (x₃, y₃) corresponds to the X axis. The correspondence thus determines the tag coordinate system XOY.
5 Position acquisition
The position acquisition process is: in the image obtained by the positioning sensor, the point R is the projected position of the positioning sensor in the image, namely the image center. The AOB coordinate system is set up in the image according to the rule for determining the tag coordinate system, with OA as the X-axis direction and OB as the Y-axis direction.
To solve for the coordinates of the locator under this tag, the method of two affine transformations is used; an affine transformation is a transformation between two plane coordinate systems. The basic principle of the affine transformation is: suppose P₁, P₂, P₃ are any three non-collinear points in the plane and P′₁, P′₂, P′₃ are also three non-collinear points in the plane; then there exists one and only one affine transformation T such that T(Pᵢ) = P′ᵢ (i = 1, 2, 3). It can be expressed by the following formula:

$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$

where $\begin{pmatrix} x \\ y \end{pmatrix}$ and $\begin{pmatrix} x' \\ y' \end{pmatrix}$ are the coordinates before and after the transformation, $\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ is the affine matrix, producing rotation and similar transformations, and $\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$ is the translation matrix. An affine transformation is uniquely determined by its affine matrix and translation matrix.
The first is the transformation between the AOB coordinate system and the image coordinate system. The coordinate values of the three points A, O, B in the image coordinate system can be determined, and their coordinate values in the AOB coordinate system can be determined at the same time; substituting the three coordinate pairs into the affine transformation formula determines the affine matrix and the translation matrix.
Suppose the affine matrix obtained is $\begin{pmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{pmatrix}$ and the translation matrix is $\begin{pmatrix} k_1 \\ k_2 \end{pmatrix}$, and that the image coordinate of R is known to be (u₀, v₀); then the coordinate $\begin{pmatrix} x_r \\ y_r \end{pmatrix}$ of R in the AOB coordinate system can be obtained:

$\begin{pmatrix} x_r \\ y_r \end{pmatrix} = \begin{pmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{pmatrix} \begin{pmatrix} u_0 \\ v_0 \end{pmatrix} + \begin{pmatrix} k_1 \\ k_2 \end{pmatrix}$
Next, the relation between the AOB coordinate system and the tag coordinate system is found; both coordinate systems are known. Suppose the affine matrix obtained is $\begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}$; since the coordinates of the point O in the image and of the origin of the tag coordinate system are both (0, 0), the translation matrix is $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$. The coordinate of R in the AOB coordinate system, obtained in the image, is $\begin{pmatrix} x_r \\ y_r \end{pmatrix}$, and the transformed coordinate is $\begin{pmatrix} x'_r \\ y'_r \end{pmatrix}$; then:

$\begin{pmatrix} x'_r \\ y'_r \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} \begin{pmatrix} x_r \\ y_r \end{pmatrix}$
This determines the coordinate (x′_r, y′_r) of the locator under this tag. From (x′_r, y′_r) the angles between the locator and the X axis and the Y axis in the tag coordinate system can be obtained, and the heading angle is determined from them.
6 Determining the vertical distance between the locator and the ceiling
Fig. 12 shows the projection principle of the camera. The process of determining the height between the positioning sensor and the ceiling carrying the tag is: according to the projection principle of the camera, let L be the length of the actual object and l its length on the image plane after projection, f the focal length and Z the distance between the object and the camera; then:
$\frac{l}{L} = \frac{f}{Z}$
so

$Z = \frac{fL}{l}$
where f is obtained from the camera calibration intrinsics, and L and l are obtained from the tag and the image respectively; the height between the positioning sensor and the ceiling carrying the tag is thus determined.
7 Tag ID number identification
This part describes how vector operations under the plane coordinate system are used to determine, quickly and accurately, the position distribution of the reflective points so as to confirm the ID information of the tag.
Figure 13 shows an arbitrary distribution of one tag in the image coordinate system. The origin O' of the image coordinate system X'O'Y' is the top-left pixel of the image, with the X-axis pointing horizontally to the right and the Y-axis pointing vertically downward. The three points O, A, B form the tag's coordinate system AOB in the image; their pixel coordinates are (x_o, y_o), (x_a, y_a) and (x_b, y_b) respectively, where O is the origin of the tag coordinate system, OA points along the x-axis and OB along the y-axis. Points M and N are the two trisection points between point O and point B; point N is drawn as a hollow circle, indicating that no reflective material is pasted at that position on this tag. The reflective spots other than the direction points are the coding points of the identification tag and are used to determine its ID information. The positions of the coding points are determined by the following steps.
(1) Determine the positions of the coding points on line OB
Determine the vector $\vec{OB} = (x_b - x_o,\, y_b - y_o)$. For each of the remaining points other than O, A, B, form the vector from point O to that point and compute its angle θ with $\vec{OB}$. For example, for another vector $\vec{OF} = (x_f - x_o,\, y_f - y_o)$, the angle θ between $\vec{OB}$ and $\vec{OF}$ is:

$$\theta = \arccos\left(\frac{(x_b - x_o)(x_f - x_o) + (y_b - y_o)(y_f - y_o)}{\sqrt{(x_b - x_o)^2 + (y_b - y_o)^2}\,\sqrt{(x_f - x_o)^2 + (y_f - y_o)^2}}\right)$$
Choose a decision threshold: when the angle θ is smaller than this threshold, the point is taken to lie on line OB. In this way, point M is judged to be on line OB.
The exact position of point M is then determined from the relation between vector lengths. Let $\vec{OM} = (x_m - x_o,\, y_m - y_o)$; its length is:

$$|\vec{OM}| = \sqrt{(x_m - x_o)^2 + (y_m - y_o)^2}$$
The ratio of the lengths of $\vec{OM}$ and $\vec{OB}$ then determines the exact position of point M.
Point N carries no reflective material, but its coordinates are needed when distinguishing the other light spots, so they must still be obtained. Set $\vec{ON} = (x_n - x_o,\, y_n - y_o)$ with $\vec{ON} = \frac{2}{3}\vec{OB}$; then:

$$x_n - x_o = \tfrac{2}{3}(x_b - x_o), \qquad y_n - y_o = \tfrac{2}{3}(y_b - y_o)$$
This gives the coordinates (x_n, y_n) of point N. If point M carries no reflective material either, its coordinates are determined by the same method.
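A minimal sketch of this collinearity test and length-ratio positioning, assuming NumPy and made-up pixel coordinates; the 0.05 rad threshold is an assumption, since the patent does not specify a value.

```python
import numpy as np

def angle_between(v1: np.ndarray, v2: np.ndarray) -> float:
    """Angle in radians between two 2-D vectors (the arccos formula above)."""
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def points_on_axis(origin, axis_end, candidates, thresh_rad=0.05):
    """Keep candidate spots whose vector from the origin is (nearly)
    parallel to origin->axis_end; return each with its length ratio,
    which locates the spot along the axis (e.g. ~1/3 or ~2/3 for M, N)."""
    o = np.asarray(origin, float)
    axis = np.asarray(axis_end, float) - o
    hits = []
    for p in candidates:
        v = np.asarray(p, float) - o
        if angle_between(v, axis) < thresh_rad:
            hits.append((p, np.linalg.norm(v) / np.linalg.norm(axis)))
    return hits

# Hypothetical pixel coordinates: O, B and three detected spots.
O, B = (100, 100), (100, 220)
spots = [(101, 141), (160, 140), (99, 180)]
print(points_on_axis(O, B, spots))  # 1st and 3rd lie on OB, near 1/3 and 2/3
```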
(2) Determine the positions of the coding points on line OA
Compute $\vec{OA} = (x_a - x_o,\, y_a - y_o)$. Following the method described in (1), compute the angle between $\vec{OA}$ and the vector from O to each remaining point; point D is thereby judged to be a point on line OA, and the vector length relation determines the position of point D.
(3) Determine the positions of the remaining light spots
The positions of the remaining light spots are determined in a similar way; only the choice of vectors differs. Take the light spots on line L2 as an example. For the tag shown in Figure 13, take point M as the starting point and each point whose position has not yet been determined as an end point, giving four vectors, such as $\vec{ME}$. Compute the angle between each of these vectors and the direction of line L2 (parallel to $\vec{OA}$); the threshold test shows that point E is a point on L2, and the length relation between $\vec{ME}$ and the full row vector gives the position of E. The light spots on lines L3 and L4 and their positions are found in the same way.
(4) Determine the tag ID
Each coding-point position on the tag corresponds to one bit of a binary number. Traversing the 12 points along the x-axis direction of the coordinate system formed by the tag, they correspond in turn to bit 1, bit 2, ..., bit 12 of the binary number. The light-spot positions obtained thus determine a unique binary number, which is precisely the ID of the tag. For the tag shown in Figure 13, the corresponding binary number is 101010010101, so the tag's ID number is 2709.
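A small sketch of this final decoding step. Reading bit 1 as the most significant bit is consistent with the 2709 example above (0b101010010101 = 2709), but is otherwise an assumption.

```python
def tag_id(bits: list[int]) -> int:
    """Read the 12 coding positions (1 = reflective spot present,
    0 = absent) in x-axis order as one binary number, MSB first."""
    assert len(bits) == 12
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

# The example from Figure 13: 101010010101 -> 2709.
print(tag_id([1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1]))  # 2709
```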
Workflow of the utility model:
When a robot fitted with this alignment sensor moves in the working region, the sensor's infrared emission module emits infrared light that illuminates the tags on the ceiling; the miniature CMOS camera on the sensor captures the tag light-spot image, and the TMS320DM642 DSP chip processes the image with the algorithms above to obtain the sensor's position information relative to the tag: X coordinate, Y coordinate, heading angle and height. The workflow is shown in Figure 15.

Claims (2)

1. A rapid and accurate positioning system for an indoor mobile robot, characterized in that it comprises an alignment sensor installed on the robot and a plurality of passive infrared-reflecting tags pasted on the ceiling of the working region; wherein the alignment sensor comprises an image processing chip connected respectively to a storage module, a CMOS camera, a data interface, a power supply and an infrared emission module, the infrared emission module comprising a plurality of infrared tubes arranged around the CMOS camera and divided into several groups; the passive tags are identification tags, each carrying a plurality of mark points of two kinds: the first kind are direction points, which determine the directions of the coordinate axes, and to make the direction unique, direction mark points are placed at only three of the four corners of the identification tag; the second kind are coding points, namely the remaining mark points, whose combination determines the ID number of the tag; infrared-reflecting material is pasted on the direction points, while on the remaining coding points infrared-reflecting material is pasted in whole or in part according to the coding requirements.
2. The rapid and accurate positioning system for an indoor mobile robot according to claim 1, characterized in that there are 12 infrared tubes, divided into 3 groups of 4 each; all 3 groups are switched on when measurement begins, one group is switched off immediately after a result is recorded, and the measurement is repeated; if the measurement accuracy is not affected, a further group is switched off.
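By way of illustration only, a sketch of the group-switching scheme described in claim 2; the callbacks `measure` and `accuracy_ok` are hypothetical placeholders for the sensor's measurement and accuracy-comparison logic.

```python
def choose_led_groups(measure, accuracy_ok, n_groups: int = 3) -> int:
    """Start with all infrared-tube groups on, then switch groups off
    one at a time as long as measurement accuracy is judged unaffected.

    measure(k)         -- run a measurement with k groups lit (hypothetical)
    accuracy_ok(r0, r) -- compare result r against baseline r0 (hypothetical)
    """
    active = n_groups
    baseline = measure(active)          # all groups on for the first measurement
    while active > 1:
        result = measure(active - 1)    # try with one group fewer
        if not accuracy_ok(baseline, result):
            break                       # accuracy degraded: keep current count
        active -= 1
    return active

# Hypothetical usage: results are (x, y) fixes; accuracy judged by drift.
groups = choose_led_groups(
    measure=lambda k: (1.23, 4.56),                      # stub measurement
    accuracy_ok=lambda r0, r: abs(r0[0] - r[0]) < 0.01,  # stub comparison
)
print(groups)
```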