CN113490171B - Indoor positioning method based on visual label - Google Patents
- Publication number
- CN113490171B (application number CN202110919082.XA)
- Authority
- CN
- China
- Prior art keywords
- label
- coordinates
- candidate
- labels
- visual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/33—Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/025—Services making use of location information using location based information parameters
- H04W4/026—Services making use of location information using location based information parameters using orientation information, e.g. compass
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/025—Services making use of location information using location based information parameters
- H04W4/027—Services making use of location information using location based information parameters using movement velocity, acceleration information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention relates to an indoor positioning method based on visual tags and belongs to the technical field of indoor positioning. The method comprises the following steps. S1: collect environment image information, identify the tags with an image processing algorithm, extract the pixel coordinates of each tag centroid, and from these coordinates compute the observation azimuth between each tag and the optical axis of the vision sensor using the sensor's intrinsic and extrinsic parameters. S2: match and query the corresponding tag coordinates from tag position information measured offline, according to the observation azimuths. S3: perform collaborative pose solving from the observation azimuths and the tag coordinates to obtain a positioning result. S4: perform fusion positioning from the positioning result and the motion state. The invention reduces the complexity of the tag detection process and improves positioning accuracy.
Description
Technical Field
The invention belongs to the technical field of indoor positioning, and relates to an indoor positioning method based on a visual label.
Background
With the rapid development of technologies such as artificial intelligence and the mobile internet, intelligent mobile platforms such as service robots and autonomous vehicles are widely applied in fields including medical care, warehouse logistics, and industrial production, and demand for them grows daily. Accurate autonomous positioning is one of the key technologies of an intelligent mobile platform and an important prerequisite for autonomous behaviors such as navigation and decision-making.
Satellite positioning technology (GNSS) is mature and stable and can satisfy positioning requirements in open scenes. In relatively closed indoor environments, however, building occlusion and complex propagation interference severely attenuate satellite signals before they reach the receiver, so satellite positioning can hardly solve the indoor positioning problem. To fill this positioning blind area, various indoor positioning technologies have emerged, mainly along two directions: positioning based on wireless signals and positioning based on vision. Compared with the former, vision-based positioning exploits the rich position information in images to compute the location of the positioning device; it does not depend on expensive base-station equipment, greatly reduces system deployment cost, and offers high positioning accuracy and strong scalability, giving it broad application prospects.
Positioning based on visual tags is one branch of visual positioning: a sensor measures the azimuth information of the tags, and a trigonometric geometric positioning algorithm computes the position from known tag coordinates. The approach combines low algorithmic complexity with high positioning accuracy, making it an effective route to low-cost, high-precision indoor positioning. However, most current visual-tag positioning methods store position information in the tag pattern and query tag coordinates by decoding or feature matching. The tag identification process is therefore complex, identification robustness is insufficient, and real-time operation is difficult. In addition, positioning accuracy is strongly affected by the geometric distribution of the tags: when the visual tags and the sensor lie on a common circle, the positioning error becomes extremely large.
Disclosure of Invention
In view of this, the present invention provides an indoor positioning method based on visual tags. Instead of storing position information in tag features, it uses visual tags with simplified features and queries tag coordinates through angle-matching association, reducing the complexity of the tag detection process. It further identifies the geometric distribution of the tags and establishes a collaborative solving method, which avoids the limitations of any single algorithm and determines weighting coefficients according to the geometric distribution of different tag groups, improving positioning accuracy.
In order to achieve the purpose, the invention provides the following technical scheme:
An indoor positioning method based on visual tags is disclosed, in which the visual tags store no information such as position; each tag contains only simple color and shape features, and the tags are not unique. Before positioning, the position information of all tags must be measured offline: L_i = (x_i  y_i)^T, i = 1, 2, ..., n.
The method specifically comprises the following steps:
S1: collecting environment image information through a vision sensor, identifying the tags with an image processing algorithm, extracting the pixel coordinates of each tag centroid, and from these coordinates calculating the observation azimuth α_i between each tag and the optical axis of the vision sensor according to the sensor's intrinsic and extrinsic parameters;
S2: observation azimuth angle alpha obtained from S1iFrom offline measured tag position information LiMatching and inquiring corresponding label coordinates;
S3: performing collaborative pose solving with the observation azimuths α_i obtained in S1 and the tag coordinates obtained in S2 to obtain a positioning result;
S4: performing fusion positioning according to the positioning result obtained in S3 and the motion state.
Further, the step S2 specifically includes the following steps:
S21: determining the sensing range of the vision sensor according to the vision sensor parameters;
S22: combining the sensing range obtained in S21 with the offline-measured tag position information L_i, screening out all tags that may be observed by the vision sensor as candidate tags;
S23: calculating the theoretical azimuth β_i between each candidate tag and the vision sensor according to the known candidate tag coordinates obtained in S22:

β_i = arctan((y_i - y_p) / (x_i - x_p)) - θ_p

wherein β_i is the theoretical azimuth of the i-th candidate tag, (x_i, y_i) are the coordinates of the candidate tag, and (x_p, y_p, θ_p) is the pose of the vision sensor, obtained from the state at the previous moment or by dead reckoning;
S24: taking the absolute value of the deviation between each observation azimuth α_i obtained in S1 and each theoretical azimuth β_j obtained in S23 to obtain the angle-matching matrix D, with D(i, j) = |α_i - β_j|, wherein m is the number of observed tags and n is the number of candidate tags;
S25: matching each observed tag according to the angle-matching matrix D obtained in S24, and judging whether the successfully matched candidate tags are distributed on a common circle.
Further, in step S25, the tag matching rule is: when min_j D(i, j) < T_t, the i-th observed tag is associated with the k-th candidate tag, where k = argmin_j D(i, j); otherwise the unassociated candidates are rejected as interference; wherein T_t is a set threshold.
Further, in step S25, the condition for judging whether the matched candidate tags are distributed on a common circle is:

when Cond(G^T G) > T_c, the distribution is judged co-circular; otherwise it is non-co-circular; wherein T_c is a set threshold, Cond(·) denotes the matrix condition number, (x_i, y_i) are the coordinates of the candidate tags, and (x_p, y_p) are the coordinates of the vision sensor.
Further, step S3 specifically includes: if S25 judges the candidate tags to be non-co-circularly distributed, solving the pose by a step-by-step weighted least-squares method from the position information of the candidate tags successfully matched in S25 and the observation azimuths α_i obtained in S1 to obtain the pose information; otherwise, solving the pose information by an iterative search method (applicable to any number of observed tags).
Further, the specific steps of solving the pose information by adopting an iterative search method are as follows:
S31: setting the heading-angle search range [θ_p - ε, θ_p + ε], wherein θ_p is the heading angle at the previous moment and ε is a set parameter; discretizing the search range into θ_i, i = 1, 2, ..., with a set step Δ;
S32: for each θ_i, solving the position coordinates (x, y) determined by any two tags, wherein i, j denote any pair among the n tags successfully matched in S25; in total C(n, 2) = n(n - 1)/2 position coordinates (x, y) are calculated;
S33: for each θ_i, calculating from the C(n, 2) position coordinates obtained in S32 the variance of the x coordinates and the variance of the y coordinates, and combining them into the position-coordinate dispersion corresponding to θ_i;
S34: taking the mean of the group of position coordinates with the smallest dispersion as the final position coordinate, the corresponding heading angle being the final heading angle.
Further, step S4 specifically includes: performing fusion positioning with a Kalman filtering algorithm according to the positioning result obtained in S3 and the motion state.
The beneficial effects of the invention are as follows. Simplified-feature visual tags are designed from color and shape attributes, and by determining the sensor's sensing range and exploiting the correlation between the observed and theoretical tag azimuths, accurate tag-coordinate queries are achieved, reducing the computation of the visual-tag detection process. By identifying the tag distribution condition, the pose is solved collaboratively, which avoids the limitations of a single algorithm while allowing weighting coefficients to be determined from the geometric distribution of different tag groups; applying a Kalman filtering algorithm to positioning fusion further improves system positioning accuracy.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of the indoor positioning method based on visual tags according to the present invention;
FIG. 2 is a schematic view of a tag azimuth measurement;
FIG. 3 is a schematic diagram of the sensing range of a sensor;
FIG. 4 is a schematic view of angle matching of an observation tag;
fig. 5 is a schematic diagram of a triangular geometric positioning.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided for illustration only and are not intended to limit the invention. To better illustrate the embodiments, some parts of the drawings may be omitted, enlarged, or reduced and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description, terms indicating orientation or positional relationship such as "upper", "lower", "left", "right", "front", and "rear" are based on the orientations shown in the drawings, are used only for convenience and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore illustrative only and are not to be construed as limiting the invention, and their specific meaning can be understood by those skilled in the art according to the circumstances.
Referring to fig. 1 to 5, the indoor positioning method based on visual tags achieves accurate tag-coordinate queries by using the sensor's sensing-range attribute and the correlation between the observed and theoretical tag azimuths. It further identifies the geometric distribution of the tags and designs a collaborative solving method, improving system robustness and positioning accuracy. As shown in fig. 1, the indoor positioning method operates as follows:
First, before positioning, visual tags are set in the positioning area and their coordinates are collected offline to build a tag database. The tags are arranged so that at least three of them can be detected by the sensor from any position in the positioning area, ensuring positioning continuity. After the tags are set, the coordinates L_i = (x_i  y_i)^T, i = 1, 2, ..., n of each tag are measured offline and stored in the tag database.
Then, indoor positioning is carried out according to the visual label, and the method specifically comprises the following steps:
S1: acquiring environment image information through a camera, preprocessing the image by filtering, histogram equalization, binarization, and the like, detecting the tags with a shape-recognition method, extracting the centroid pixel coordinate of each observed tag, and calculating the visual-tag azimuth according to the tag azimuth measurement diagram shown in fig. 2:

α_i = arctan((u_i - u_0) / f)

wherein α_i is the observation azimuth of the visual tag, and u_i, u_0, and f are, in sequence, the horizontal pixel coordinate of the tag centroid, the horizontal coordinate of the image principal point, and the camera focal length, obtained by camera calibration.
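The azimuth measurement of S1 can be sketched in a few lines of Python. The function name and the sample numbers below are illustrative, not from the patent; the geometry is the standard pinhole relation between a horizontal pixel offset and a bearing about the optical axis.

```python
import math

def observation_azimuth(u_i: float, u_0: float, f: float) -> float:
    """Observation azimuth of a tag relative to the camera optical axis.

    u_i: horizontal pixel coordinate of the tag centroid
    u_0: horizontal coordinate of the image principal point
    f:   camera focal length in pixels (from calibration)
    """
    return math.atan2(u_i - u_0, f)

# Illustrative numbers: a tag imaged 100 px right of a 320 px principal
# point with an 800 px focal length.
alpha = observation_azimuth(420.0, 320.0, 800.0)
print(math.degrees(alpha))
```

A tag to the left of the principal point yields a negative azimuth, matching the sign convention of a bearing measured about the optical axis.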
S2: tag position information L measured from off-line based on the observation azimuth of the tag obtained at S1iIn the method, matching and inquiring out the corresponding label coordinate specifically comprises the following steps:
S21: determining the sensor sensing-range region from the sensor parameters, with reference to the sensing-range embodiment shown in fig. 3, wherein A represents the camera optical-center coordinates, θ_A the camera heading angle, B and C the other two vertex coordinates of the sensing range, H the sensor detection distance, and γ the sensor half horizontal field-of-view angle.
S22: according to the sensing range obtained in S21 and the offline-measured position information L_i of all tags, screening out all tags that may be observed by the vision sensor as candidate tags. According to fig. 3, the criterion for a tag L falling inside the sensing region is that the three cross products LA × LB, LB × LC, and LC × LA have the same sign.
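The screening rule of S21/S22 can be sketched as follows. The vertex formula is only an image in the source, so placing B and C at distance H along the field-of-view boundary directions θ_A ± γ is an assumption; the same-sign cross-product test is as stated in the text.

```python
import math

def sensing_triangle(A, theta_A, H, gamma):
    """Vertices of the (assumed) triangular sensing region: apex at the
    optical center A, the other two vertices at distance H along the
    field-of-view boundary directions theta_A - gamma and theta_A + gamma."""
    B = (A[0] + H * math.cos(theta_A - gamma), A[1] + H * math.sin(theta_A - gamma))
    C = (A[0] + H * math.cos(theta_A + gamma), A[1] + H * math.sin(theta_A + gamma))
    return B, C

def cross(o, p, q):
    """z-component of (p - o) x (q - o)."""
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def in_sensing_range(L, A, B, C):
    """Screening rule from the text: the cross products LA x LB, LB x LC,
    and LC x LA must all have the same sign for L to lie inside ABC."""
    s1, s2, s3 = cross(L, A, B), cross(L, B, C), cross(L, C, A)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

A = (0.0, 0.0)
B, C = sensing_triangle(A, theta_A=0.0, H=5.0, gamma=math.radians(30))
print(in_sensing_range((2.0, 0.0), A, B, C))   # tag on the optical axis: True
print(in_sensing_range((-1.0, 0.0), A, B, C))  # tag behind the camera: False
```

Tags passing this test become the candidate set used for the angle matching of S23 and S24.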
S23: calculating the theoretical azimuth β_i between each candidate tag and the sensor from the known candidate tag coordinates obtained in S22; for ease of understanding, referring to fig. 4:

β_i = arctan((y_i - y_p) / (x_i - x_p)) - θ_p

wherein β_i is the theoretical azimuth of the i-th candidate tag, (x_i, y_i) are the coordinates of the candidate tag, and (x_p, y_p, θ_p) is the camera pose, obtained from the state at the previous moment or by dead reckoning.
S24: subtracting the tag observation azimuth α_i of S1 from each candidate-tag theoretical azimuth β_1 to β_3 obtained in S23 and taking absolute values to obtain the angle-matching matrix:

D = [|α_i - β_1|  |α_i - β_2|  |α_i - β_3|]^T

If there are multiple candidate tags and observed tags, the formula expands correspondingly into an m × n matrix.
S25: matching each observed tag according to the angle-matching matrix D obtained in S24. The matching rule is: when min_j D(i, j) < T_t, the i-th observed tag is associated with the k-th candidate tag, where k = argmin_j D(i, j). From fig. 4 it is evident that the theoretical azimuth β_1 of candidate tag 1 is closest to the observation azimuth α_i of the observed tag, i.e. k = 1, so the coordinates of candidate tag 1 are assigned to observed tag i. Unassociated candidate tags are rejected as interference; T_t is a set threshold.
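The angle matching of S24 and S25 amounts to a nearest-neighbor association with a rejection threshold. A minimal Python sketch (function name and numbers are illustrative):

```python
import math

def match_tags(alpha_obs, beta_cand, T_t):
    """Build D(i, j) = |alpha_i - beta_j| and associate each observed tag
    with its nearest candidate when the minimum deviation is below the
    threshold T_t; observations with no candidate inside T_t are rejected
    as interference."""
    assoc = {}
    for i, a in enumerate(alpha_obs):
        devs = [abs(a - b) for b in beta_cand]
        k = min(range(len(devs)), key=devs.__getitem__)  # argmin_j D(i, j)
        if devs[k] < T_t:
            assoc[i] = k  # observed tag i takes candidate k's coordinates
    return assoc

alpha = [0.10, -0.25]        # observed azimuths (rad), illustrative
beta = [0.12, 0.60, -0.24]   # theoretical azimuths of the candidates
print(match_tags(alpha, beta, T_t=math.radians(5)))  # → {0: 0, 1: 2}
```

Each key is an observed-tag index, each value the index of the candidate whose known coordinates are handed to the pose solver in S3.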
S3: performing collaborative pose solving with the tag observation azimuths obtained in S1 and the tag coordinates obtained in S2 to obtain a positioning result, specifically comprising the following steps:
S31: judging whether the candidate tags successfully matched in S25 are distributed on a common circle. The condition is: when Cond(G^T G) > T_c, the distribution is judged co-circular; otherwise it is non-co-circular; wherein Cond(·) denotes the matrix condition number, (x_i, y_i) are the tag coordinates, (x_p, y_p) are the camera coordinates, and T_c is a set threshold, such as 200.
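The condition-number test of S31 might look as follows in Python. The matrix G appears only as an image in the source, so stacking the tag-to-sensor offsets (x_i - x_p, y_i - y_p) as its rows is an assumption; with that construction, a large Cond(G^T G) flags degenerate tag–sensor geometry, which is the spirit of the check.

```python
import numpy as np

def is_degenerate(tags, sensor, T_c=200.0):
    """Geometry check in the style of S31: compare Cond(G^T G) against the
    threshold T_c. G (assumed form, the source shows it only as an image)
    stacks the tag-to-sensor offsets (x_i - x_p, y_i - y_p) as rows."""
    G = np.asarray(tags, dtype=float) - np.asarray(sensor, dtype=float)
    return bool(np.linalg.cond(G.T @ G) > T_c)

sensor = (0.0, 0.0)
well_spread = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]     # bearings well separated
near_degenerate = [(1.0, 0.0), (2.0, 0.01), (3.0, -0.01)]  # nearly collinear with sensor
print(is_degenerate(well_spread, sensor))     # False
print(is_degenerate(near_degenerate, sensor)) # True
```

When the check fires, the method falls back from the weighted least-squares solver to the iterative search of S32.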
S32: if S31 judges the distribution non-co-circular, solving the pose by the step-by-step weighted least-squares method from the position information of the candidate tags successfully matched in S25 and the tag observation azimuths obtained in S1 to obtain the pose information; otherwise, solving the pose information by the iterative search method.
In step S32, the step-by-step weighted least squares solution process is as follows:
First, according to the principle of triangular geometric positioning shown in fig. 5, a positioning model can be obtained, wherein (x, y, θ) are the position coordinates and heading angle to be solved, (x_i, y_i) are the tag coordinates, and α_i is the corresponding tag azimuth.
According to the positioning model, analytic solutions of all states can be obtained:
wherein t_θ is the tangent of the heading angle, x is the longitudinal position, y is the lateral position, t_i is the tangent of the i-th tag azimuth, and (x_i, y_i), i = 1, 2, 3, are the tag coordinate values.
Thus, the solution of the heading angle and the position coordinates can be performed in two steps as follows:
1) Solving the heading angle θ.
Based on the above analytic solution, every three visual tags form a group, yielding C(n, 3) heading-angle solution equations, which are optimally estimated by the weighted least-squares method.
Determining the heading-angle coefficient matrices A_H and B_H:
Determining the heading-angle weighting matrix W_H of each tag group, whose entries are the measurement variances of the heading angle estimated, according to the t_θ analytic solution, from the coordinates and azimuths of each tag group; m denotes the m-th tag group, and i denotes the three tags i, j, and k in that group.
The heading angle is then solved as:
θ = arctan t_θ
2) Solving the position coordinates x and y.
Based on the above analytic solution, every two tags form a group, yielding C(n, 2) position-coordinate solution equations, which are optimally estimated by the weighted least-squares method.
Determining the position-coordinate coefficient matrices A_x, B_x, A_y, and B_y:
Determining the position-coordinate weighting matrices W_x and W_y of each tag group:
wherein the entries of W_x and W_y are, according to the x and y analytic solutions respectively, the position-coordinate measurement variances estimated from the coordinates and azimuths of each tag group; m denotes the m-th tag group, and i denotes the two tags i and j in that group.
The position coordinates are then solved in the same weighted least-squares manner.
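Both steps above reduce to the standard weighted least-squares estimate x̂ = (AᵀWA)⁻¹AᵀWB. The patent's concrete coefficient and weighting matrices (A_H, B_H, W_H, A_x, ...) appear only as images in the source, so the sketch below shows the generic estimator with a toy inverse-variance weighting; all numbers are illustrative.

```python
import numpy as np

def weighted_least_squares(A, B, W):
    """Generic weighted least-squares estimate x = (A^T W A)^{-1} A^T W B,
    as used in both steps (heading angle first, then position coordinates)."""
    A = np.atleast_2d(np.asarray(A, float))
    B = np.asarray(B, float).reshape(-1, 1)
    x = np.linalg.solve(A.T @ W @ A, A.T @ W @ B)
    return x.ravel()

# Toy example: three noisy group estimates of the scalar t_theta; the more
# reliable groups (smaller variance) are weighted more heavily.
A = np.ones((3, 1))
B = [0.99, 1.02, 1.30]
W = np.diag([1 / 0.01, 1 / 0.01, 1 / 0.25])  # inverse-variance weights (assumed scheme)
t_theta = weighted_least_squares(A, B, W)[0]
theta = np.arctan(t_theta)  # heading recovered as theta = arctan(t_theta)
print(t_theta)
```

The noisy third estimate barely moves the result because its weight is 25 times smaller, which is exactly why the method weights tag groups by their estimated variance.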
in step S32, the iterative search method solving step is as follows:
S321: setting the heading-angle search range [θ_p - ε, θ_p + ε], wherein θ_p is the heading angle at the previous moment and ε is a set parameter, such as 5°; discretizing the search range into θ_i, i = 1, 2, ..., with a set step Δ, such as 0.01°;
S322: for each θ_i, solving the position coordinates (x, y) determined by any two tags, wherein i, j denote any pair among the n tags successfully matched in step S25; in total C(n, 2) = n(n - 1)/2 position coordinates (x, y) are calculated;
S323: for each θ_i, calculating from the C(n, 2) position coordinates obtained in S322 the variance of the x coordinates and the variance of the y coordinates, and combining them into the position-coordinate dispersion corresponding to θ_i;
S324: taking the mean of the group of position coordinates with the smallest dispersion as the final position coordinate, the corresponding heading angle being the final heading angle.
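The search S321–S324 can be sketched as below. The pairwise position formula is shown only as an image in the source, so the bearing-line reconstruction here (the sensor lies on the line through each tag with global bearing θ + α) is an assumption, as are the function names and the synthetic scenario.

```python
import math
from itertools import combinations

import numpy as np

def position_from_pair(theta, tag_a, alpha_a, tag_b, alpha_b):
    """Position fixed by two tags at an assumed heading theta: the sensor
    lies on the line through each tag with global bearing phi = theta + alpha,
    i.e. sin(phi)*x - cos(phi)*y = sin(phi)*x_t - cos(phi)*y_t, solved as a
    2x2 linear system (assumed reconstruction of the source formula)."""
    rows, rhs = [], []
    for (xt, yt), a in ((tag_a, alpha_a), (tag_b, alpha_b)):
        phi = theta + a
        rows.append([math.sin(phi), -math.cos(phi)])
        rhs.append(math.sin(phi) * xt - math.cos(phi) * yt)
    return np.linalg.solve(np.array(rows), np.array(rhs))

def iterative_search(tags, alphas, theta_prev,
                     eps=math.radians(5.0), step=math.radians(0.01)):
    """S321-S324: grid-search the heading over [theta_prev - eps, theta_prev + eps]
    and keep the heading whose C(n,2) pairwise positions scatter least."""
    best = None
    n_steps = int(round(2 * eps / step)) + 1
    for k in range(n_steps):
        theta = theta_prev - eps + k * step
        pts = np.array([position_from_pair(theta, tags[a], alphas[a],
                                           tags[b], alphas[b])
                        for a, b in combinations(range(len(tags)), 2)])
        dispersion = pts[:, 0].var() + pts[:, 1].var()  # sigma_x^2 + sigma_y^2
        if best is None or dispersion < best[0]:
            best = (dispersion, theta, pts.mean(axis=0))
    return best[1], best[2]  # final heading angle, final (x, y)

# Synthetic check: exact bearings generated from the true pose (1, 2, 0.3 rad).
tags = [(5.0, 5.0), (0.0, 8.0), (-3.0, 1.0)]
alphas = [math.atan2(ty - 2.0, tx - 1.0) - 0.3 for tx, ty in tags]
theta_hat, pos_hat = iterative_search(tags, alphas, theta_prev=0.29)
print(theta_hat, pos_hat)
```

With noise-free bearings, the pairwise intersections coincide at the true position only when the trial heading matches the true one, so the dispersion minimum recovers both the heading and the position to within the grid resolution.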
S4: performing fusion positioning with a Kalman filtering algorithm according to the positioning result obtained in S3 and the motion state. Taking the motion state as the speed v and angular velocity ω as an example, the calculation involves: k, the sampling instant; Z = [x y θ]^T, the pose state, with x the longitudinal position, y the lateral position, and θ the heading angle; x_{k-1}, y_{k-1}, θ_{k-1}, the sensor pose at the previous moment; v, the speed; ω, the angular velocity; T, the sampling period; the state-transition matrix; the observation matrix; and Q and R, the covariance matrices of the system and observation noise, respectively.
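A single predict/update cycle of the fusion step might look like the following sketch. The patent's state-transition and observation matrices appear only as images, so the unicycle motion Jacobian and the identity observation matrix below are conventional assumptions, not the patent's exact matrices; all numeric values are illustrative.

```python
import numpy as np

def kalman_fuse(Z_prev, P_prev, z_meas, v, omega, T, Q, R):
    """One predict/update cycle fusing the dead-reckoned motion state
    (speed v, angular rate omega) with the vision positioning result
    z_meas = [x, y, theta]. A is the Jacobian of the assumed unicycle
    motion model; H = I assumes the vision system measures the full pose."""
    x, y, th = Z_prev
    Z_pred = np.array([x + v * T * np.cos(th),
                       y + v * T * np.sin(th),
                       th + omega * T])
    A = np.array([[1.0, 0.0, -v * T * np.sin(th)],
                  [0.0, 1.0,  v * T * np.cos(th)],
                  [0.0, 0.0,  1.0]])              # motion-model Jacobian
    H = np.eye(3)                                 # full-pose observation
    P_pred = A @ P_prev @ A.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    Z_new = Z_pred + K @ (z_meas - H @ Z_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return Z_new, P_new

Z, P = np.array([0.0, 0.0, 0.0]), np.eye(3) * 0.1
Q, R = np.eye(3) * 0.01, np.eye(3) * 0.05
z_meas = np.array([0.48, 0.02, 0.05])  # vision positioning result from S3
Z, P = kalman_fuse(Z, P, z_meas, v=1.0, omega=0.1, T=0.5, Q=Q, R=R)
print(np.round(Z, 3))
```

The fused estimate lands between the dead-reckoned prediction and the vision measurement, weighted by the noise covariances Q and R.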
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (4)
1. An indoor positioning method based on a visual label is characterized by specifically comprising the following steps:
S1: collecting environment image information through a vision sensor, identifying the tags with an image processing algorithm, extracting the pixel coordinates of each tag centroid, and from these coordinates calculating the observation azimuth α_i between each tag and the optical axis of the vision sensor according to the sensor's intrinsic and extrinsic parameters;
S2: according to the observation azimuths α_i obtained in S1, matching and querying the corresponding tag coordinates from the offline-measured tag position information L_i, specifically comprising the following steps:
S21: determining the sensing range of the vision sensor according to the vision sensor parameters;
S22: combining the sensing range obtained in S21 with the offline-measured tag position information L_i, screening out all tags that may be observed by the vision sensor as candidate tags;
S23: calculating the theoretical azimuth β_i between each candidate tag and the vision sensor according to the known candidate tag coordinates obtained in S22:

β_i = arctan((y_i - y_p) / (x_i - x_p)) - θ_p

wherein β_i is the theoretical azimuth of the i-th candidate tag, (x_i, y_i) are the coordinates of the candidate tag, and (x_p, y_p, θ_p) is the pose of the vision sensor, obtained from the state at the previous moment or by dead reckoning;
S24: taking the absolute value of the deviation between each observation azimuth α_i obtained in S1 and each theoretical azimuth β_j obtained in S23 to obtain the angle-matching matrix D, with D(i, j) = |α_i - β_j|, wherein m is the number of observed tags and n is the number of candidate tags;
S25: matching each observed tag according to the angle-matching matrix D obtained in S24, and judging whether the successfully matched candidate tags are distributed on a common circle; the judgment condition is: when Cond(G^T G) > T_c, the distribution is judged co-circular; otherwise it is non-co-circular; wherein T_c is a set threshold, Cond(·) denotes the matrix condition number, (x_i, y_i) are the coordinates of the candidate tags, and (x_p, y_p) are the coordinates of the vision sensor;
S3: performing collaborative pose solving with the observation azimuths α_i obtained in S1 and the tag coordinates obtained in S2 to obtain a positioning result;
S4: performing fusion positioning with a Kalman filtering algorithm according to the positioning result obtained in S3 and the motion state, wherein k denotes the sampling instant; Z = [x y θ]^T is the pose state, with x the longitudinal position, y the lateral position, and θ the heading angle; x_{k-1}, y_{k-1}, θ_{k-1} denote the sensor pose at the previous moment; v denotes the speed, ω the angular velocity, and T the sampling period; and Q and R are the covariance matrices of the system and observation noise, respectively.
2. The indoor positioning method based on visual tags as claimed in claim 1, wherein in step S25 the tag matching rule is: when min_j D(i, j) < T_t, the i-th observed tag is associated with the k-th candidate tag, where k = argmin_j D(i, j); otherwise the unassociated candidates are rejected as interference; wherein T_t is a set threshold.
3. The indoor positioning method based on visual tags as claimed in claim 1, wherein step S3 specifically comprises: if S25 judges the candidate tags to be non-co-circularly distributed, solving the pose by a step-by-step weighted least-squares method from the position information of the candidate tags successfully matched in S25 and the observation azimuths α_i obtained in S1 to obtain the pose information; otherwise, solving the pose information by an iterative search method.
4. The indoor positioning method based on visual labels as claimed in claim 3, wherein the pose information is solved by the iterative search method in the following specific steps:
s31: setting course angle search range [ theta ]p-ε θp+ε]Wherein, thetapRepresenting the course angle of the previous moment, wherein epsilon is a set parameter; dispersing the course angle searching range into theta by adopting a set step length deltai,i=1,2,…;
S32: for each θ_i, solving the position coordinates (x, y) determined by any two tags according to the following formula:
wherein i, j denote any pair among the n tags successfully matched in S25, so that n(n−1)/2 position coordinates (x, y) can be calculated in total;
S33: for each θ_i, from the n(n−1)/2 position coordinates obtained in S32, calculating the variance of coordinate x and the variance of coordinate y respectively, to obtain the position-coordinate dispersion corresponding to θ_i;
S34: taking the mean of the group of position coordinates with the minimum position-coordinate dispersion as the final position coordinate, the corresponding heading angle being the final heading angle.
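Steps S31-S34 can be sketched end to end. The pairwise-position formula of S32 is not reproduced in this text, so the sketch assumes a standard two-ray resection in which the world-frame bearing from the sensor to tag i is θ + α_i; that geometric model, the function name, and the default ε and Δ are assumptions, not the patent's formulas.

```python
import numpy as np
from itertools import combinations

def iterative_search(tags, alphas, theta_p, eps=0.3, delta=0.01):
    """S31-S34 sketch: scan heading candidates around the previous heading,
    intersect pairwise bearing rays to get position hypotheses, and keep
    the heading whose hypotheses scatter least.
    """
    tags = np.asarray(tags, dtype=float)
    alphas = np.asarray(alphas, dtype=float)
    best = None  # (dispersion, mean position, heading)
    for theta in np.arange(theta_p - eps, theta_p + eps + delta / 2, delta):  # S31
        # assumed model: world bearing sensor -> tag i is theta + alpha_i
        u = np.column_stack([np.cos(theta + alphas), np.sin(theta + alphas)])
        pts = []
        for i, j in combinations(range(len(tags)), 2):  # S32: all n(n-1)/2 pairs
            A = np.column_stack([u[i], -u[j]])  # t_i*u_i - t_j*u_j = tag_i - tag_j
            try:
                t = np.linalg.solve(A, tags[i] - tags[j])
            except np.linalg.LinAlgError:
                continue                        # parallel bearings: no intersection
            pts.append(tags[i] - t[0] * u[i])   # sensor position hypothesis
        pts = np.asarray(pts)
        disp = pts[:, 0].var() + pts[:, 1].var()  # S33: dispersion of hypotheses
        if best is None or disp < best[0]:
            best = (disp, pts.mean(axis=0), theta)  # S34: keep tightest cluster
    return best[1], best[2]
```

At the true heading all pairwise intersections coincide at the sensor position, so the dispersion collapses there; wrong headings smear the intersections apart.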
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110919082.XA CN113490171B (en) | 2021-08-11 | 2021-08-11 | Indoor positioning method based on visual label |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113490171A (en) | 2021-10-08 |
CN113490171B (en) | 2022-05-13 |
Family
ID=77946271
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104330090A (en) * | 2014-10-23 | 2015-02-04 | 北京化工大学 | Robot distributed type representation intelligent semantic map establishment method |
CN106291517A (en) * | 2016-08-12 | 2017-01-04 | 苏州大学 | Indoor cloud robot angle positioning method based on position and visual information optimization |
CN108051007A (en) * | 2017-10-30 | 2018-05-18 | 上海神添实业有限公司 | AGV navigation locating methods based on ultrasonic wave networking and stereoscopic vision |
US10285017B1 (en) * | 2018-03-30 | 2019-05-07 | Motorola Mobility Llc | Navigation tracking in an always aware location environment with mobile localization nodes |
CN112967341A (en) * | 2021-02-23 | 2021-06-15 | 湖北枫丹白露智慧标识科技有限公司 | Indoor visual positioning method, system, equipment and storage medium based on live-action image |
CN113034589A (en) * | 2021-02-25 | 2021-06-25 | 上海杰屿智能科技有限公司 | Unmanned aerial vehicle positioning method based on prior visual label |
Non-Patent Citations (6)
Title |
---|
"Extended Reality (XR) in 5G";3GPP;《3GPP TR 26.928 V1.3.0.0 》;20200219;全文 * |
基于机器视觉的室内定位***设计与研究;张华;《中国优秀硕士论文电子期刊网》;20190115;全文 * |
基于消防安全疏散标志的高精度室内视觉定位;陶倩文等;《交通信息与安全》;20180428(第02期);全文 * |
基于连通性的无线传感器网络节点定位技术研究;张强;《中国优秀博士论文电子期刊网》;20120615;全文 * |
多点定位***方法研究;高锋;《中国优秀硕士论文电子期刊网》;20210115;全文 * |
多目标室内超声波三维定位***的研究;张强;《中国优秀硕士论文电子期刊网》;20190115;全文 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108132675B (en) | Autonomous path cruising and intelligent obstacle avoidance method for factory inspection unmanned aerial vehicle | |
Breitenmoser et al. | A monocular vision-based system for 6D relative robot localization | |
CN113052908B (en) | Mobile robot pose estimation algorithm based on multi-sensor data fusion | |
CN111882612A (en) | Vehicle multi-scale positioning method based on three-dimensional laser detection lane line | |
CN112085003B (en) | Automatic recognition method and device for abnormal behaviors in public places and camera equipment | |
CN112325883B (en) | Indoor positioning method for mobile robot with WiFi and visual multi-source integration | |
CN108664930A (en) | A kind of intelligent multi-target detection tracking | |
KR20130022994A (en) | Method for recognizing the self position of a mobile robot unit using arbitrary ceiling features on the ceiling image/feature map | |
CN108680177B (en) | Synchronous positioning and map construction method and device based on rodent model | |
CN112734765A (en) | Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion | |
CN110009680B (en) | Monocular image position and posture measuring method based on circle feature and different-surface feature points | |
CN113627473A (en) | Water surface unmanned ship environment information fusion sensing method based on multi-mode sensor | |
Momeni-k et al. | Height estimation from a single camera view | |
US20230236280A1 (en) | Method and system for positioning indoor autonomous mobile robot | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN114325634A (en) | Method for extracting passable area in high-robustness field environment based on laser radar | |
CN111811502B (en) | Motion carrier multi-source information fusion navigation method and system | |
CN112101160A (en) | Binocular semantic SLAM method oriented to automatic driving scene | |
CN113759928B (en) | Mobile robot high-precision positioning method for complex large-scale indoor scene | |
CN113971697A (en) | Air-ground cooperative vehicle positioning and orienting method | |
CN113490171B (en) | Indoor positioning method based on visual label | |
CN111340884B (en) | Dual-target positioning and identity identification method for binocular heterogeneous camera and RFID | |
CN113378701A (en) | Ground multi-AGV state monitoring method based on unmanned aerial vehicle | |
CN116929336A (en) | Minimum error-based laser reflection column SLAM (selective laser absorption) mapping method | |
CN114608560B (en) | Passive combined indoor positioning system and method based on intelligent terminal sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||