CN104661300A - Positioning method, device, system and mobile terminal - Google Patents

Positioning method, device, system and mobile terminal

Info

Publication number
CN104661300A
CN104661300A (application CN201310598348.0A; granted as CN104661300B)
Authority
CN
China
Prior art keywords
image
reference object
distance
frame image
initial position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310598348.0A
Other languages
Chinese (zh)
Other versions
CN104661300B (en)
Inventor
白耕
王晋高
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Autonavi Software Co Ltd
Priority to CN201310598348.0A
Publication of CN104661300A
Application granted
Publication of CN104661300B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W 64/003: Locating network equipment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features


Abstract

Embodiments of the invention disclose a positioning method comprising the following steps: capturing, by a user terminal, an image of a target object at the current position; determining, among the stored images of reference objects within the range of an initial position (which can be obtained by a conventional positioning method), a reference-object image that matches the target object; obtaining the size ratio of a zoomed image of the target object to the captured image of the target object; determining the distance between the user terminal and the target object; and taking that distance together with the acquisition angle of the matched reference object as the positioning result, or determining the position of the user terminal from that distance, the coordinates of the matched reference object, and the acquisition angle of the matched reference object. The user terminal needs no modification and no extra calibration is required, so positioning accuracy is improved while implementation cost is reduced. A positioning device, a positioning system and a mobile terminal are also disclosed.

Description

Positioning method, device, system and mobile terminal
Technical field
The present invention relates to the field of positioning technology, and more particularly to a positioning method, device, system and mobile terminal.
Background technology
With rapid social and economic development, people's demand for locating their own position has become increasingly widespread. Beyond outdoor positioning, there is a growing need for indoor positioning, for example inside a large building or shopping mall: once users can determine their own position, more personalized services such as walking guidance and position sharing can be provided.
Traditional indoor positioning technologies mainly rely on WiFi, GPS or base-station signals and suffer from low positioning accuracy. Current solutions either require modifying the user's mobile device (e.g., embedding a positioning chip in the phone) or require extra calibration indoors (e.g., installing positioning transmitters at indoor locations and communicating with them via Bluetooth modules), and are therefore costly to implement.
Summary of the invention
The object of the invention is to provide a positioning method, device, system and mobile terminal, so as to reduce the implementation cost of indoor positioning.
To achieve the above object, the invention provides the following technical solutions:
A positioning method, comprising:
capturing, by a user terminal, an image of a target object at the current position;
obtaining an initial position of the current location;
obtaining, from a pre-stored image database, information on the reference objects within the range of the initial position; wherein the image database stores information on a number of reference objects, the information at least comprising: the coordinates of the reference object, several frame images of the reference object, the acquisition angle of each image of the reference object, the focal length of the image capture device used when the reference-object images were captured in advance, and the distance between that image capture device and the reference object;
matching the image of the target object against each frame image of each reference object within the range of the initial position, and determining a frame image of a reference object that matches the target object;
obtaining the size ratio of a zoomed image of the target object to the captured image of the target object;
determining the distance between the user terminal and the target object according to the focal length of the user terminal, the focal length of the image capture device used when the matched reference-object image was captured in advance, the distance between that image capture device and the reference object, and the size ratio;
taking the distance between the user terminal and the target object together with the acquisition angle of the matched reference object as the positioning result; or determining the position of the user terminal according to the distance between the user terminal and the target object, the coordinates of the matched reference object, and the acquisition angle of the matched reference object.
A positioning device, comprising:
an image capture module, configured to capture an image of a target object at the current position;
an initial-position acquisition module, configured to obtain an initial position of the current location;
a reference-object information acquisition module, configured to obtain, from a pre-stored image database, information on the reference objects within the range of the initial position; wherein the image database stores information on a number of reference objects, the information at least comprising: the coordinates of the reference object, several frame images of the reference object, the acquisition angle of the reference object, the focal length of the image capture device used when the reference-object images were captured in advance, and the distance between that image capture device and the reference object;
a matching module, configured to match the image of the target object against each frame image of each reference object within the range of the initial position, and determine a frame image of a reference object that matches the target object;
a size-ratio determination module, configured to obtain the size ratio of a zoomed image of the target object to the captured image of the target object;
a distance determination module, configured to determine the distance between the current location and the target object according to the focal length of the image capture module, the focal length of the image capture device used when the matched reference-object image was captured in advance, the distance between that image capture device and the matched reference object, and the size ratio;
a positioning result module, configured to take the distance between the current location and the target object together with the acquisition angle of the matched reference object as the positioning result; or to determine the position of the current location according to the distance between the current location and the target object, the coordinates of the matched reference object, and the acquisition angle of the matched reference object.
A mobile terminal, comprising any one of the positioning devices provided by the invention.
A positioning system, comprising a mobile terminal and a server; wherein
the mobile terminal comprises:
an image capture module, configured to capture an image of a target object at the current position;
an initial-position acquisition module, configured to obtain an initial position of the current location;
a first sending module, configured to send the image of the target object and the initial position;
a first receiving module, configured to receive the information on the matched reference object sent by the server, the information comprising: the coordinates of the matched reference object, the image of the matched reference object, the acquisition angle of the matched reference object, the focal length of the image capture device used when the matched reference-object image was captured in advance, and the distance between that image capture device and the matched reference object;
a size-ratio determination module, configured to obtain the size ratio of a zoomed image of the target object to the captured image of the target object;
a distance determination module, configured to determine the distance between the mobile terminal and the target object according to the focal length of the image capture module, the focal length of the image capture device used when the matched reference-object image was captured in advance, the distance between that image capture device and the matched reference object, and the size ratio;
a positioning result determination module, configured to take the distance between the mobile terminal and the target object together with the acquisition angle of the matched reference object as the positioning result; or to determine the position of the mobile terminal according to the distance between the mobile terminal and the target object, the coordinates of the matched reference object, and the acquisition angle of the matched reference object;
and the server comprises:
a second receiving module, configured to receive the image of the target object and the initial position sent by the mobile terminal;
a reference-object information acquisition module, configured to obtain, from a pre-stored image database, information on the reference objects within the range of the initial position; wherein the image database stores information on a number of reference objects, the information at least comprising: the coordinates of the reference object, several frame images of the reference object, the acquisition angle of the reference object, the focal length of the image capture device used when the reference-object images were captured in advance, and the distance between that image capture device and the reference object;
a matching module, configured to match the image of the target object against each frame image of each reference object within the range of the initial position, and determine a frame image of a reference object that matches the target object;
a second sending module, configured to send the information on the matched reference object, the information comprising: the coordinates of the matched reference object, the image of the matched reference object, the acquisition angle of the matched reference object, the focal length of the image capture device used when the matched reference-object image was captured in advance, and the distance between that image capture device and the matched reference object.
As can be seen from the above, in the positioning technical scheme provided by the application, information on reference objects is stored in advance, at least comprising: the coordinates of the reference object, several frame images of the reference object, the acquisition angle of each image of the reference object, the focal length of the image capture device used when the reference-object images were captured in advance, and the distance between that image capture device and the reference object. A user terminal captures an image of a target object at the current position, whose initial position can be obtained by a conventional positioning method; a frame image of a reference object matching the target object is then determined by image matching; the size ratio of a zoomed image of the target object to the captured image is obtained; the distance between the user terminal and the target object is determined from the focal length of the user terminal, the focal length of the image capture device used when the matched reference-object image was captured in advance, the distance between that image capture device and the matched reference object, and the size ratio; and that distance together with the acquisition angle of the matched reference object is taken as the positioning result, or the position of the user terminal is determined from the distance, the coordinates of the matched reference object, and its acquisition angle.
It follows that the positioning technical scheme provided by the embodiments of the application determines the position of the target object through the position of a reference object and the position of the target object relative to that reference object. The mobile terminal therefore needs no modification, and no extra calibration is required; and since the positions of the reference objects are collected in advance and can be accurate down to a particular floor of a building, or a particular location on a floor, the scheme improves positioning accuracy while reducing implementation cost.
Brief description of the drawings
To explain the embodiments of the invention or the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person of ordinary skill in the art could obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a positioning method provided by an embodiment of the application;
Fig. 2 is a schematic diagram of capturing images of a reference object, provided by an embodiment of the application;
Fig. 3 is a schematic diagram of establishing a relative coordinate system, provided by an embodiment of the application;
Fig. 4 is a flowchart of a concrete implementation of matching the image of the target object against each frame image of each reference object within the range of the initial position and determining a frame image of a matching reference object;
Fig. 5-a is a flowchart of a concrete implementation of obtaining the image distance between the image of the target object and each frame image of each reference object within the range of the initial position;
Fig. 5-b is a flowchart of one concrete implementation of the embodiment shown in Fig. 5-a;
Fig. 6 is a flowchart of another concrete implementation of obtaining the image distance between the image of the target object and each frame image of each reference object within the range of the initial position;
Fig. 7 is a flowchart of a concrete implementation of obtaining the distance between each first/second image block and one frame image of one reference object;
Fig. 8 is a flowchart of a concrete implementation of taking the sum of distances as the image distance between the image of the target object and this frame image of this reference object;
Fig. 9 is a flowchart of a concrete implementation of taking the smaller of two distance sums as the image distance between the image of the target object and this frame image of this reference object;
Fig. 10 is a flowchart of a concrete implementation of determining, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first, second and third image distances satisfy a preset image-matching condition;
Fig. 11 is a flowchart of a concrete implementation of obtaining the first color-moment feature distance between each first image block and one frame image of one reference object;
Fig. 12 is a schematic diagram of continuously moving a first image block within the preset image range corresponding to that block on one frame image of one reference object;
Fig. 13 is a flowchart of a concrete implementation of obtaining the first shape feature distance between each first image block and one frame image of one reference object;
Fig. 14 is a flowchart of a concrete implementation of obtaining the first texture feature distance between each first image block and one frame image of one reference object;
Fig. 15 is a flowchart of a concrete implementation of obtaining the size ratio of the zoomed image of the target object to the captured image of the target object;
Fig. 16 is a schematic structural diagram of a positioning device provided by an embodiment of the application;
Fig. 17 is a schematic structural diagram of the matching module provided by an embodiment of the application;
Fig. 18 is a schematic structural diagram of the acquisition submodule provided by an embodiment of the application;
Fig. 19 is another schematic structural diagram of the acquisition submodule provided by an embodiment of the application;
Fig. 20 is a schematic structural diagram of the first/second acquisition unit provided by an embodiment of the application;
Fig. 21 is a schematic structural diagram of the first determining unit provided by an embodiment of the application;
Fig. 22 is a schematic structural diagram of the second determining unit provided by an embodiment of the application;
Fig. 23 is a schematic structural diagram of the determining submodule provided by an embodiment of the application;
Fig. 24 is a schematic structural diagram of the first acquisition subunit provided by an embodiment of the application;
Fig. 25 is a schematic structural diagram of the second acquisition subunit provided by an embodiment of the application;
Fig. 26 is a schematic structural diagram of the third acquisition subunit provided by an embodiment of the application;
Fig. 27 is a schematic structural diagram of the size-ratio determination module provided by an embodiment of the application;
Fig. 28 is a schematic structural diagram of a positioning system provided by an embodiment of the application;
Fig. 29 is a schematic structural diagram of another positioning system provided by an embodiment of the application.
The terms "first", "second", "third", "fourth", etc. (if present) in the specification, the claims and the above drawings are used to distinguish similar elements and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described here can be implemented in orders other than those illustrated.
Embodiment
The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the invention without creative effort fall within the scope of protection of the invention.
Referring to Fig. 1, a flowchart of a positioning method provided by an embodiment of the application, the method comprises:
Step S11: capturing, by a user terminal, an image of a target object at the current position;
In the application, a distinctive marker can be selected as the target object: for example, a merchant's logo, or a door, window, ceiling, pillar, piece of text or pattern, or a landmark building, or a house number in a residential community.
Step S12: obtaining an initial position of the current location;
The initial position of the current location can be obtained by a conventional positioning method, for example using WiFi, GPS or base-station signals.
Step S13: obtaining, from a pre-stored image database, information on the reference objects within the range of the initial position; wherein the image database stores information on a number of reference objects, the information at least comprising: the coordinates of the reference object, several frame images of the reference object, the acquisition angle of the reference object, the focal length of the image capture device used when the reference-object images were captured in advance, and the distance between that image capture device and the reference object;
In the embodiment of the application, several frame images are captured for each reference object. They may be captured as video, or captured on site at intervals of the acquisition angle, as shown in Fig. 2: the collector moves along a circular arc centered on the reference object with radius L, and captures one image of the reference object every θ degrees. Fig. 2 shows only one acquisition scheme, with a fixed acquisition-angle interval; in a concrete implementation the angle interval need not be fixed, and likewise the distance need not be fixed. There may also be a certain pitch angle during capture, and that pitch angle may vary as well. Note that some reference objects may be only partly visible (e.g., a reference object at a corner), so the images of a reference object are captured within its visible range.
After each image of a reference object is captured, the focal length of the capture device for that image, the acquisition angle, and the distance between the capture device and the reference object are also saved. The acquisition angle is measured relative to a predefined 0° direction: for example, due north or due south may be defined as 0°. Whichever definition is chosen, it must not change afterwards; when capturing images of all reference objects, the same 0° reference is used to determine the acquisition angle.
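The acquisition record described above (focal length, acquisition angle and capture distance, plus the reference object's coordinates) can be sketched as a simple data structure. This is an illustrative sketch only; all field names and values below are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class ReferenceCapture:
    """One pre-captured frame of a reference object (all field names hypothetical)."""
    ref_id: str             # identifier of the reference object
    coord: tuple            # (x, y) coordinates of the reference object
    angle_deg: float        # acquisition angle relative to the predefined 0-degree direction
    focal_length_mm: float  # focal length of the capture device for this frame
    distance_m: float       # distance between the capture device and the reference object

# Example: one frame every theta = 30 degrees on an arc of radius 5 m around one object
theta = 30
frames = [ReferenceCapture("shop_logo_17", (12.0, 4.5), float(a), 4.2, 5.0)
          for a in range(0, 360, theta)]
print(len(frames))  # 12 frames cover the full circle
```

A real database would also store the frame images themselves; only the per-frame metadata is modeled here.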
When capturing images of reference objects, one image may contain multiple reference objects, for example several reference objects close to one another, or one reference object attached to the surface of another. In that case one of them can be designated the main reference object, and the distance between the image capture device and that main reference object is taken as the distance between the image capture device and the reference object.
As for the coordinates of a reference object: if the reference object is indoors, its relative position can be determined by establishing a relative coordinate system, as shown in Fig. 3, a schematic diagram of establishing a relative coordinate system provided by an embodiment of the application. The coordinate system can be established as follows:
A relative coordinate system is established for the building in advance, with some geographic position chosen as the origin; for example, the origin may be chosen at the first-floor entrance of the building (the origin can be chosen freely, but must not be changed once chosen). The X axis is chosen perpendicular to the gate and pointing inwards, the Y axis perpendicular to the X axis within the horizontal plane, and the Z axis perpendicular to the horizontal plane.
Once the relative coordinate system has been established, each reference object can be regarded as a coordinate point, and its X, Y and Z coordinates in the relative coordinate system can be recorded. The Z value represents the height of the reference object relative to the origin, while X and Y identify its unique position at that height; for example, Z may be the height of the reference object above the first floor, and X and Y the distances from the origin to the projections of the reference object onto the X and Y axes.
The position of an outdoor reference object can be determined by a positioning method such as GPS.
Step S14: matching the image of the target object against each frame image of each reference object within the range of the initial position, and determining a frame image of a reference object that matches the target object;
Step S15: obtaining the size ratio of a zoomed image of the target object to the captured image of the target object;
The zoomed image of the target object can be obtained by scaling the captured image of the target object by a preset zoom ratio.
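One way to picture the preset-zoom-ratio idea (a simplified sketch under assumptions, not the patent's concrete procedure, which is the subject of Fig. 15; the helper below compares image widths only): scale the captured image by each preset ratio and keep the ratio that best fits the matched reference frame.

```python
def best_scale_ratio(target_w, ref_w, ratios=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Return the preset zoom ratio that brings the target image's width
    closest to the matched reference frame's width (widths in pixels)."""
    return min(ratios, key=lambda r: abs(target_w * r - ref_w))

# A 320 px wide capture best matches a 480 px wide reference frame at ratio 1.5
print(best_scale_ratio(320, 480))  # 1.5
```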
Step S16: determining the distance between the user terminal and the target object according to the focal length of the user terminal, the focal length of the image capture device used when the matched reference-object image was captured in advance, the distance between that image capture device and the reference object, and the size ratio;
Concretely, the distance between the user terminal and the target object can be determined by the first formula:
D = (λ/μ) × L × β    (1)
where D is the distance between the user terminal and the target object; λ is the focal length of the user terminal's camera; μ is the focal length of the image capture device when the matched reference-object image was captured; L is the distance between the image capture device and the reference object at capture time; and β is the size ratio of the zoomed image of the target object to the captured image of the target object.
Here the size of an image may refer to its number of pixels.
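The first formula can be turned into a one-line helper; the numbers in the example are made up purely for illustration.

```python
def distance_to_target(lam, mu, L, beta):
    """First formula: D = (lambda / mu) * L * beta, where lambda is the user
    camera's focal length, mu the capture device's focal length, L the capture
    distance, and beta the size ratio of the zoomed to the captured image."""
    return (lam / mu) * L * beta

# Hypothetical values: 4 mm user camera, 8 mm capture device, 6 m capture
# distance, size ratio 2 -> D = (4/8) * 6 * 2 = 6 m
print(distance_to_target(4.0, 8.0, 6.0, 2.0))  # 6.0
```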
Step S17: taking the distance between the user terminal and the target object together with the acquisition angle of the matched reference object as the positioning result; or determining the position of the user terminal according to the distance between the user terminal and the target object, the coordinates of the matched reference object, and the acquisition angle of the matched reference object.
After the distance between the user terminal and the target object has been determined, that distance and the acquisition angle of the matched reference object can be fed back directly to the user as the positioning result, and the user can work out the specific location of the user terminal from them. For example, suppose the distance between the user terminal and the target object is L and the acquisition angle of the matched reference object is θ; the user's position is then the location obtained by taking the target object as the center, the distance L as the radius, and rotating by the angle θ from the predefined 0° reference.
Moreover, since the position of the reference object is known in advance, once the distance between the user terminal and the target object has been determined, the position of the user terminal can also be uniquely determined from that distance, the coordinates of the matched reference object, and the acquisition angle of the reference object. Concretely, let the coordinates of the reference object be (x, y) (either the relative coordinates described above, or longitude/latitude coordinates determined by GPS), let the distance between the user terminal and the target object be D, and let the acquisition angle of the matched reference object be α; the position (u, v) of the user terminal can then be determined by the second formula:
u = sin α × D + x
v = cos α × D + y    (2)
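The second formula maps the distance D, the acquisition angle α (measured from the predefined 0° direction) and the reference object's coordinates (x, y) to the terminal position (u, v). A direct sketch with made-up numbers:

```python
import math

def locate_terminal(D, alpha_deg, x, y):
    """Second formula: u = sin(alpha) * D + x, v = cos(alpha) * D + y."""
    a = math.radians(alpha_deg)
    return (math.sin(a) * D + x, math.cos(a) * D + y)

# 10 m from a reference object at (3, 4), acquisition angle 90 degrees
u, v = locate_terminal(10.0, 90.0, 3.0, 4.0)
print(round(u, 6), round(v, 6))  # 13.0 4.0
```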
In the positioning method provided by the application, information on reference objects is stored in advance, at least comprising: the coordinates of the reference object, several frame images of the reference object, the acquisition angle of each image of the reference object, the focal length of the image capture device used when the reference-object images were captured in advance, and the distance between that image capture device and the reference object. A user terminal captures an image of a target object at the current position, whose initial position can be obtained by a conventional positioning method; a frame image of a reference object matching the target object is then determined by image matching; the size ratio of a zoomed image of the target object to the captured image is obtained; the distance between the user terminal and the target object is determined from the focal length of the user terminal, the focal length of the image capture device used when the matched reference-object image was captured in advance, the distance between that image capture device and the reference object, and the size ratio; and that distance together with the acquisition angle of the matched reference object is taken as the positioning result, or the position of the user terminal is determined from the distance, the coordinates of the matched reference object, and its acquisition angle.
It follows that the positioning method provided by the embodiments of the application determines the position of the target object through the position of a reference object and the position of the target object relative to that reference object. The mobile terminal therefore needs no modification, and no extra calibration is required; and since the positions of the reference objects are collected in advance and can be accurate down to a particular floor of a building, or a particular location on a floor, the scheme improves positioning accuracy while reducing implementation cost.
In the above embodiment, preferably, the specific flow of matching the image of the object against each frame image of each reference substance within the initial position range to determine the frame image of the reference substance matching the object is shown in Figure 4 and can comprise:
Step S41: obtain the image distance between the image of the object and each frame image of each reference substance within the initial position range;
Specifically, image features can be used to obtain the distance between the image of the object and each frame image of each reference substance within the initial position range. For example, the color moment feature of the image of the object and the color moment feature of each frame image of each reference substance within the initial position range can be extracted, and the distance between the two color moment features calculated. Of course, the image distance can also be obtained from other image features.
Step S42: determine that a frame image of a reference substance whose image distance meets a preset image matching condition is the frame image of the reference substance matching the object.
For example, when the image distances between the image of the object and each frame image of each reference substance within the initial position range are obtained by the color moment feature, the frame image of the reference substance with the shortest image distance to the image of the object can be determined to be the frame image of the reference substance matching the object.
On the basis of the embodiment shown in Figure 4, the specific flow, provided by the embodiment of the present application, of obtaining the image distance between the image of the object and each frame image of each reference substance within the initial position range, as shown in Fig. 5-a, can comprise:
Step S51: evenly divide the image of the object into several first image blocks according to a preset first block size;
For each frame image of each reference substance within the initial position range, the following steps are performed to obtain the image distance between the image of the object and that frame image:
Step S52: obtain the distance between each first image block and the frame image of the reference substance;
Step S53: determine the sum of these distances as the image distance between the image of the object and this frame image of this reference substance.
In practical applications, a specific implementation of the embodiment shown in Fig. 5-a, as shown in Fig. 5-b, can comprise:
Step S511: obtain the distance between a first image block and a frame image of a reference substance;
Step S521: judge whether the first image block just processed is the last block; if so, enter step S541; otherwise, perform step S531;
Step S531: obtain the distance between the next first image block and this frame image of this reference substance, and return to step S521;
Step S541: determine the sum of the distances as the image distance between the image of the object and this frame image of this reference substance;
Step S551: judge whether the frame image in step S541 is the last frame image of the reference substance; if so, enter step S571; if not, perform step S561;
Step S561: obtain the distance between the first image block and another frame image of this reference substance, and return to step S521;
Step S571: judge whether the reference substance is the last reference substance within the initial position range; if so, end the flow; otherwise, obtain the distance between the first image block and a frame image of another reference substance, and return to step S521.
In the embodiment of the present application, the sum of the distances between all the first image blocks and a frame image of a reference substance is determined as the image distance between the image of the object and this frame image of this reference substance.
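The block-sum rule of steps S51 to S53 can be sketched as follows. This is not the patent's implementation: images are plain nested lists, and a simple mean absolute difference stands in for the color moment, shape and texture distances described later.

```python
def split_blocks(img, size):
    # Evenly divide an image (list of equal-length pixel rows)
    # into size x size first image blocks.
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h, size) for x in range(0, w, size)]

def block_dist(a, b):
    # Placeholder per-block distance; the patent uses color moment,
    # shape and texture feature distances here.
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

def image_distance(target, frame, size):
    # Steps S52-S53: sum the per-block distances over all first image blocks.
    return sum(block_dist(t, r) for t, r in
               zip(split_blocks(target, size), split_blocks(frame, size)))

target  = [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
frame_a = [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
frame_b = [[9, 9, 9, 9] for _ in range(4)]
# Step S42: the frame with the shortest image distance matches.
best = min([frame_a, frame_b], key=lambda f: image_distance(target, f, 2))
```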
On the basis of the embodiment shown in Figure 4, the specific flow, provided by another embodiment of the application, of obtaining the image distance between the image of the object and each frame image of each reference substance within the initial position range, as shown in Figure 6, can comprise:
Step S61: evenly divide the image of the object into several first image blocks according to a preset first block size;
Step S62: evenly divide the image of the object into several second image blocks according to a preset second block size, the second block size being smaller than the first block size;
Specifically, in the second division, the size of the image blocks can be reduced by quad-tree partition; for example, if the size of an image block in the first division is m×m, then in the second division the size of an image block can be (m/2)×(m/2).
For each frame image of each reference substance within the initial position range, the following steps are performed to obtain the image distance between the image of the object and that frame image:
Step S63: obtain the distance between each first image block and a frame image of a reference substance;
Step S64: obtain the distance between each second image block and this frame image of this reference substance;
Step S65: calculate the two sums of distances respectively, and determine the smaller of the two sums as the image distance between the image of the object and this frame image of this reference substance.
In practical applications, the flow shown in Figure 6 can be implemented according to the principle of steps S511 to S571 above; to save space, the application does not repeat it here.
For convenience of description, the distance between each first image block and a frame image of a reference substance is denoted the first distance, and the distance between each second image block and this frame image of this reference substance is denoted the second distance. In step S65, the sum of the first distances and the sum of the second distances are calculated respectively, and the smaller of the two sums is determined as the image distance between the image of the object and this frame image of this reference substance.
Unlike the embodiment shown in Figure 5, the embodiment of the present application divides the image of the object twice, calculates an image distance between the object image and the reference image for each division (namely, the sum of the first distances and the sum of the second distances), and determines the smaller of the two as the image distance between the image of the object and this frame image of this reference substance.
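The selection rule of step S65 reduces to taking the smaller of two sums; a one-line sketch, where the per-block distances from each division are assumed to have been computed already:

```python
def two_division_distance(first_dists, second_dists):
    # Step S65: first_dists are the per-block distances from the first
    # division, second_dists from the finer quad-tree division; the image
    # distance is the smaller of the two sums.
    return min(sum(first_dists), sum(second_dists))

# Toy per-block distances for the two divisions.
d = two_division_distance([3, 4], [1, 2, 2, 1])
```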
In the above embodiments, preferably, the color moment feature, the shape feature and the texture feature can each be applied to obtain the distance between each first/second image block and a frame image of a reference substance. The specific flow, provided by the embodiment of the present application, of obtaining the distance between each first/second image block and a frame image of a reference substance, as shown in Figure 7, can comprise:
Step S71: apply the color moment feature to obtain the first/second color moment feature distance between each first/second image block and a frame image of a reference substance;
Specifically, when the image of the object is divided only once, the color moment feature is applied to obtain the distance between each first image block and a frame image of a reference substance, denoted the first color moment feature distance;
When the image of the object is divided twice, the color moment feature can also be applied to obtain the distance between each second image block and this frame image of this reference substance, denoted the second color moment feature distance.
Step S72: apply the shape feature to obtain the first/second shape feature distance between each first/second image block and this frame image of this reference substance;
Specifically, when the image of the object is divided only once, the shape feature is applied to obtain the distance between each first image block and a frame image of a reference substance, denoted the first shape feature distance;
When the image of the object is divided twice, the shape feature can also be applied to obtain the distance between each second image block and this frame image of this reference substance, denoted the second shape feature distance.
Step S73: apply the texture feature to obtain the first/second texture feature distance between each first/second image block and this frame image of this reference substance.
Specifically, when the image of the object is divided only once, the texture feature is applied to obtain the distance between each first image block and a frame image of a reference substance, denoted the first texture feature distance;
When the image of the object is divided twice, the texture feature can also be applied to obtain the distance between each second image block and this frame image of this reference substance, denoted the second texture feature distance.
It should be noted that the execution order of steps S71, S72 and S73 is not limited to the order given in the above embodiment; the three steps can be performed in any order, and no specific limitation is made here.
It should also be noted that in this application "/" means "or"; "the first/second image block" means "the first image block or the second image block".
In the above embodiment, when the image of the object is divided only once, the specific flow of determining the sum of the distances as the image distance between the image of the object and this frame image of this reference substance, as shown in Figure 8, can comprise:
Step S81: determine the sum of the first color moment feature distances as the first image distance between the image of the object and this frame image of this reference substance;
Since each first image block corresponds to one first color moment feature distance, in the embodiment of the present application the sum of the first color moment feature distances of all the first image blocks of the image of the object is determined as the first image distance between the image of the object and this frame image of this reference substance.
Step S82: determine the sum of the first shape feature distances as the second image distance between the image of the object and this frame image of this reference substance;
Since each first image block corresponds to one first shape feature distance, in the embodiment of the present application the sum of the first shape feature distances of all the first image blocks of the image of the object is determined as the second image distance between the image of the object and this frame image of this reference substance.
Step S83: determine the sum of the first texture feature distances as the third image distance between the image of the object and this frame image of this reference substance;
Since each first image block corresponds to one first texture feature distance, in the embodiment of the present application the sum of the first texture feature distances of all the first image blocks of the image of the object is determined as the third image distance between the image of the object and this frame image of this reference substance.
Accordingly, determining, in the images of the reference substances within the initial position range, that a frame image of a reference substance whose image distance meets the preset image matching condition is the frame image of the reference substance matching the object is specifically:
In the images of the reference substances within the initial position range, determine that a frame image of a reference substance whose first image distance, second image distance and third image distance meet the preset image matching condition is the frame image of the reference substance matching the object.
It should be noted that the execution order of steps S81, S82 and S83 is not limited to the order given in the above embodiment; the three steps can be performed in any order, and no specific limitation is made here.
In the above embodiment, when the image of the object is divided twice, the specific flow of determining the smaller of the two sums as the image distance between the image of the object and this frame image of this reference substance, as shown in Figure 9, can comprise:
Step S91: determine the smaller of the sum of the first color moment feature distances and the sum of the second color moment feature distances as the first image distance between the image of the object and this frame image of this reference substance;
Specifically, when the image of the object is evenly divided into several first image blocks according to the preset first block size, the color moment feature is applied to obtain the first color moment feature distance between each first image block and a frame image of a reference substance, and the sum of the first color moment feature distances is calculated;
When the image of the object is evenly divided into several second image blocks according to the preset second block size (the second block size being smaller than the first block size), the color moment feature is applied to obtain the second color moment feature distance between each second image block and this frame image of this reference substance, and the sum of the second color moment feature distances is calculated;
The smaller of the two sums is determined as the first image distance between the image of the object and this frame image of this reference substance.
Step S92: determine the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances as the second image distance between the image of the object and this frame image of this reference substance;
Specifically, when the image of the object is evenly divided into several first image blocks according to the preset first block size, the shape feature is applied to obtain the first shape feature distance between each first image block and a frame image of a reference substance, and the sum of the first shape feature distances is calculated;
When the image of the object is evenly divided into several second image blocks according to the preset second block size (the second block size being smaller than the first block size), the shape feature is applied to obtain the second shape feature distance between each second image block and this frame image of this reference substance, and the sum of the second shape feature distances is calculated;
The smaller of the two sums is determined as the second image distance between the image of the object and this frame image of this reference substance.
Step S93: determine the smaller of the sum of the first texture feature distances and the sum of the second texture feature distances as the third image distance between the image of the object and this frame image of this reference substance;
Specifically, when the image of the object is evenly divided into several first image blocks according to the preset first block size, the texture feature is applied to obtain the first texture feature distance between each first image block and a frame image of a reference substance, and the sum of the first texture feature distances is calculated;
When the image of the object is evenly divided into several second image blocks according to the preset second block size (the second block size being smaller than the first block size), the texture feature is applied to obtain the second texture feature distance between each second image block and this frame image of this reference substance, and the sum of the second texture feature distances is calculated;
The smaller of the two sums is determined as the third image distance between the image of the object and this frame image of this reference substance.
Accordingly, determining, in the images of the reference substances within the initial position range, that a frame image of a reference substance whose image distance meets the preset image matching condition is the frame image of the reference substance matching the object is specifically:
In the images of the reference substances within the initial position range, determine that a frame image of a reference substance whose first image distance, second image distance and third image distance meet the preset image matching condition is the frame image of the reference substance matching the object.
It should be noted that the execution order of steps S91, S92 and S93 is not limited to the order given in the above embodiment; the three steps can be performed in any order, and no specific limitation is made here.
In the above embodiments, preferably, in order to improve matching speed, when matching the image blocks against the images of the reference substances, the color moment feature can first be applied to match the image blocks against each frame image of each reference substance within the initial position range, obtaining the images of the reference substances for which color moment matching succeeds; the shape feature and the texture feature are then applied to match the image blocks against only those successfully matched images.
Specifically, the flow, provided by the embodiment of the present application, of determining, among the images of the reference substances within the initial position range, a frame image of a reference substance whose first image distance, second image distance and third image distance meet the preset image matching condition, as shown in Figure 10, can comprise:
Step S101: among the images of the reference substances within the initial position range, determine the images of the reference substances whose first image distance is less than a first preset distance threshold;
That is, the color moment feature is first used to determine a subset of the reference substance images; this subset contains only the images of the reference substances for which color moment matching succeeds.
Step S102: from the reference substance images so determined, determine that a frame image of a reference substance for which the weighted sum of the second image distance and the third image distance is less than a preset second distance threshold is a frame image of a reference substance whose first image distance, second image distance and third image distance meet the preset image matching condition.
In this step, from the subset of reference substance images determined by the color moment feature, the frame image for which the weighted sum of the second image distance and the third image distance is less than the preset second distance threshold is determined to be such a frame image.
The weight of the second image distance and the weight of the third image distance can be equal (both 0.5), or can be adjusted according to experience; no specific limitation is made here.
In the embodiment of the present application, the relatively simple color moment matching first eliminates most reference substance images, whose colors differ greatly from those of the image blocks; this saves the time of extracting and matching the more complex features afterwards.
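A hedged sketch of the two-stage filter of Figure 10; the field names, the threshold values and the 0.5/0.5 weights are illustrative, not from the patent:

```python
def match_frames(frames, t1, t2, w2=0.5, w3=0.5):
    # Stage 1 (step S101): keep frames whose first image distance
    # (color moment) is below the first preset distance threshold t1.
    subset = [f for f in frames if f["d1"] < t1]
    # Stage 2 (step S102): among those, keep frames whose weighted sum of
    # the second (shape) and third (texture) image distances is below t2.
    return [f for f in subset if w2 * f["d2"] + w3 * f["d3"] < t2]

frames = [
    {"id": "a", "d1": 0.1, "d2": 0.2, "d3": 0.2},
    {"id": "b", "d1": 0.9, "d2": 0.1, "d3": 0.1},  # rejected at stage 1
    {"id": "c", "d1": 0.1, "d2": 0.9, "d3": 0.9},  # rejected at stage 2
]
matched = match_frames(frames, t1=0.5, t2=0.5)
```

The point of the design is that stage 1 only needs the cheap color moment distance, so the shape and texture distances never have to be computed for frames it rejects.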
In the above embodiments, preferably, for each first image block, the flow shown in Figure 11 is performed to obtain the first color moment feature distance between the first image block and a frame image of a reference substance, and can comprise:
Step S111: continuously move the first image block within the preset image range corresponding to this first image block on a frame image of a reference substance; each time the first image block is moved, use the color moment feature to calculate the distance between this first image block and the image block of the reference substance image currently covered by it;
The preset image range corresponding to a first image block on a frame image of a reference substance refers to the image block at the same position when the reference substance image is divided in the same way, plus the expanded border region of that image block. Figure 12, provided by the embodiment of the present application, is a schematic diagram of continuously moving a first image block within its corresponding preset image range on a frame image of a reference substance. For convenience of description, corresponding image blocks of the image of the object and of the image of the reference substance are marked with the same number in Figure 12. For example, the image block marked "1" in the image of the reference substance together with its expanded border region, namely the shaded area marked "1" in the image of the reference substance, is the preset image range corresponding to the first image block marked "1" in the image of the object; likewise, the image block marked "7" in the image of the reference substance together with its expanded border region, namely the shaded area marked "7", is the preset image range corresponding to the first image block marked "7" in the image of the object.
Specifically, the color moment feature of an image can be characterized by the first-order, second-order and third-order central moments of the colors of the image, where:
the first-order central moment is: μ = (1/n) Σ_i Σ_j p_ij ;
the second-order central moment is: σ = [ (1/n) Σ_i Σ_j (p_ij − μ)² ]^(1/2) ;
the third-order central moment is: s = [ (1/n) Σ_i Σ_j (p_ij − μ)³ ]^(1/3) ;
where n is the total number of pixels of the image and p_ij is the synthesized pixel value at position (i, j) of the two-dimensional image coordinates. The synthesized pixel value can be a synthesized HSI (Hue, Saturation, Intensity) value, a synthesized YUV value, or a synthesized value in another color space, for example a synthesized RGB value.
In this way, an image can be characterized by the color moment feature vector (μ, σ, s), and the distance between two images (say A and B) can be characterized by the distance between their feature vectors; specifically, the distance D(A, B) between image A and image B can be expressed as:
D(A, B) = |μ_A − μ_B| + |σ_A − σ_B| + |s_A − s_B|
Step S112: determine the shortest of these distances as the first color moment feature distance between this first image block and this frame image of this reference substance.
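A minimal sketch of the color moment vector and the distance D(A, B) used in steps S111 and S112, over flat lists of synthesized pixel values; the signed cube root for the third moment is an assumption, since the third central moment can be negative:

```python
def color_moments(pixels):
    # (mu, sigma, s): first-, second- and third-order central moments
    # of a flat list of synthesized pixel values.
    n = len(pixels)
    mu = sum(pixels) / n
    sigma = (sum((p - mu) ** 2 for p in pixels) / n) ** 0.5
    m3 = sum((p - mu) ** 3 for p in pixels) / n
    s = abs(m3) ** (1 / 3) * (1 if m3 >= 0 else -1)  # signed cube root
    return mu, sigma, s

def moment_distance(a, b):
    # D(A, B) = |mu_A - mu_B| + |sigma_A - sigma_B| + |s_A - s_B|
    return sum(abs(x - y) for x, y in zip(color_moments(a), color_moments(b)))
```

For step S112 one would evaluate `moment_distance` once per shift of the first image block within its preset image range and keep the minimum.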
In the above embodiments, preferably, for each first image block, the flow shown in Figure 13 is performed to obtain the first shape feature distance between the first image block and a frame image of a reference substance, and can comprise:
Step S131: use the shape feature to calculate the distance between the first image block and the image block corresponding to this first image block on this frame image of this reference substance;
Taking Figure 12 as an example again, the image block marked "6" in the image of the reference substance is the image block corresponding to the image block marked "6" in the image of the object.
Specifically, the shape feature of an image can be characterized by the Fourier descriptor of the contour of the image; that is, the distance between the Fourier descriptors of the contours of two images is determined as the distance between the two images.
Step S132: determine this distance as the first shape feature distance between this first image block and this frame image of this reference substance.
In the above embodiments, preferably, for each first image block, the flow shown in Figure 14 is performed to obtain the first texture feature distance between the first image block and a frame image of a reference substance, and can comprise:
Step S141: use the texture feature to calculate the distance between the first image block and the image block corresponding to this first image block on this frame image of this reference substance;
Specifically, the texture feature of an image can be characterized by its gray-level co-occurrence matrix; that is, the distance between the gray-level co-occurrence matrices of two images is determined as the distance between the two images.
Step S142: determine this distance as the first texture feature distance between this first image block and this frame image of this reference substance.
In the above embodiments, preferably, the flow of obtaining the size ratio between the zoomed image of the object and the gathered image of the object, as shown in Figure 15, can comprise:
Step S151: zoom the image of the object according to each preset zoom ratio, obtaining the zoomed image of the object at each zoom ratio;
The zoom ratios can be determined from a preset zoom base a (0 < a < 1); specifically, the zoom ratios are a^n, i.e. each zoom reduces the size of the image to a times its previous size. For example, if the original size of the image of the object is M, then after the first zoom the size of the image of the object becomes M×a, after the second zoom M×a², after the third zoom M×a³, and so on, until a preset number of zooms is reached, or the ratio of the size of the zoomed image to the size of the original image reaches a preset value.
Step S152: continuously move the zoomed image of the object at each zoom ratio over the image of the reference substance, and calculate the correlation between the zoomed image of the object and the overlapped part of the image of the determined reference substance;
Step S153: determine the ratio of the size of the zoomed image with the largest correlation to the size of the gathered image of the object as the size ratio between the zoomed image of the object and the gathered image of the object.
In the embodiment of the present application, the zoomed image of the object at which the correlation is largest is taken as the zoomed image used when deriving the size ratio, and the ratio of the size of this zoomed image to the size of the gathered image of the object is determined as the size ratio between the zoomed image of the object and the gathered image of the object.
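The scale search of Figure 15 can be sketched as below; `correlation(ratio)` stands in for the sliding-window correlation of step S152, and the base a = 0.8 and the step count are illustrative assumptions:

```python
def best_size_ratio(correlation, a=0.8, steps=5):
    # Step S151: candidate zoom ratios a, a**2, ..., a**steps.
    ratios = [a ** n for n in range(1, steps + 1)]
    # Step S153: the ratio whose zoomed image correlates best wins.
    return max(ratios, key=correlation)

# Toy correlation peaking at ratio 0.64 (i.e. a**2).
ratio = best_size_ratio(lambda r: -abs(r - 0.64))
```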
Preferably, in order to reduce the storage space occupied by the image database, the image features of the images of the reference substances can be extracted, and the image features, rather than the images themselves, saved in the image database. The image features comprise the color moment feature, the texture feature and the shape feature, and can also comprise other features, such as spatial relationship features. In this case, another embodiment of the present invention provides a localization method which differs from the method shown in Figure 1 in that, before the frame image of the reference substance matching the object is determined, the method further comprises: extracting the image features of the image of the object;
Correspondingly, matching the image of the object against each frame image of each reference substance within the initial position range to determine the frame image of the reference substance matching the object is specifically: matching the image features of the image of the object against the image features of each frame image of each reference substance within the initial position range, and determining the frame image of the reference substance matching the object. The other specific embodiments are the same as the embodiments above; the only change is that every process performed on images is instead performed on the image features of images. To save space, refer to the related content above, which is not repeated here.
Corresponding to the method embodiments, an embodiment of the present application further provides a positioning device. Fig. 16 shows a schematic structural diagram of a positioning device provided by an embodiment of the present application; this device may be applied to a mobile terminal and may comprise:
an image capture module 161, an initial position acquisition module 162, a reference object information acquisition module 163, a matching module 164, a size ratio determination module 165, a distance determination module 166 and a positioning result determination module 167; wherein
the image capture module 161 is configured to capture an image of an object at the current location;
the initial position acquisition module 162 is configured to obtain an initial position of the current location;
the reference object information acquisition module 163 is connected with the initial position acquisition module 162 and is configured to obtain, from a pre-stored image database, information of the reference objects within the range of the initial position; wherein information of a number of reference objects is stored in the image database, and the information at least comprises: the coordinates of the reference object, the images of the reference object, the capture angles of the reference object, the focal length of the image capture device used when the images of the reference object were captured in advance, and the distance between that image capture device and the reference object;
the matching module 164 is connected with the image capture module 161 and the reference object information acquisition module 163 respectively, and is configured to match the image of the object against each frame image of each reference object within the range of the initial position, and to determine a frame image of the reference object matching the object;
the size ratio determination module 165 is connected with the image capture module 161 and the matching module 164 respectively, and is configured to obtain the size ratio of the zoomed image of the object to the captured image of the object;
the distance determination module 166 is connected with the image capture module 161, the reference object information acquisition module 163 and the matching module 164 respectively, and is configured to determine the distance between the current location and the object according to the focal length of the image capture module, the focal length of the image capture device used when the image of the determined reference object was captured in advance, the distance between that image capture device and the determined reference object, and the size ratio.
The distance between the current location and the object is the distance between the mobile terminal and the object.
The positioning result determination module 167 is connected with the reference object information acquisition module 163, the matching module 164 and the distance determination module 166 respectively, and is configured to take the distance between the current location and the object together with the capture angle of the determined reference object as the positioning result; or to determine the positioned position of the current location according to the distance between the current location and the object, the coordinates of the determined reference object, and the capture angle of the determined reference object.
The positioned position of the current location is the positioned position of the mobile terminal.
In the positioning device provided by the embodiment of the present application, information of reference objects is stored in advance, the information at least comprising: the coordinates of the reference object, a number of frame images of the reference object, the capture angles of the images of the reference object, the focal length of the image capture device used when the images of the reference object were captured in advance, and the distance between that image capture device and the reference object. The image capture module captures an image of an object at the current location; the initial position can be obtained by a conventional positioning method; then, through image matching, the image of the reference object matching the object is determined; the size ratio of the zoomed image of the object to the captured image of the object is obtained; the distance between the mobile terminal and the object is determined according to the focal length of the image capture module, the focal length of the image capture device used when the image of the determined reference object was captured in advance, the distance between that image capture device and the reference object, and the size ratio; and the distance between the mobile terminal and the object together with the capture angle of the determined reference object is taken as the positioning result, or the position of the mobile terminal is determined according to the distance between the mobile terminal and the object, the coordinates of the determined reference object, and the capture angle of the determined reference object.
It can thus be seen that the positioning device provided by the embodiment of the present application determines the position of the object by reference to the position of a reference object and the position of the object relative to that reference object. Therefore, the mobile terminal does not need to be modified, and no additional calibration is needed. Because the positions of the reference objects are collected in advance, they can be accurate to a particular indoor floor or a particular location on a floor; the positioning technical solution provided by the embodiment of the present application therefore reduces implementation cost while improving positioning accuracy.
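The distance determination described above can be illustrated with a pinhole-camera sketch. Under the pinhole model, an object of physical width W imaged at distance d with focal length f has image width w = f·W/d; writing this for both the reference capture and the current capture and dividing eliminates W. The explicit formula and all variable names below are an illustrative assumption; the embodiment itself does not state a formula.

```python
def distance_to_object(f_current, f_reference, d_reference, size_ratio):
    """Estimate the camera-to-object distance under the pinhole model.

    size_ratio: size of the zoomed image (scaled until it matches the
    stored reference image) divided by the size of the captured image.
    Hypothetical derivation: w = f * W / d for both captures, so
    d_current = size_ratio * d_reference * f_current / f_reference.
    """
    return size_ratio * d_reference * f_current / f_reference

# Example: the reference photo was taken 10 m away with a 50 mm lens; the
# current capture, taken with a 25 mm lens, must be enlarged 4x to match.
d = distance_to_object(f_current=25.0, f_reference=50.0,
                       d_reference=10.0, size_ratio=4.0)
# d == 20.0
```

The same relation underlies both the terminal-only and the terminal-plus-server embodiments, since both consume the same four stored quantities.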
On the basis of the embodiment shown in Fig. 16, Fig. 17 shows a schematic structural diagram of the matching module 164 provided by an embodiment of the present application, which may comprise:
an acquisition submodule 171 and a determination submodule 172; wherein
the acquisition submodule 171 is configured to obtain the image distance between the image of the object and each frame image of each reference object within the range of the initial position;
the determination submodule 172 is connected with the acquisition submodule 171 and is configured to determine, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose image distance meets a preset image matching condition as the frame image of the reference object matching the object.
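As a minimal sketch of this matching step, the preset matching condition may for instance be "smallest image distance". The data layout and the toy distance function below are illustrative assumptions, not part of the embodiment.

```python
def match_reference(object_image, reference_frames, image_distance):
    """Return the (reference_id, frame_image) pair whose frame image has
    the smallest image distance to the object image.

    reference_frames: list of (reference_id, frame_image) tuples for all
    reference objects within the initial-position range.
    image_distance: callable computing a distance between two images.
    """
    return min(reference_frames,
               key=lambda rf: image_distance(object_image, rf[1]))

# Toy usage with 1-D "images" and an absolute-difference distance:
frames = [("door", [10, 10]), ("sign", [3, 4]), ("pillar", [9, 1])]
best = match_reference([3, 5], frames,
                       lambda a, b: sum(abs(x - y) for x, y in zip(a, b)))
# best == ("sign", [3, 4])
```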
On the basis of the embodiment shown in Fig. 17, Fig. 18 shows a schematic structural diagram of the acquisition submodule 171 provided by an embodiment of the present application, which may comprise:
a first division unit, configured to evenly divide the image of the object into a number of first image blocks according to a preset first pixel-block size;
a first acquiring unit, configured to obtain, for each frame image of each reference object within the range of the initial position, the distance between each first image block and that frame image of that reference object;
a first determining unit, configured to determine the sum of these distances as the image distance between the image of the object and that frame image of that reference object.
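The division-and-sum scheme of the first division, acquiring and determining units can be sketched as follows. The per-block distance function is left abstract, since the embodiment computes it from color moment, shape and texture features; the image representation (a list of rows) is an assumption.

```python
def split_into_blocks(image, block_h, block_w):
    """Evenly divide a 2-D image (list of rows) into block_h x block_w
    blocks; the image dimensions are assumed to be exact multiples."""
    h, w = len(image), len(image[0])
    return [[row[x:x + block_w] for row in image[y:y + block_h]]
            for y in range(0, h, block_h)
            for x in range(0, w, block_w)]

def image_distance(object_image, ref_frame, block_h, block_w, block_dist):
    """Sum of per-block distances between the first image blocks of the
    object image and a reference frame image (first determining unit)."""
    blocks = split_into_blocks(object_image, block_h, block_w)
    return sum(block_dist(b, ref_frame) for b in blocks)
```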
On the basis of the embodiment shown in Fig. 17, Fig. 19 shows another schematic structural diagram of the acquisition submodule 171 provided by an embodiment of the present application, which may comprise:
a first division unit, configured to evenly divide the image of the object into a number of first image blocks according to a preset first pixel-block size;
a second division unit, configured to evenly divide the image of the object into a number of second image blocks according to a preset second pixel-block size, the second pixel-block size being smaller than the first pixel-block size;
a first acquiring unit, connected with the first division unit and configured to obtain, for each frame image of each reference object within the range of the initial position, the distance between each first image block and that frame image of that reference object;
a second acquiring unit, connected with the second division unit and configured to obtain, for each frame image of each reference object within the range of the initial position, the distance between each second image block and that frame image of that reference object;
a second determining unit, connected with the first acquiring unit and the second acquiring unit respectively, and configured to calculate the sum of each of the two sets of distances and to determine the smaller of the two sums as the image distance between the image of the object and that frame image of that reference object.
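The two-scale variant of Fig. 19 can be sketched in a few lines: compute the per-block distance sum at a coarse and at a finer block size, and keep the smaller sum. As above, the per-block distance and the row-list image representation are illustrative assumptions.

```python
def two_scale_image_distance(object_image, ref_frame,
                             coarse, fine, block_dist):
    """Image distance per the second determining unit: the smaller of
    the block-distance sums at two block sizes (coarse and fine)."""
    def dist_at(block_h, block_w):
        h, w = len(object_image), len(object_image[0])
        blocks = [[row[x:x + block_w]
                   for row in object_image[y:y + block_h]]
                  for y in range(0, h, block_h)
                  for x in range(0, w, block_w)]
        return sum(block_dist(b, ref_frame) for b in blocks)
    return min(dist_at(*coarse), dist_at(*fine))
```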
In the embodiments shown in Fig. 18 or Fig. 19, Fig. 20 shows a schematic structural diagram of the first/second acquiring unit, which may comprise:
a first obtaining subunit 201, configured to apply a color moment feature to obtain the first/second color moment feature distance between each of the first/second image blocks and a frame image of a reference object;
a second obtaining subunit 202, configured to apply a shape feature to obtain the first/second shape feature distance between each of the first/second image blocks and that frame image of that reference object;
a third obtaining subunit 203, configured to apply a texture feature to obtain the first/second texture feature distance between each of the first/second image blocks and that frame image of that reference object.
That is, whether in the first acquiring unit or the second acquiring unit, all three features, namely the color moment feature, the shape feature and the texture feature, are applied to obtain the distances between the image blocks and the image of the reference object.
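Of the three features, the color moment feature has a standard, compact definition that is worth sketching: the first three moments (mean, standard deviation, skewness) of the pixel values of each channel. The embodiment does not fix the exact moment set or the distance metric, so both are assumptions here; an L1 distance over the moment vectors is used for illustration.

```python
def color_moments(pixels):
    """First three color moments (mean, standard deviation, skewness)
    of a flat list of single-channel pixel values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    skew_cubed = sum((p - mean) ** 3 for p in pixels) / n
    # signed cube root, so negative skew is preserved
    skew = abs(skew_cubed) ** (1 / 3) * (1 if skew_cubed >= 0 else -1)
    return (mean, std, skew)

def color_moment_distance(block_a, block_b):
    """L1 distance between the color moment vectors of two blocks."""
    return sum(abs(x - y) for x, y in
               zip(color_moments(block_a), color_moments(block_b)))
```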
On the basis of the embodiment shown in Fig. 20, Fig. 21 shows a schematic structural diagram of the first determining unit provided by an embodiment of the present application, which may comprise:
a first determining subunit 211, configured to determine the sum of the first color moment feature distances as the first image distance between the image of the object and that frame image of that reference object;
a second determining subunit 212, configured to determine the sum of the first shape feature distances as the second image distance between the image of the object and that frame image of that reference object;
a third determining subunit 213, configured to determine the sum of the first texture feature distances as the third image distance between the image of the object and that frame image of that reference object;
the determination submodule 172 being specifically configured to determine, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition.
On the basis of the embodiment shown in Fig. 20, Fig. 22 shows a schematic structural diagram of the second determining unit provided by an embodiment of the present application, which may comprise:
a third determining subunit 221, configured to determine the smaller of the sum of the first color moment feature distances and the sum of the second color moment feature distances as the first image distance between the image of the object and that frame image of that reference object;
a fourth determining subunit 222, configured to determine the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances as the second image distance between the image of the object and that frame image of that reference object;
a fifth determining subunit 223, configured to determine the smaller of the sum of the first texture feature distances and the sum of the second texture feature distances as the third image distance between the image of the object and that frame image of that reference object;
the determination submodule 172 being specifically configured to determine, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition as the frame image of the reference object matching the object.
On the basis of the embodiments shown in Fig. 21 or Fig. 22, Fig. 23 shows a schematic structural diagram of the determination submodule 172 provided by an embodiment of the present application, which may comprise:
a third determining unit 231, configured to determine, among the images of the reference objects within the range of the initial position, the images of the reference objects whose first image distance is less than a first preset distance threshold;
a fourth determining unit 232, configured to determine, from the images of the reference objects thus determined, a frame image of a reference object for which the weighted sum of the second image distance and the third image distance of that image is less than a preset second distance threshold, as the frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition.
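The two-stage matching condition of Fig. 23 can be sketched as a filter pipeline: first keep candidates whose color moment distance d1 is below the first threshold, then among those select one whose weighted sum of shape distance d2 and texture distance d3 is below the second threshold. The weights, the candidate tuple layout and the "smallest weighted sum wins" tie-break are all illustrative assumptions; the embodiment only requires that the thresholds be met.

```python
def select_matching_frame(candidates, t1, t2, w_shape=0.5, w_texture=0.5):
    """Two-stage matching condition per Fig. 23.

    candidates: list of (frame_id, d1, d2, d3) where d1/d2/d3 are the
    first/second/third image distances. Returns a frame_id or None.
    """
    survivors = [c for c in candidates if c[1] < t1]          # stage 1
    hits = [c for c in survivors
            if w_shape * c[2] + w_texture * c[3] < t2]        # stage 2
    if not hits:
        return None
    return min(hits, key=lambda c: w_shape * c[2] + w_texture * c[3])[0]

cands = [("a", 5.0, 1.0, 1.0), ("b", 1.0, 4.0, 4.0), ("c", 1.5, 0.5, 0.5)]
# t1=2.0 removes "a"; t2=1.0 removes "b" (weighted sum 4.0); "c" survives.
match = select_matching_frame(cands, t1=2.0, t2=1.0)
# match == "c"
```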
On the basis of the embodiment shown in Fig. 20, Fig. 24 shows a schematic structural diagram of the first obtaining subunit 201 provided by an embodiment of the present application, which may comprise:
a first computing unit 241, configured to, for each first image block, continuously move the first image block within a preset image range corresponding to that first image block on a frame image of a reference object and, on each move of the first image block, use the color moment feature to calculate the distance between the first image block and the image block of the reference object image covered by it;
a color moment feature distance determining unit 242, configured to, for each first image block, determine the shortest of these distances as the first color moment feature distance between that first image block and that frame image of that reference object.
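The sliding search of Fig. 24 can be sketched as follows: move the block over candidate positions of the reference frame and keep the minimum block-to-covered-region distance. Searching the whole frame instead of the preset sub-range, and the generic `block_dist` callable, are simplifying assumptions.

```python
def sliding_color_moment_distance(block, frame, block_dist, stride=1):
    """Slide a block over every position of a (larger) reference frame
    and return the smallest block-to-covered-region distance.

    block, frame: 2-D lists of pixel values; block_dist: distance between
    two equally sized regions (e.g. a color moment distance).
    """
    bh, bw = len(block), len(block[0])
    fh, fw = len(frame), len(frame[0])
    best = None
    for y in range(0, fh - bh + 1, stride):
        for x in range(0, fw - bw + 1, stride):
            covered = [row[x:x + bw] for row in frame[y:y + bh]]
            d = block_dist(block, covered)
            best = d if best is None else min(best, d)
    return best
```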
On the basis of the embodiment shown in Fig. 20, Fig. 25 shows a schematic structural diagram of the second obtaining subunit 202 provided by an embodiment of the present application, which may comprise:
a second computing unit 251, configured to, for each first image block, use the shape feature to calculate the distance between the first image block and the image block corresponding to that first image block on that image of that reference object;
a shape feature distance determining unit 252, configured to, for each first image block, determine the distance calculated by the second computing unit 251 as the first shape feature distance between that first image block and that frame image of that reference object.
On the basis of the embodiment shown in Fig. 20, Fig. 26 shows a schematic structural diagram of the third obtaining subunit 203 provided by an embodiment of the present application, which may comprise:
a third computing unit 261, configured to, for each first image block, use the texture feature to calculate the distance between the first image block and the image block corresponding to that first image block on that image of that reference object;
a texture feature distance determining unit 262, configured to, for each first image block, determine the distance calculated by the third computing unit 261 as the first texture feature distance between that first image block and that frame image of that reference object.
In the above embodiments, preferably, Fig. 27 shows a schematic structural diagram of the size ratio determination module 165 provided by an embodiment of the present application, which may comprise:
a zoom submodule 271, configured to zoom the image of the object according to each preset zoom ratio, obtaining a zoomed image of the object under each zoom ratio;
a correlation determination submodule 272, configured to continuously move the zoomed image corresponding to each zoom ratio of the object over the image of the determined reference object, and to calculate the degree of correlation between the zoomed image of the object and the overlapped image of the determined reference object;
a size ratio determination submodule 273, configured to determine the ratio of the size of the zoomed image with the maximal degree of correlation to the size of the captured image of the object as the size ratio of the zoomed image of the object to the image of the object.
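The zoom-and-correlate search of Fig. 27 can be sketched as a multi-scale template match: for each preset zoom ratio, slide the zoomed object image over the reference image and record the best correlation; the ratio with the globally best correlation is the size ratio. The `zoom` and `correlate` helpers are assumed to be supplied by the caller, and a list-of-rows image representation is an illustrative assumption.

```python
def best_size_ratio(object_image, reference_image, zoom_ratios,
                    zoom, correlate):
    """Return the zoom ratio whose zoomed object image achieves the
    highest overlap correlation anywhere on the reference image.

    zoom(image, r) scales an image; correlate(a, b) scores two equally
    sized regions (higher means more similar)."""
    best_ratio, best_corr = None, None
    for r in zoom_ratios:
        z = zoom(object_image, r)
        zh, zw = len(z), len(z[0])
        for y in range(len(reference_image) - zh + 1):
            for x in range(len(reference_image[0]) - zw + 1):
                overlap = [row[x:x + zw]
                           for row in reference_image[y:y + zh]]
                c = correlate(z, overlap)
                if best_corr is None or c > best_corr:
                    best_ratio, best_corr = r, c
    return best_ratio
```

With a nearest-neighbor integer `zoom` and a negative-L1 `correlate`, an unscaled 1x1 patch that appears verbatim in the reference image wins over its 2x enlargement.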
On the basis of the embodiment shown in Fig. 16, in another embodiment of the positioning device provided by the present application, the positioning device may further comprise:
a feature extraction module, connected with the image capture module 161 and the matching module 164 respectively, and configured to extract the image features of the image of the object;
the matching module 164 being specifically configured to match the image features of the image of the object against the image features of each frame image of each reference object within the range of the initial position, and to determine a frame image of the reference object matching the object.
An embodiment of the present application further provides a mobile terminal having the positioning device as described above.
The positioning method provided by the present application may also be implemented by a mobile terminal in conjunction with a server. Fig. 28 shows a schematic structural diagram of a positioning system provided by an embodiment of the present application, which may comprise:
a mobile terminal 281 and a server 282; wherein
the mobile terminal 281 comprises:
an image capture module 2811, configured to capture an image of an object at the current location;
an initial position acquisition module 2812, configured to obtain an initial position of the current location;
a first sending module 2813, configured to send the image of the object and the initial position;
a first receiving module 2814, configured to receive the information of the determined reference object sent by the server, the information comprising: the coordinates of the determined reference object, the image of the determined reference object, the capture angle of the determined reference object, the focal length of the image capture device used when the image of the determined reference object was captured in advance, and the distance between that image capture device and the determined reference object;
a size ratio determination module 2815, configured to obtain the size ratio of the zoomed image of the object to the captured image of the object;
a distance determination module 2816, configured to determine the distance between the mobile terminal and the object according to the focal length of the image capture module 2811, the focal length of the image capture device used when the image of the determined reference object was captured in advance, the distance between that image capture device and the determined reference object, and the size ratio;
a positioning result determination module 2817, configured to take the distance between the mobile terminal and the object together with the capture angle of the determined reference object as the positioning result, or to determine the positioned position of the mobile terminal according to the distance between the mobile terminal and the object, the coordinates of the determined reference object, and the capture angle of the determined reference object;
the server 282 comprises:
a second receiving module 2821, configured to receive the image of the object and the initial position sent by the mobile terminal;
a reference object information acquisition module 2822, configured to obtain, from a pre-stored image database, the information of the reference objects within the range of the initial position; wherein information of a number of reference objects is stored in the image database, and the information at least comprises: the coordinates of the reference object, a number of frame images of the reference object, the capture angles of the reference object, the focal length of the image capture device used when the images of the reference object were captured in advance, and the distance between that image capture device and the reference object;
a matching module 2823, configured to match the image of the object against each frame image of each reference object within the range of the initial position, and to determine a frame image of the reference object matching the object;
a second sending module 2824, configured to send the information of the determined reference object, the information comprising: the coordinates of the determined reference object, the image of the determined reference object, the capture angle of the determined reference object, the focal length of the image capture device used when the image of the determined reference object was captured in advance, and the distance between that image capture device and the determined reference object.
In order to reduce the consumption of client data traffic, to reduce the cost of transmitting unnecessary information over limited network bandwidth, and to extract image features more accurately, on the basis of the embodiment shown in Fig. 28, Fig. 29 shows a schematic structural diagram of another positioning system provided by an embodiment of the present application, in which
the mobile terminal 281 may further comprise:
a feature extraction module 291, connected with the image capture module 2811 and the first sending module 2813 respectively, and configured to extract the image features of the image of the object;
the first sending module 2813 being specifically configured to send the image features of the image of the object and the initial position;
the second receiving module 2821 being specifically configured to receive the image features of the image of the object and the initial position sent by the mobile terminal;
the matching module 2823 being specifically configured to match the image features of the image of the object against the image features of each frame image of each reference object within the range of the initial position, and to determine a frame image of the reference object matching the object.
In this embodiment of the present application, the image features are extracted on the mobile terminal side. This reduces network bandwidth requirements while ensuring the precision of the extracted image features, avoiding the problem of low positioning accuracy that arises when image distortion during network transmission lowers the precision of image feature extraction.
The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. For the devices and systems disclosed in the embodiments, since they correspond to the methods disclosed in the embodiments, their description is relatively brief, and the relevant parts may be understood with reference to the description of the methods. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (29)

1. A positioning method, characterized by comprising:
capturing, by a user terminal, an image of an object at the current location;
obtaining an initial position of the current location;
obtaining, from a pre-stored image database, information of the reference objects within the range of the initial position; wherein information of a number of reference objects is stored in the image database, and the information at least comprises: the coordinates of the reference object, a number of frame images of the reference object, the capture angles of the images of the reference object, the focal length of the image capture device used when the images of the reference object were captured in advance, and the distance between that image capture device and the reference object;
matching the image of the object against each frame image of each reference object within the range of the initial position, and determining a frame image of the reference object matching the object;
obtaining the size ratio of the zoomed image of the object to the captured image of the object;
determining the distance between the user terminal and the object according to the focal length of the user terminal, the focal length of the image capture device used when the image of the determined reference object was captured in advance, the distance between that image capture device and the reference object, and the size ratio;
taking the distance between the user terminal and the object together with the capture angle of the determined reference object as the positioning result; or determining the positioned position of the user terminal according to the distance between the user terminal and the object, the coordinates of the determined reference object, and the capture angle of the determined reference object.
2. The method according to claim 1, characterized in that matching the image of the object against each frame image of each reference object within the range of the initial position and determining a frame image of the reference object matching the object comprises:
obtaining the image distance between the image of the object and each frame image of each reference object within the range of the initial position;
determining, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose image distance meets a preset image matching condition as the frame image of the reference object matching the object.
3. The method according to claim 2, characterized in that obtaining the image distance between the image of the object and each frame image of each reference object within the range of the initial position comprises:
evenly dividing the image of the object into a number of first image blocks according to a preset first pixel-block size;
for each frame image of each reference object within the range of the initial position, performing the following steps to obtain the image distance between the image of the object and each frame image of each reference object within the range of the initial position:
obtaining the distance between each first image block and that frame image of that reference object;
determining the sum of these distances as the image distance between the image of the object and that frame image of that reference object.
4. The method according to claim 2, characterized in that obtaining the image distance between the image of the object and each frame image of each reference object within the range of the initial position comprises:
evenly dividing the image of the object into a number of first image blocks according to a preset first pixel-block size;
evenly dividing the image of the object into a number of second image blocks according to a preset second pixel-block size, the second pixel-block size being smaller than the first pixel-block size;
for each frame image of each reference object within the range of the initial position, performing the following steps to obtain the image distance between the image of the object and each frame image of each reference object within the range of the initial position:
obtaining the distance between each first image block and that frame image of that reference object;
obtaining the distance between each second image block and that frame image of that reference object;
calculating the sum of each of the two sets of distances respectively, and determining the smaller of the two sums as the image distance between the image of the object and that frame image of that reference object.
5. The method according to claim 3 or 4, characterized in that obtaining the distance between each first/second image block and a frame image of a reference object comprises:
applying a color moment feature to obtain the first/second color moment feature distance between each of the first/second image blocks and that frame image of that reference object;
applying a shape feature to obtain the first/second shape feature distance between each of the first/second image blocks and that frame image of that reference object;
applying a texture feature to obtain the first/second texture feature distance between each of the first/second image blocks and that frame image of that reference object.
6. The method according to claim 5, characterized in that determining the sum of the distances as the image distance between the image of the object and that frame image of that reference object specifically comprises:
determining the sum of the first color moment feature distances as the first image distance between the image of the object and that frame image of that reference object;
determining the sum of the first shape feature distances as the second image distance between the image of the object and that frame image of that reference object;
determining the sum of the first texture feature distances as the third image distance between the image of the object and that frame image of that reference object;
and in that determining, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose image distance meets the preset image matching condition as the frame image of the reference object matching the object is specifically:
determining, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition as the frame image of the reference object matching the object.
7. The method according to claim 5, characterized in that determining the smaller of the two sums as the image distance between the image of the object and that frame image of that reference object comprises:
determining the smaller of the sum of the first color moment feature distances and the sum of the second color moment feature distances as the first image distance between the image of the object and that frame image of that reference object;
determining the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances as the second image distance between the image of the object and that frame image of that reference object;
determining the smaller of the sum of the first texture feature distances and the sum of the second texture feature distances as the third image distance between the image of the object and that frame image of that reference object;
and in that determining, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose image distance meets the preset image matching condition as the frame image of the reference object matching the object is specifically:
determining, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition as the frame image of the reference object matching the object.
8. The method according to claim 6, characterized in that determining, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition comprises:
among the images of the reference objects within the range of the initial position, determining the images of the reference objects whose first image distance is less than a first preset distance threshold;
from the determined images of the reference objects, determining a frame image of a reference object for which the weighted sum of the second image distance and the third image distance is less than a preset second distance threshold.
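Claims 6-8 combine the per-feature sums into three image distances and then filter candidate frames in two stages. A minimal sketch of that selection logic in Python (the dictionary layout, thresholds `t1`/`t2`, and the weights are illustrative assumptions; the claims only require preset values):

```python
def select_matching_frame(candidates, t1, t2, w2=0.5, w3=0.5):
    """Two-stage filter over candidate reference frames.

    candidates: list of dicts with precomputed image distances
      d1 (color-moment sum), d2 (shape sum), d3 (texture sum).
    Stage 1 keeps frames whose first image distance is below t1;
    stage 2 returns a frame whose weighted sum of the second and
    third image distances is below t2 (as in claims 8 and 21).
    """
    stage1 = [c for c in candidates if c["d1"] < t1]
    for c in stage1:
        if w2 * c["d2"] + w3 * c["d3"] < t2:
            return c  # a frame satisfying the preset matching condition
    return None  # no reference frame matched

frames = [
    {"id": "A", "d1": 9.0, "d2": 4.0, "d3": 6.0},
    {"id": "B", "d1": 2.0, "d2": 1.0, "d3": 1.5},
]
match = select_matching_frame(frames, t1=5.0, t2=2.0)
```

Frame "A" fails the first-distance threshold, so only "B" reaches the weighted second-stage test.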
9. The method according to claim 5, characterized in that applying the color moment feature to obtain the first color moment feature distance between each first image block and a frame image of a reference object comprises:
performing the following steps for each first image block to obtain the first color moment feature distance between the first image block and the frame image of the reference object:
continuously moving the first image block within the preset image range corresponding to this first image block on the frame image of the reference object, and each time the first image block is moved, using the color moment feature to calculate the distance between the first image block and the image block of the reference object's image that is covered by the first image block;
determining the shortest of these distances as the first color moment feature distance between the first image block and the frame image of the reference object.
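Claim 9's sliding color-moment match can be sketched as follows, simplified to a 1-D grayscale "image" so the example stays self-contained. The three color moments (mean, standard deviation, skewness) are the standard definition, but the L1 distance over them is an assumption, since the claim does not fix a metric:

```python
import math

def color_moments(block):
    """First three color moments (mean, std, skewness) of a flat pixel list."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    std = math.sqrt(var)
    third = sum((p - mean) ** 3 for p in block) / n
    skew = math.copysign(abs(third) ** (1.0 / 3.0), third)  # signed cube root
    return (mean, std, skew)

def moment_distance(a, b):
    """L1 distance between the color-moment vectors of two blocks."""
    return sum(abs(x - y) for x, y in zip(color_moments(a), color_moments(b)))

def min_sliding_distance(block, ref_row, block_w):
    """Claim 9: move the first image block across its preset search range on
    the reference frame and keep the shortest color-moment distance."""
    best = float("inf")
    for off in range(len(ref_row) - block_w + 1):
        best = min(best, moment_distance(block, ref_row[off:off + block_w]))
    return best

block = [10, 20, 30, 40]
ref = [0, 0, 10, 20, 30, 40, 0, 0]  # contains the block at offset 2
d = min_sliding_distance(block, ref, len(block))
```

Because the reference row contains the block exactly, the shortest distance is zero.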
10. The method according to claim 5, characterized in that applying the shape feature to obtain the first shape feature distance between each first image block and this frame image of this reference object comprises:
performing the following steps for each first image block to obtain the first shape feature distance between the first image block and this frame image of this reference object:
using the shape feature to calculate the distance between the first image block and the image block corresponding to the first image block on this frame image of this reference object;
determining this distance as the first shape feature distance between the first image block and this frame image of this reference object.
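Claim 10 computes a shape-feature distance between corresponding blocks without specifying the descriptor. One plausible stand-in (an assumption, not the patent's definition) is a pair of scale- and translation-normalized central moments, in the spirit of Hu moments:

```python
def shape_features(img):
    """Two normalized central moments of a binary 2-D image (lists of 0/1).

    eta(p, q) = mu(p, q) / m00^(1 + (p + q) / 2) is invariant to translation
    and (in the continuous limit) to scale. Assumes a non-empty foreground.
    """
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    m00 = len(pts)
    cx = sum(x for x, _ in pts) / m00
    cy = sum(y for _, y in pts) / m00
    def mu(p, q):
        return sum((x - cx) ** p * (y - cy) ** q for x, y in pts)
    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    return (eta(2, 0) + eta(0, 2), eta(2, 0) - eta(0, 2))

def shape_distance(a, b):
    """L1 distance between the shape-feature vectors of two blocks."""
    fa, fb = shape_features(a), shape_features(b)
    return sum(abs(x - y) for x, y in zip(fa, fb))

square = [[1, 1], [1, 1]]
big_square = [[1] * 4 for _ in range(4)]
line = [[1, 1, 1, 1]]
```

A square compared with a larger square scores a much smaller distance than a square compared with a line, which is the behavior the claim's block-to-block comparison relies on.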
11. The method according to claim 5, characterized in that applying the texture feature to obtain the first texture feature distance between each first image block and this frame image of this reference object comprises:
performing the following steps for each first image block to obtain the first texture feature distance between the first image block and this frame image of this reference object:
using the texture feature to calculate the distance between the first image block and the image block corresponding to the first image block on this frame image of this reference object;
determining this distance as the first texture feature distance between the first image block and this frame image of this reference object.
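Claim 11 likewise leaves the texture feature open. A toy stand-in (purely illustrative) is a normalized histogram of horizontal gradient signs, compared with an L1 distance:

```python
def texture_histogram(img):
    """Tiny texture descriptor: histogram of signs of horizontal gradients
    over a 2-D grayscale image (a toy stand-in for the texture feature)."""
    hist = [0, 0, 0]  # darker than, equal to, brighter than left neighbour
    for row in img:
        for a, b in zip(row, row[1:]):
            hist[(b > a) - (b < a) + 1] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def texture_distance(a, b):
    """Claim 11: texture-feature distance between an image block and its
    corresponding block on the reference frame (here an L1 histogram distance)."""
    return sum(abs(x - y) for x, y in zip(texture_histogram(a), texture_histogram(b)))

flat = [[5, 5, 5, 5]]   # no texture: all gradients are zero
ramp = [[1, 2, 3, 4]]   # monotone texture: all gradients positive
```

Identical blocks score zero; the flat and ramp blocks land in disjoint histogram bins and score the maximum distance of 2.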
12. The method according to any one of claims 1-4, characterized in that obtaining the size ratio between the scaled image of the object and the captured image of the object comprises:
scaling the image of the object according to each preset scaling ratio to obtain a scaled image of the object at each scaling ratio;
continuously moving the scaled image corresponding to each scaling ratio of the object over the image of the reference object, and calculating the correlation between the scaled image of the object and the overlapped image of the determined reference object;
determining the ratio of the size of the scaled image with the greatest correlation to the size of the captured image of the object as the size ratio between the scaled image of the object and the captured image of the object.
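Claim 12's size-ratio search is essentially multi-scale template matching. A 1-D sketch (nearest-neighbour scaling and a negated sum of squared differences as the "correlation" are simplifying assumptions; the claim does not name a correlation measure):

```python
def scale_image(row, ratio):
    """Nearest-neighbour scaling of a 1-D grayscale 'image' — a simplified
    stand-in for scaling the captured object image by each preset ratio."""
    n = max(1, round(len(row) * ratio))
    return [row[min(int(i / ratio), len(row) - 1)] for i in range(n)]

def max_correlation(template, ref):
    """Best correlation (negated SSD) of the template slid across ref."""
    best = float("-inf")
    for off in range(len(ref) - len(template) + 1):
        window = ref[off:off + len(template)]
        best = max(best, -sum((t - r) ** 2 for t, r in zip(template, window)))
    return best

def best_size_ratio(obj, ref, ratios):
    """Claim 12: scale the object image by each preset ratio, slide it over
    the reference image, and keep the ratio with the highest correlation."""
    return max(ratios, key=lambda r: max_correlation(scale_image(obj, r), ref))

obj = [10, 20, 30, 40]                          # captured object image
ref = [0, 10, 10, 20, 20, 30, 30, 40, 40, 0]    # reference image at 2x scale
ratio = best_size_ratio(obj, ref, [0.5, 1.0, 2.0])
```

The reference contains the object at twice the captured size, so the search picks the ratio 2.0.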
13. The method according to claim 1, characterized in that the method further comprises:
extracting the image features of the image of the object;
and matching the image of the object with each frame image of each reference object within the range of the initial position and determining the frame image of the reference object matching the object is specifically:
matching the image features of the image of the object with the image features of each frame image of each reference object within the range of the initial position, and determining the frame image of the reference object matching the object.
14. A positioning device, characterized by comprising:
an image capture module, configured to capture an image of an object at the current location;
an initial position acquisition module, configured to obtain an initial position of the current location;
a reference object information acquisition module, configured to obtain, from a pre-stored image database, information on the reference objects within the range of the initial position; wherein information on a number of reference objects is stored in the image database, and the information at least comprises: the coordinates of the reference object, several frame images of the reference object, the capture angle of the reference object, the focal length of the image capture device used when the image of the reference object was captured in advance, and the distance between that image capture device and the reference object;
a matching module, configured to match the image of the object with each frame image of each reference object within the range of the initial position, and determine a frame image of the reference object matching the object;
a size ratio determination module, configured to obtain the size ratio between the scaled image of the object and the captured image of the object;
a distance determination module, configured to determine the distance between the current location and the object according to the focal length of the image capture module, the focal length of the image capture device used when the image of the determined reference object was captured in advance, the distance between that image capture device and the determined reference object, and the size ratio;
a positioning result module, configured to take the distance between the current location and the object and the capture angle of the determined reference object as the positioning result, or to determine the positioned location of the current location according to the distance between the current location and the object, the coordinates of the determined reference object, and the capture angle of the determined reference object.
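Claim 14's distance determination module combines the two focal lengths, the reference capture distance, and the size ratio from claim 12. Under a simple pinhole-camera model (an assumption — the claims do not spell out the formula), an object of physical height H at distance D with focal length f projects to image height h = f·H/D, which gives a closed form:

```python
def distance_to_object(f_cur, f_ref, d_ref, size_ratio):
    """Pinhole-model sketch of the distance determination in claim 14.

    If scaling the captured image by `size_ratio` makes the object match its
    size in the pre-stored reference image (claim 12), then
        size_ratio * f_cur / d_cur = f_ref / d_ref
    so the current capture distance is:
    """
    return size_ratio * d_ref * f_cur / f_ref

# With the same camera (f_cur == f_ref), a size ratio of 2 means the object
# appears half as large as in the reference image, i.e. twice as far away.
d = distance_to_object(f_cur=4.0, f_ref=4.0, d_ref=10.0, size_ratio=2.0)
```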
15. The device according to claim 14, characterized in that the matching module comprises:
an acquisition submodule, configured to obtain the image distance between the image of the object and each frame image of each reference object within the range of the initial position;
a determination submodule, configured to determine, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose image distance meets a preset image matching condition as the frame image of the reference object matching the object.
16. The device according to claim 15, characterized in that the acquisition submodule comprises:
a first division unit, configured to evenly divide the image of the object into several first image blocks according to a preset first pixel block size;
a first acquisition unit, configured to obtain, for each frame image of each reference object within the range of the initial position, the distance between each first image block and the frame image of the reference object;
a first determination unit, configured to determine the sum of these distances as the image distance between the image of the object and this frame image of this reference object.
17. The device according to claim 15, characterized in that the acquisition submodule comprises:
a first division unit, configured to evenly divide the image of the object into several first image blocks according to a preset first pixel block size;
a second division unit, configured to evenly divide the image of the object into several second image blocks according to a preset second pixel block size, the second pixel block size being smaller than the first pixel block size;
a first acquisition unit, configured to obtain, for each frame image of each reference object within the range of the initial position, the distance between each first image block and the frame image of the reference object;
a second acquisition unit, configured to obtain, for each frame image of each reference object within the range of the initial position, the distance between each second image block and this frame image of this reference object;
a second determination unit, configured to calculate the two sums of distances respectively, and determine the smaller of the two sums as the image distance between the image of the object and this frame image of this reference object.
18. The device according to claim 16 or 17, characterized in that the first/second acquisition unit comprises:
a first acquisition subunit, configured to apply the color moment feature to obtain the first/second color moment feature distance between each first/second image block and the frame image of the reference object;
a second acquisition subunit, configured to apply the shape feature to obtain the first/second shape feature distance between each first/second image block and this frame image of this reference object;
a third acquisition subunit, configured to apply the texture feature to obtain the first/second texture feature distance between each first/second image block and this frame image of this reference object.
19. The device according to claim 18, characterized in that the first determination unit comprises:
a first determination subunit, configured to determine the sum of the first color moment feature distances as a first image distance between the image of the object and this frame image of this reference object;
a second determination subunit, configured to determine the sum of the first shape feature distances as a second image distance between the image of the object and this frame image of this reference object;
a third determination subunit, configured to determine the sum of the first texture feature distances as a third image distance between the image of the object and this frame image of this reference object;
the determination submodule being specifically configured to determine, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition.
20. The device according to claim 18, characterized in that the second determination unit comprises:
a third determination subunit, configured to determine the smaller of the sum of the first color moment feature distances and the sum of the second color moment feature distances as a first image distance between the image of the object and this frame image of this reference object;
a fourth determination subunit, configured to determine the smaller of the sum of the first shape feature distances and the sum of the second shape feature distances as a second image distance between the image of the object and this frame image of this reference object;
a fifth determination subunit, configured to determine the smaller of the sum of the first texture feature distances and the sum of the second texture feature distances as a third image distance between the image of the object and this frame image of this reference object;
the determination submodule being specifically configured to determine, among the images of the reference objects within the range of the initial position, a frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition as the frame image of the reference object matching the object.
21. The device according to claim 19, characterized in that the determination submodule comprises:
a third determination unit, configured to determine, among the images of the reference objects within the range of the initial position, the images of the reference objects whose first image distance is less than a first preset distance threshold;
a fourth determination unit, configured to determine, from the determined images of the reference objects, a frame image of a reference object for which the weighted sum of the second image distance and the third image distance is less than a preset second distance threshold, as the frame image of a reference object whose first image distance, second image distance and third image distance meet the preset image matching condition.
22. The device according to claim 18, characterized in that the first acquisition subunit comprises:
a first calculation unit, configured to, for each first image block, continuously move the first image block within the preset image range corresponding to this first image block on the frame image of a reference object, and, each time the first image block is moved, use the color moment feature to calculate the distance between the first image block and the image block of the reference object's image that is covered by the first image block;
a color moment feature distance determination unit, configured to, for each first image block, determine the shortest of these distances as the first color moment feature distance between the first image block and the frame image of the reference object.
23. The device according to claim 18, characterized in that the second acquisition subunit comprises:
a second calculation unit, configured to, for each first image block, use the shape feature to calculate the distance between the first image block and the image block corresponding to the first image block on this frame image of this reference object;
a shape feature distance determination unit, configured to, for each first image block, determine the distance calculated by the second calculation unit as the first shape feature distance between the first image block and this frame image of this reference object.
24. The device according to claim 18, characterized in that the third acquisition subunit comprises:
a third calculation unit, configured to, for each first image block, use the texture feature to calculate the distance between the first image block and the image block corresponding to the first image block on this frame image of this reference object;
a texture feature distance determination unit, configured to, for each first image block, determine the distance calculated by the third calculation unit as the first texture feature distance between the first image block and this frame image of this reference object.
25. The device according to any one of claims 14-17, characterized in that the size ratio determination module comprises:
a scaling submodule, configured to scale the image of the object according to each preset scaling ratio to obtain a scaled image of the object at each scaling ratio;
a correlation determination submodule, configured to continuously move the scaled image corresponding to each scaling ratio of the object over the image of the reference object, and calculate the correlation between the scaled image of the object and the overlapped image of the determined reference object;
a size ratio determination submodule, configured to determine the ratio of the size of the scaled image with the greatest correlation to the size of the captured image of the object as the size ratio between the scaled image of the object and the captured image of the object.
26. The device according to claim 14, characterized in that the device further comprises:
a feature extraction module, configured to extract the image features of the image of the object;
the matching module being specifically configured to match the image features of the image of the object with the image features of each frame image of each reference object within the range of the initial position, and determine the frame image of the reference object matching the object.
27. A mobile terminal, characterized by comprising the positioning device according to any one of claims 14-26.
28. A navigation system, characterized by comprising:
a mobile terminal and a server; wherein
the mobile terminal comprises:
an image capture module, configured to capture an image of an object at the current location;
an initial position acquisition module, configured to obtain an initial position of the current location;
a first sending module, configured to send the image of the object and the initial position;
a first receiving module, configured to receive the information on the determined reference object sent by the server, the information comprising: the coordinates of the determined reference object, the image of the determined reference object, the capture angle of the determined reference object, the focal length of the image capture device used when the image of the determined reference object was captured in advance, and the distance between that image capture device and the determined reference object;
a size ratio determination module, configured to obtain the size ratio between the scaled image of the object and the captured image of the object;
a distance determination module, configured to determine the distance between the mobile terminal and the object according to the focal length of the image capture module, the focal length of the image capture device used when the image of the determined reference object was captured in advance, the distance between that image capture device and the determined reference object, and the size ratio;
a positioning result determination module, configured to take the distance between the mobile terminal and the object and the capture angle of the determined reference object as the positioning result, or to determine the positioned location of the mobile terminal according to the distance between the mobile terminal and the object, the coordinates of the determined reference object, and the capture angle of the determined reference object;
the server comprises:
a second receiving module, configured to receive the image of the object and the initial position sent by the mobile terminal;
a reference object information acquisition module, configured to obtain, from a pre-stored image database, information on the reference objects within the range of the initial position; wherein information on a number of reference objects is stored in the image database, and the information at least comprises: the coordinates of the reference object, several frame images of the reference object, the capture angle of the reference object, the focal length of the image capture device used when the image of the reference object was captured in advance, and the distance between that image capture device and the reference object;
a matching module, configured to match the image of the object with each frame image of each reference object within the range of the initial position, and determine a frame image of the reference object matching the object;
a second sending module, configured to send the information on the determined reference object, the information comprising: the coordinates of the determined reference object, the image of the determined reference object, the capture angle of the determined reference object, the focal length of the image capture device used when the image of the determined reference object was captured in advance, and the distance between that image capture device and the determined reference object.
29. The system according to claim 28, characterized in that the mobile terminal further comprises:
a feature extraction module, configured to extract the image features of the image of the object;
the first sending module being specifically configured to send the image features of the image of the object and the initial position;
the second receiving module being specifically configured to receive the image features of the image of the object and the initial position sent by the mobile terminal;
the matching module being specifically configured to match the image features of the object with the image features of each frame image of each reference object within the range of the initial position, and determine the frame image of the reference object matching the object.
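Once the distance to the object and the matched reference object's stored coordinates and capture angle are known, the positioning step of claims 14 and 28 reduces to simple plane trigonometry. A sketch (the angle convention — counter-clockwise from east — and the flat-plane approximation are assumptions; the claims only state that the position is determined from these three quantities):

```python
import math

def locate(ref_xy, distance, capture_angle_deg):
    """Place the current location at `distance` from the matched reference
    object, back along the stored capture angle (claims 14/28)."""
    rad = math.radians(capture_angle_deg)
    x = ref_xy[0] - distance * math.cos(rad)
    y = ref_xy[1] - distance * math.sin(rad)
    return (x, y)

# Looking due east (angle 0) at a reference object 10 units away puts the
# terminal 10 units west of the reference coordinates.
pos = locate(ref_xy=(100.0, 50.0), distance=10.0, capture_angle_deg=0.0)
```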
CN201310598348.0A 2013-11-22 2013-11-22 Localization method, device, system and mobile terminal Active CN104661300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310598348.0A CN104661300B (en) 2013-11-22 2013-11-22 Localization method, device, system and mobile terminal


Publications (2)

Publication Number Publication Date
CN104661300A true CN104661300A (en) 2015-05-27
CN104661300B CN104661300B (en) 2018-07-10

Family

ID=53251874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310598348.0A Active CN104661300B (en) 2013-11-22 2013-11-22 Localization method, device, system and mobile terminal

Country Status (1)

Country Link
CN (1) CN104661300B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299269A (en) * 2008-06-13 2008-11-05 北京中星微电子有限公司 Method and device for calibration of static scene
CN102253995A (en) * 2011-07-08 2011-11-23 盛乐信息技术(上海)有限公司 Method and system for realizing image search by using position information
US20120127276A1 (en) * 2010-11-22 2012-05-24 Chi-Hung Tsai Image retrieval system and method and computer product thereof
CN103067856A (en) * 2011-10-24 2013-04-24 康佳集团股份有限公司 Geographic position locating method and system based on image recognition
CN103245337A (en) * 2012-02-14 2013-08-14 联想(北京)有限公司 Method for acquiring position of mobile terminal, mobile terminal and position detection system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Peng et al., "Research on Target Positioning Technology Based on Image Matching", Machine Vision *
YAN Jie et al., "Positioning Analysis Based on Image Matching", Information Transmission and Access Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105890597A (en) * 2016-04-07 2016-08-24 浙江漫思网络科技有限公司 Auxiliary positioning method based on image analysis
CN105890597B (en) * 2016-04-07 2019-01-01 浙江漫思网络科技有限公司 A kind of assisted location method based on image analysis
CN107144857A (en) * 2017-05-17 2017-09-08 深圳市伊特利网络科技有限公司 Assisted location method and system
CN107816983A (en) * 2017-08-28 2018-03-20 深圳市赛亿科技开发有限公司 A kind of shopping guide method and system based on AR glasses
CN111862146A (en) * 2019-04-30 2020-10-30 北京初速度科技有限公司 Target object positioning method and device
CN111862146B (en) * 2019-04-30 2023-08-29 北京魔门塔科技有限公司 Target object positioning method and device
CN110095752A (en) * 2019-05-07 2019-08-06 百度在线网络技术(北京)有限公司 Localization method, device, equipment and medium
CN110645986A (en) * 2019-09-27 2020-01-03 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium
WO2021057797A1 (en) * 2019-09-27 2021-04-01 Oppo广东移动通信有限公司 Positioning method and apparatus, terminal and storage medium
CN113389202A (en) * 2021-07-01 2021-09-14 山东省鲁南地质工程勘察院(山东省地勘局第二地质大队) Device and method for preventing aligning deviation of pile foundation engineering reinforcement cage
CN113389202B (en) * 2021-07-01 2022-07-05 山东省鲁南地质工程勘察院(山东省地勘局第二地质大队) Device and method for preventing aligning deviation of pile foundation engineering reinforcement cage
CN115131583A (en) * 2022-06-24 2022-09-30 佛山市天劲新能源科技有限公司 X-Ray detection system and detection method for lithium battery core package structure

Also Published As

Publication number Publication date
CN104661300B (en) 2018-07-10

Similar Documents

Publication Publication Date Title
CN104661300A (en) Positioning method, device, system and mobile terminal
CN106793086B (en) Indoor positioning method
US9529075B2 (en) Concept for determining an orientation of a mobile device
CN110443850B (en) Target object positioning method and device, storage medium and electronic device
KR102035388B1 (en) Real-Time Positioning System and Contents Providing Service System Using Real-Time Positioning System
KR20130091908A (en) Apparatus and method for providing indoor navigation service
JP2020510813A (en) Positioning method and positioning device
CN109068272B (en) Similar user identification method, device, equipment and readable storage medium
JP2014509384A (en) Position determination using horizontal angle
CN106455046B (en) satellite-WiFi flight time combined positioning system and method thereof
CN102711247B (en) Anchor-node-free three-dimensional wireless sensor network physical positioning method
EP3286575A1 (en) Supporting the use of radio maps
Gorovyi et al. Real-time system for indoor user localization and navigation using bluetooth beacons
Pereira et al. A smart-phone indoor/outdoor localization system
US11016176B2 (en) Method, device and system for mapping position detections to a graphical representation
CN103596265A (en) Multiple-user indoor positioning method based on voice distance measuring and movement vector
Guo et al. Virtual wireless device-constrained robust extended Kalman filters for smartphone positioning in indoor corridor environment
Bhargava et al. Locus: An indoor localization, tracking and navigation system for multi-story buildings using heuristics derived from Wi-Fi signal strength
CN105043375A (en) Navigation method, navigation system and corresponding mobile terminal
TW201140123A (en) Locating electromagnetic signal sources
KR20190060266A (en) Apparatus and method for recognizing location of target using two unmanned aerial vehicles
EP3096154B1 (en) Method, system and computer-readable medium to determine the position of an apparatus
CN103813446A (en) Method and device for estimating coverage of staying area
CN112261573A (en) Relative positioning method, device and system between intelligent devices
Jacq et al. Towards zero-configuration for Wi-Fi indoor positioning system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200514

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Alibaba (China) Co.,Ltd.

Address before: 100020, No. 18, No., Changsheng Road, Changping District science and Technology Park, Beijing, China. 1-5

Patentee before: AUTONAVI SOFTWARE Co.,Ltd.