WO2021057797A1 - Positioning method and apparatus, terminal and storage medium - Google Patents

Positioning method and apparatus, terminal and storage medium

Info

Publication number
WO2021057797A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
key frame
map
information
Application number
PCT/CN2020/117156
Other languages
French (fr)
Chinese (zh)
Inventor
金珂
马标
李姬俊男
刘耀勇
蒋燚
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2021057797A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • This application relates to indoor positioning technology, and in particular, but not exclusively, to a positioning method and apparatus, a terminal, and a storage medium.
  • PDR: Pedestrian Dead Reckoning.
  • the embodiments of the present application provide a positioning method and device, a terminal, and a storage medium in order to solve at least one problem existing in the related art.
  • the embodiment of the present application provides a positioning method, which includes: determining current network feature information of the network where an image acquisition device is currently located; searching for an area identifier corresponding to the current network feature information from a preset first map; determining, according to the area identifier, the target area where the image acquisition device is located; collecting an image to be processed with the image acquisition device and extracting a first image feature of the image to be processed; matching, from the image features of key frame images stored in a preset second map corresponding to the target area, a second image feature corresponding to the first image feature; and determining, according to the second image feature, the pose information of the image acquisition device.
  • an embodiment of the present application provides a positioning device, which includes: a first determination module, a first search module, a second determination module, a first extraction module, a first matching module, and a third determination module, wherein:
  • the first determining module is configured to determine current network feature information of the current location of the network where the image acquisition device is located;
  • the first search module is configured to search for an area identifier corresponding to the current network feature information from a preset first map;
  • the second determining module is configured to determine the target area where the image acquisition device is located according to the area identifier;
  • the first extraction module is configured to use the image acquisition device to collect an image to be processed, and extract the first image feature of the image to be processed;
  • the first matching module is configured to match the image features corresponding to the first image features from the image features of the key frame images stored in the preset second map corresponding to the target area to obtain a second image feature;
  • the third determining module is configured to determine the pose information of the image acquisition device according to the second image feature.
  • An embodiment of the present application provides a terminal, including a memory and a processor, the memory stores a computer program that can run on the processor, and the processor implements the steps in the positioning method when the program is executed.
  • the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned positioning method are realized.
  • the embodiments of the present application provide a positioning method, device, terminal, and storage medium. Current network feature information of the network where the image acquisition device is located is determined; an area identifier corresponding to the current network feature information is searched for from a preset first map; the target area where the image acquisition device is located is determined according to the area identifier; the image acquisition device collects an image to be processed, and a first image feature of the image to be processed is extracted; a second image feature is matched from the image features of the key frame images stored in a preset second map corresponding to the target area; and the pose information of the image acquisition device is determined according to the second image feature. In this way, the preset first map is first used to coarsely locate the image acquisition device, and the preset second map corresponding to the coarsely located target area is then used to precisely locate the image acquisition device based on its key frame images, which improves the positioning accuracy.
  • FIG. 1 is a schematic diagram of an implementation process of a positioning method according to an embodiment of this application.
  • FIG. 2A is a schematic diagram of another implementation process of the positioning method according to an embodiment of this application.
  • FIG. 2B is a schematic diagram of another implementation process of the positioning method according to an embodiment of this application.
  • FIG. 3A is a schematic diagram of another implementation process of the positioning method according to an embodiment of this application.
  • FIG. 3B is a schematic diagram of a scene of the positioning method according to an embodiment of this application.
  • FIG. 3C is a schematic diagram of another scene of the positioning method according to an embodiment of this application.
  • FIG. 4 is a schematic diagram of the structure of a ratio vector according to an embodiment of this application.
  • FIG. 5A is a diagram of an application scenario for determining a matching frame image according to an embodiment of this application.
  • FIG. 5B is a schematic structural diagram of determining the location information of a collection device according to an embodiment of this application.
  • FIG. 6 is a schematic diagram of the composition structure of a positioning device according to an embodiment of this application.
  • FIG. 1 is a schematic diagram of the implementation process of the positioning method according to the embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step S101 Determine the current network feature information of the network where the image acquisition device is currently located.
  • the network characteristic information may be the signal strength of the network where the image acquisition device is located, or the distribution of the signal strength of the network where the image acquisition device is located.
  • Step S102 Search for an area identifier corresponding to the current network feature information from the preset first map.
  • the preset first map may be understood as a wireless fidelity (WiFi) fingerprint map; that is, the preset first map stores the identification information of each area and the signal strength (or the distribution of signal strength) of the network corresponding to that area, so that the identification information of each area corresponds one-to-one with the network signal strength.
  • Step S103 Determine the target area where the image acquisition device is located according to the area identifier.
  • the target area where the image acquisition device is located can be uniquely determined based on the area identification.
  • Step S104 Use the image acquisition device to collect an image to be processed, and extract a first image feature of the image to be processed.
  • the first image feature includes: description information and two-dimensional (2 Dimensions, 2D) position information of the feature points of the image to be processed.
  • In step S104, first, the feature points of the image to be processed are extracted; then the description information of each feature point and its 2D coordinate information in the image to be processed are determined. The description information of a feature point can be understood as descriptor information that uniquely identifies that feature point.
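  • As an illustration of this extraction step, the following minimal sketch uses OpenCV's ORB detector; the specific detector is an assumption, since the embodiment only requires corner-like feature points with descriptors and 2D coordinates.

```python
# Hypothetical sketch of step S104: extract feature points, their 2D
# coordinates, and their descriptors ("description information") from the
# image to be processed. ORB is an assumed detector, not named by the patent.
import cv2

def extract_first_image_feature(image_path, n_points=150):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_points)  # 150 is the empirical value cited later
    keypoints, descriptors = orb.detectAndCompute(img, None)
    coords_2d = [kp.pt for kp in keypoints]   # 2D coordinate information
    return coords_2d, descriptors             # first image feature
```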
  • Step S105 Match the image features corresponding to the first image feature from the image features of the key frame images stored in the preset second map corresponding to the target area to obtain a second image feature.
  • the second image feature includes: 2D coordinate information, three-dimensional (3 Dimensions, 3D) position information, and description information of the feature point of the key frame image containing the identification information of the target area.
  • the preset second map corresponding to the target area can be understood as the part of the global map corresponding to the key frame images identified with the identification information of the target area. For example, if every key frame image is identified with the identification information of its corresponding region, then once the target area is determined, the key frame images identified with the target area's identification information can be found according to that identification information.
  • That is, the preset second map stores the set of key frame images identified with the identification information of the target area, together with the set of ratio vectors describing the proportion of each sample feature point in those key frame images.
  • the step S105 can be understood as selecting, from the image features of the key frame images stored in the preset second map, a second image feature that has a high degree of matching with the first image feature.
  • Step S106 Determine the pose information of the image acquisition device according to the second image feature.
  • the pose information includes the collection orientation of the image collection device and the position of the image collection device.
  • the location information of the image acquisition device is determined based on the 3D coordinate information of the feature points of the key frame image corresponding to the second image feature and the 2D coordinate information of the feature points of the image to be processed corresponding to the first image feature. For example, first, in the three-dimensional coordinate space where the image acquisition device is located, the 2D coordinate information of the feature points of the image to be processed is converted into 3D coordinate information; then this 3D coordinate information is compared with the 3D coordinate information of the feature points of the key frame image in the three-dimensional coordinate system of the preset second map to determine the position information of the image acquisition device. In this way, since both the 2D and 3D coordinate information of the feature points are considered, positioning the image acquisition device yields both its position and its collection orientation, which improves the positioning accuracy.
  • In the embodiments of the present application, the image acquisition device is first coarsely positioned based on the preset first map to obtain the target area; then the second image feature matching the first image feature is selected from the image features of the key frame images in the preset second map to achieve precise positioning of the image acquisition device. That is, the preset first map is used for coarse positioning, and the preset second map is then used to precisely position the image acquisition device based on key frame images, determining both the location and the collection orientation of the image acquisition device and thereby improving the positioning accuracy.
  • FIG. 2A is a schematic diagram of another implementation process of the positioning method according to the embodiment of the present application. As shown in FIG. 2A, the method includes the following steps:
  • Step S201 Divide the coverage area of the current network into multiple areas.
  • the coverage of the current network can be divided into multiple grids. As shown in Figure 3B, the coverage of the current network is divided into grids with 4 rows and 7 columns.
  • Each area has identification information that can uniquely identify it, for example, an identity document (ID) of the area.
  • Step S202 Determine the network feature information of the multiple wireless access points (APs) in the current network in each area.
  • With reference to FIG. 3B, the step S202 can be understood as determining the signal strength of AP31 and that of AP32, respectively, in each area.
  • Step S203 Store the identification information of each area and the network feature information corresponding to each area to obtain the preset first map.
  • the network feature information corresponding to each area can be understood as the signal strength of all APs that can be detected in the area.
  • the identification information of each area is different.
  • the identification information of each area and the network feature information corresponding to the area are stored in a preset first map in a one-to-one correspondence.
  • the above steps S201 to S203 give a way to create a preset first map.
  • the identification information of each area corresponds to the network feature information that can be detected in that area.
  • By determining the network feature information of the network where the image acquisition device is located, the area where the image acquisition device is located can be roughly determined from the preset first map.
  • Step S204 Determine target feature information that matches the current network feature information from the network feature information stored in the preset first map.
  • the network feature information corresponding to each area is stored in the preset first map; based on the current network feature information, the target feature information with the highest similarity to the current network feature information can be found in the preset first map.
  • Step S205 According to the corresponding relationship between the network feature information and the area identification information stored in the preset first map, search for the area identification corresponding to the current network feature information.
  • the network feature information in the preset first map corresponds one-to-one with the identification information of the areas, so after the target feature information is determined, the area identifier corresponding to it can be found according to the correspondence stored in the first map, thereby locating the target area where the image acquisition device is located and achieving coarse positioning of the image acquisition device, for example, determining the room where the image acquisition device is located.
  • The above steps S204 and S205 describe a way to implement "searching for the area identifier corresponding to the current network feature information from the preset first map": from the network information of the image acquisition device, the target area where the device is located is found, thereby achieving coarse positioning of the image acquisition device. A minimal sketch of this lookup follows.
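  • The sketch below assumes the first map is stored as a mapping from area identifiers to mean RSSI vectors; the storage layout is an assumption, since the patent only requires a one-to-one area/fingerprint correspondence.

```python
# Hypothetical sketch of steps S204/S205: find the stored fingerprint closest
# to the current network feature information and return its area identifier.
import numpy as np

def coarse_locate(first_map: dict, current_rssi: np.ndarray) -> str:
    """first_map: {area_id: np.ndarray of mean RSSI per AP}."""
    best_area, best_dist = None, float("inf")
    for area_id, fingerprint in first_map.items():
        dist = np.linalg.norm(fingerprint - current_rssi)  # fingerprint similarity
        if dist < best_dist:
            best_area, best_dist = area_id, dist
    return best_area  # identifier of the target area
```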
  • Step S206 Select a plurality of key frame images meeting preset conditions from the sample image library to obtain a key frame image set.
  • In the first step, a preset number of corner points are selected from the sample image; corner points are pixels in the sample image that differ significantly from a preset number of surrounding pixels. For example, 150 corner points are selected.
  • In the second step, if the number of identical corner points contained in two sample images with adjacent acquisition times is greater than or equal to a certain threshold, the scene corresponding to the sample images is determined to be a continuous scene. Two sample images with adjacent acquisition times can be understood as two consecutively collected sample images. The number of identical corner points contained in the two sample images is determined; the larger this number, the higher the correlation between the two sample images and the more likely they come from a continuous scene. A continuous scene is, for example, a single indoor environment such as a bedroom, a living room, or a single meeting room.
  • In the third step, if the number of identical corner points contained in two sample images with adjacent acquisition times is less than the threshold, the scene corresponding to the sample images is determined to be a discrete scene (see the sketch below).
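  • A minimal sketch of this scene classification, assuming ORB corners and brute-force Hamming matching to count the "same corner points" shared by two consecutively collected sample images; both choices are assumptions for illustration.

```python
# Hypothetical sketch of the continuous/discrete scene decision in step S206.
import cv2

def classify_scene(img_a, img_b, threshold=50, n_corners=150):
    orb = cv2.ORB_create(nfeatures=n_corners)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    shared = len(matcher.match(des_a, des_b))  # count of shared corner points
    return "continuous" if shared >= threshold else "discrete"
```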
  • Step S207 Using the identification information of the region corresponding to each key frame image, identify each of the key frame images in a one-to-one correspondence to obtain a set of identified key frame images.
  • The area corresponding to the image acquisition device that collected the key frame image, and the identification information of that area, are determined; the identification information is then used to label the key frame image, so that each key frame image is marked with the identification information of its corresponding area.
  • Step S208 Extract the image features of each identified key frame image to obtain a key image feature set.
  • the image feature of the identified key frame includes: 2D coordinate information, 3D coordinate information of the feature point of the key frame image, and description information that can uniquely identify the feature point.
  • Step S209 Determine the ratio of each sample feature point in the sample feature point set in the identified key frame image to obtain a ratio vector set.
  • the different sample feature points and the ratio vector set are stored in the preset bag-of-words model, so that the preset bag-of-words model can be used to retrieve the matching frame image of the image to be processed from the key frame images.
  • the step S209 can be implemented through the following process:
  • In the first step, the first average number is determined according to the first number of sample images contained in the sample image library and the first number of times that the i-th sample feature point appears in the sample image library.
  • The first average number is used to indicate how frequently the i-th sample feature point appears across the sample images. For example, if the first number of sample images is N and the first number of times that the i-th sample feature point appears in the sample image library is n_i, the first average number can be obtained as idf(i) = log(N / n_i).
  • In the second step, the second average number is used to indicate the proportion, among the sample feature points contained in the j-th key frame image, of the i-th sample feature point. For example, if the i-th sample feature point appears n_{i,I_t} times in the key frame image I_t collected at time t (the second number of times), and I_t contains n_{I_t} sample feature points in total (the second quantity), the second average number is tf(i, I_t) = n_{i,I_t} / n_{I_t}.
  • In the third step, the ratio of each sample feature point in each key frame image is obtained from the first average number and the second average number; for example, multiplying them gives the ratio v_t^i = tf(i, I_t) × idf(i), and collecting these ratios yields the ratio vector set. A sketch of this computation follows.
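  • The sketch below uses the reconstructed formulas idf(i) = log(N / n_i) and tf(i, I_t) = n_{i,I_t} / n_{I_t}; the input layout is illustrative, not from the patent.

```python
# Hypothetical sketch of step S209: compute the ratio vector of one key frame.
import math

def ratio_vector(num_sample_images, occurrences_in_library, counts_in_keyframe):
    """counts_in_keyframe[i]: times sample feature point i appears in image I_t."""
    n_total = sum(counts_in_keyframe)  # total sample feature points in I_t
    vec = []
    for n_i, n_i_it in zip(occurrences_in_library, counts_in_keyframe):
        idf = math.log(num_sample_images / n_i) if n_i else 0.0
        tf = n_i_it / n_total if n_total else 0.0
        vec.append(tf * idf)  # ratio of sample feature point i in I_t
    return vec
```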
  • Step S210 Store the ratio vector set and the key image feature set to obtain the global map to which the preset second map belongs.
  • the preset second map is a part of the global map
  • the ratio vector set corresponding to the identified key frame images and the key image feature set are stored in the preset second map, so that when the image acquisition device is positioned, this ratio vector set can be compared with the ratio vector set of the image to be processed, determined using the preset bag-of-words model, to find in the key image feature set a matching frame image that is highly similar to the image to be processed.
  • the above steps S206 to S210 give a way to create a global map.
  • each obtained key frame image is labeled with the identification information of its area, so that every key frame image in the resulting global map carries the identification information of its corresponding area.
  • Step S211 Determine the key frame image that identifies the identification information of the target area from the identified key frame images stored in the global map.
  • once the target area is determined, based on the identification information of the target area, the part of the global map corresponding to the key frame images identified with that identification information can be found in the global map; this part is the preset second map.
  • Step S212 Use a partial global map corresponding to the key frame image that identifies the identification information of the target area as the preset second map.
  • The above steps S211 and S212 describe a way to determine the preset second map: the key frame images identified with the identification information of the target area are searched for in the global map, and the part of the global map corresponding to these key frame images is used as the preset second map.
  • Step S213 According to the first image feature of the image to be processed, the second image feature is matched from the image feature of the key frame image stored in the preset second map corresponding to the target area.
  • step S213 can be implemented through the following steps:
  • In the first step, the ratio of each of the different sample feature points among the feature point set of the image to be processed is determined, obtaining the first ratio vector.
  • the preset bag-of-words model includes multiple different sample feature points and the ratio of multiple sample feature points among the feature points contained in the key frame image.
  • the first ratio vector may be determined based on the number of sample images, the number of times each sample feature point appears in the sample image library, the number of times it appears in the image to be processed, and the total number of sample feature points appearing in the image to be processed.
  • the second step is to obtain the second ratio vector.
  • the second ratio vector is the ratio of the multiple sample feature points among the feature points contained in the key frame image; the second ratio vector is pre-stored in a preset bag-of-words model, Therefore, when the image features of the image to be processed need to be matched, the second ratio vector is obtained from the preset bag-of-words model.
  • the determination process of the second ratio vector is similar to the determination process of the first ratio vector; and the dimensions of the first ratio vector and the second ratio vector are the same.
  • the third step is to match a second image feature from the image features of the key frame image according to the first image feature, the first ratio vector and the second ratio vector.
  • the third step can be achieved through the following process:
  • Here, the first ratio vector v_1 of the image to be processed is compared one by one with the second ratio vector v_2 of each key frame image, and the similarity of the two vectors is calculated to determine how similar each key frame image is to the image to be processed.
  • Similar key frame images whose similarity is greater than or equal to the second threshold are screened out to obtain a set of similar key frame images.
  • the similar key frame images to which the similar image features belong are determined to obtain a set of similar key frame images.
  • Then, the second image feature with the highest similarity to the first image feature is selected. For example, first, the time difference between the acquisition times of at least two similar key frame images is determined, as well as the similarity difference between the image features of the at least two similar key frame images and the first image feature; then, similar key frame images whose time difference is smaller than the third threshold and whose similarity difference is smaller than the fourth threshold are combined to obtain a joint frame image. That is, multiple similar key frame images that are close in acquisition time and close in similarity to the image to be processed are selected, indicating that these key frame images may be consecutive pictures; such similar key frame images are combined into a joint frame image (which may also be called an island), so that multiple joint frame images are obtained (see the grouping sketch below). Finally, from the image features of the joint frame images, the second image feature whose similarity with the first image feature meets the preset similarity threshold is selected. For example, first, the sum of the similarities between the image features of each key frame image contained in the multiple joint frame images and the first image feature is determined, one joint frame image at a time; the joint frame image with the largest similarity sum is determined as the target joint frame image with the highest similarity to the image to be processed. Finally, according to the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed, the second image feature whose similarity with the first image feature meets the preset similarity threshold is selected from the image features of the target joint frame image.
  • Since the description information of the feature points of the target joint frame image and of the image to be processed uniquely identifies those feature points, the second image feature with the highest similarity to the first image feature can be selected very accurately from the image features of the target joint frame image. This ensures the accuracy of matching the first image feature of the image to be processed with the second image feature and ensures that the selected second image feature is extremely similar to the first image feature.
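  • A minimal sketch of the island grouping, assuming each similar key frame is represented as a (timestamp, similarity) pair; the thresholds and data layout are illustrative.

```python
# Hypothetical sketch of joining similar key frames into joint frame images
# ("islands") and picking the target island with the largest similarity sum.
def group_into_islands(frames, time_thresh=1.0, sim_thresh=0.05):
    frames = sorted(frames, key=lambda f: f[0])   # sort by acquisition time
    islands, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        close_in_time = cur[0] - prev[0] < time_thresh       # third threshold
        close_in_score = abs(cur[1] - prev[1]) < sim_thresh  # fourth threshold
        if close_in_time and close_in_score:
            current.append(cur)        # extend the current island
        else:
            islands.append(current)    # close it and start a new one
            current = [cur]
    islands.append(current)
    # the target joint frame image is the island with the largest similarity sum
    return max(islands, key=lambda isl: sum(f[1] for f in isl))
```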
  • Step S214 Determine the pose information of the image acquisition device according to the second image feature.
  • step S214 can be implemented through the following process:
  • the image containing the second image feature is determined as a matching frame image of the image to be processed.
  • the key frame image containing the second image feature indicates that the key frame image is very similar to the image to be processed, so the key frame image is used as the matching frame image of the image to be processed.
  • The second step is to determine the target Euclidean distances, i.e., the Euclidean distances that are less than the first threshold, to obtain the target Euclidean distance set.
  • First, the Euclidean distances between a feature point of the image to be processed and the feature points contained in the matching frame image are determined; then, the Euclidean distances less than the first threshold are selected as target Euclidean distances to obtain the target Euclidean distance set. Processing one feature point of the image to be processed in this way yields one target Euclidean distance; processing multiple feature points of the image to be processed yields the target Euclidean distance set.
  • Selecting the target Euclidean distance less than the first threshold can also be understood as first determining the smallest Euclidean distance among the multiple Euclidean distances, and then judging whether this smallest Euclidean distance is less than the first threshold; if so, the smallest Euclidean distance is taken as the target Euclidean distance, so the target Euclidean distance set is the set of smallest Euclidean distances (see the sketch after the third step).
  • In the third step, if the number of target Euclidean distances included in the target Euclidean distance set is greater than the fifth threshold, the position information of the image acquisition device is determined based on the 3D coordinate information of the feature points of the key frame image corresponding to the second image feature and the 2D coordinate information of the feature points of the image to be processed corresponding to the first image feature.
  • If the number of target Euclidean distances included in the target Euclidean distance set is greater than the fifth threshold, there are enough feature points matching the first image feature, which shows that the similarity between this key frame image and the image to be processed is sufficiently high.
  • The 3D coordinate information of the feature points of the key frame image and the 2D coordinate information of the feature points of the image to be processed corresponding to the first image feature are used as the input of the Perspective-n-Point (PnP) algorithm: first, the 3D coordinate information, in the current coordinate system, of the feature points of the current frame of the image to be processed is found from their 2D coordinate information; then, the position information of the image acquisition device is solved from the 3D coordinate information of the feature points of the key frame image in the map coordinate system and the 3D coordinate information of the feature points of the current frame in the current coordinate system.
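  • A minimal sketch of the second and third steps above, assuming float descriptor arrays and plain Euclidean distance; the descriptor type is an assumption.

```python
# Hypothetical sketch: build the target Euclidean distance set by keeping, for
# each feature of the image to be processed, its smallest descriptor distance
# to the matching frame image when that distance is below the first threshold.
import numpy as np

def target_euclidean_set(query_desc, match_desc, first_threshold=0.7):
    targets = []
    for d in query_desc:                                 # each feature of X_C
        dists = np.linalg.norm(match_desc - d, axis=1)   # to all features of X_3
        smallest = dists.min()
        if smallest < first_threshold:                   # keep only close pairs
            targets.append(smallest)
    return targets  # PnP is run only if len(targets) > fifth threshold
```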
  • the position and posture of the image acquisition device can be provided in the positioning result at the same time, so the positioning accuracy of the image acquisition device is improved.
  • In this way, the image acquisition device is first coarsely positioned through the preset first map to determine the target area where it is located; the constructed preset second map is then loaded based on the identification information of the target area, and the preset bag-of-words model is used to retrieve the matching frame image corresponding to the image to be processed. The 2D coordinate information of the feature points of the image to be processed and the 3D coordinate information of the feature points of the key frame image are combined, and the PnP algorithm yields the precise position and collection orientation of the current image acquisition device in the map, achieving the positioning purpose. Positioning through key frame images thus provides the position and collection orientation of the image acquisition device in the map coordinate system, improves the accuracy of the positioning results, and is strongly robust. A minimal PnP sketch follows.
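  • The sketch below uses OpenCV's solvePnP; the camera intrinsic matrix K is an assumption, since the patent does not specify calibration.

```python
# Hypothetical sketch: solve the device pose from 3D map coordinates of matched
# key-frame feature points and 2D coordinates of the corresponding feature
# points in the current frame of the image to be processed.
import cv2
import numpy as np

def solve_pose(points_3d_map, points_2d_query, K):
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d_map, dtype=np.float64),    # 3D, map coordinate system
        np.asarray(points_2d_query, dtype=np.float64),  # 2D, current frame
        K, None)                                        # no lens distortion assumed
    if not ok:
        raise RuntimeError("PnP failed")
    return rvec, tvec  # rotation and translation vectors
```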
  • FIG. 2B is a schematic diagram of another implementation process of the positioning method according to the embodiment of this application. As shown in FIG. 2B, the method includes the following steps:
  • Step S221 Determine the current network feature information of the network where the image acquisition device is currently located.
  • Step S222 Search for an area identifier corresponding to the current network feature information from the preset first map.
  • Step S223 Determine the target area where the image acquisition device is located according to the area identifier.
  • Step S224 According to the first image feature of the image to be processed, the second image feature is matched from the image feature of the key frame image stored in the preset second map corresponding to the target area.
  • Step S225 Determine the map coordinates, in the map coordinate system corresponding to the preset second map, of the feature points of the key frame image corresponding to the second image feature.
  • That is, the feature points corresponding to the second image feature are obtained from the preset second map, together with their 3D coordinates in the map coordinate system corresponding to the preset second map.
  • Step S226 Determine the current coordinates, in the current coordinate system where the image acquisition device is located, of the feature points of the key frame image corresponding to the second image feature.
  • The map coordinates are used as the input of the PnP algorithm, and the current coordinates of the feature points in the current coordinate system of the image acquisition device are obtained.
  • Step S227 Determine a conversion relationship between the current coordinate system and the map coordinate system according to the map coordinates and the current coordinates.
  • map coordinates and the current coordinates are compared, and the rotation vector and the translation vector of the image acquisition device relative to the map coordinate system in the current coordinate system are determined.
  • Step S228 Determine, according to the conversion relationship and the current coordinates of the image acquisition device in the current coordinate system, the position of the image acquisition device in the map coordinate system and the collection orientation of the image acquisition device relative to the map coordinate system.
  • The rotation vector is used to rotate the current coordinates of the image acquisition device to determine the collection orientation of the image acquisition device relative to the map coordinate system, and the translation vector is used to translate the current coordinates of the image acquisition device to determine the position of the image acquisition device in the map coordinate system; a sketch of this conversion follows.
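  • The sketch below assumes the world-to-camera convention of cv2.solvePnP, under which the device position in the map frame is -Rᵀt; the convention is an assumption, not stated in the patent.

```python
# Hypothetical sketch of steps S227/S228: recover the device position in the
# map coordinate system and its collection orientation from rvec and tvec.
import cv2
import numpy as np

def device_pose_in_map(rvec, tvec):
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    position = (-R.T @ tvec).ravel()  # device position in map coordinates
    orientation = R.T                 # columns: camera axes in the map frame
    return position, orientation
```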
  • FIG. 3A is a schematic diagram of another implementation process of the positioning method according to the embodiment of the present application. As shown in FIG. 3A, the method includes the following steps:
  • Step S301 Load the created preset first map and save it locally.
  • the preset first map may be understood as a WiFi fingerprint map
  • the process of creating the preset first map may be implemented in an offline stage when the current network is not connected.
  • In the offline stage, in order to collect fingerprints at various locations, a database is first built; that is, multiple measurements are performed in multiple areas to obtain the database. The multiple areas may be within the network coverage area or beyond it, including the area where the image acquisition device that collects the images to be processed is located, for example, an area arbitrarily designated by the developer who built the database.
  • the establishment of the corresponding relationship between the location and the fingerprint in the database is usually carried out in the offline stage. As shown in FIG. 3B, the geographic area is covered by a rectangular grid.
  • the geographic area is divided into a grid of 4 rows and 7 columns.
  • AP31 and AP32 are wireless access points in the network.
  • AP31 and AP32 are deployed in this area for communication.
  • the signal strength sent by the AP is used to construct fingerprint information.
  • the average signal strength from each AP is obtained.
  • the collection time is about 5 to 15 minutes, about once every second, and the mobile device may have different orientations and angles during the collection.
  • The distribution of the signal strength samples can also be used as the fingerprint. Each grid point corresponds to a two-dimensional vector (i.e., the fingerprint), thereby constructing a WiFi fingerprint map (i.e., the preset first map). In general, with N APs, the fingerprint is an N-dimensional vector.
  • the grid granularity of the preset first map is allowed to be very large, and can reach the room level, because the preset first map is only used for coarse positioning.
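  • A minimal sketch of this offline construction, assuming samples are gathered as {area_id: {ap_name: [rssi samples]}}; the layout is illustrative.

```python
# Hypothetical sketch of step S301's offline stage: average the RSSI samples
# collected in each grid cell to obtain one fingerprint vector per cell.
import numpy as np

def build_first_map(samples: dict, ap_order: list) -> dict:
    """Return {area_id: np.ndarray of mean RSSI, ordered by ap_order}."""
    first_map = {}
    for area_id, per_ap in samples.items():
        first_map[area_id] = np.array(
            [np.mean(per_ap.get(ap, [-100.0])) for ap in ap_order])  # missing AP -> weak default
    return first_map  # one N-dimensional fingerprint per grid cell
```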
  • Step S302 Select a key frame image that meets a preset condition from the sample image library.
  • Step S303 Extract the image features of the key frame image in real time during the acquisition process.
  • image feature extraction is a process of interpretation and annotation of key frame images.
  • In step S303, the 2D coordinate information, 3D coordinate information, and description information (that is, the descriptor information) of the feature points of the key frame image need to be extracted; the 3D coordinate information of a feature point of the key frame image is obtained by mapping its 2D coordinate information into the three-dimensional coordinate system where the preset second map is located.
  • For example, 150 feature points are extracted (150 is an empirical value: with too few feature points the tracking failure rate is high, and with too many the efficiency of the algorithm suffers) and used for image tracking; the descriptors of the feature points are extracted for feature point matching.
  • the 3D coordinate information (i.e., depth information) of a feature point is calculated by triangulation and is used to determine the location of the image acquisition device.
  • Step S304 Determine the ratio of each sample feature point in the key frame image in real time during the acquisition process to obtain a ratio vector.
  • the step S304 can be understood as follows: during the acquisition of the key frame images, the ratio vector of the current key frame image is extracted in real time.
  • the word bag model is described in the form of a vocabulary tree.
  • the bag-of-words model includes the sample image database 41, which is the root node of the vocabulary tree, and sample images 42, 43, and 44, which are the leaf nodes; sample feature points 1 to 3 are different sample feature points in sample image 42, sample feature points 4 to 6 are different sample feature points in sample image 43, and sample feature points 7 to 9 are different sample feature points in sample image 44.
  • In computing the ratio vector, multiple parameters need to be obtained: the number of sample images N (i.e., the first number); the number of times n_i that the sample feature point w_i appears in the sample image library (i.e., the first number of times); the image I_t collected at time t; the number of times n_{i,I_t} that the sample feature point w_i appears in the key frame image I_t (i.e., the second number of times); and the total number n_{I_t} of sample feature points appearing in I_t (i.e., the second quantity). From the ratios of the sample feature points, each key frame image I_t yields a w-dimensional floating-point vector, i.e., the ratio vector; the ratio vectors can also serve as the feature information of the preset bag-of-words model.
  • an offline preset second map that depends on the key frame image is constructed.
  • the preset second map stores the image features of the key frame images locally in a binary format, including the 2D coordinate information, 3D coordinate information, and description information (i.e., the 2D coordinates, 3D coordinates, and descriptor information).
  • Step S305 Use the identification information of the region corresponding to the key frame image to label the key frame image, so that the identified key frame image is associated with the preset first map to obtain a global map.
  • the key frame image is annotated during the collection process
  • the annotation content is the area ID
  • the annotation content of the key frame image associated with the WiFi fingerprint map is the area ID.
  • the area ID corresponds to the grid points used when the preset first map was created; in this mode, one area of the preset first map corresponds to one area ID, and one area ID corresponds to multiple key frame images. As shown in FIG. 3C, the identification information of key frame image 331 and key frame image 332 is ID341, the identification information of area 33; the identification information of key frame image 333 is ID342, the identification information of area 34; the identification information of key frame image 334 and key frame image 335 is ID343, the identification information of area 35; and the identification information of key frame image 336 is ID344, the identification information of area 36.
  • Through the above steps S301 to S305, a WiFi fingerprint map (that is, the preset first map) and a global map are constructed, and the preset second map stores the feature point information of the visual key frames (including the 2D coordinates, 3D coordinates, and descriptor information) and the label information locally in a binary format.
  • the two maps will be loaded and used separately.
  • In step S306, the image acquisition device is coarsely positioned through the preset first map to obtain the target area where the image acquisition device is located.
  • Step S307 Determine the key frame image that identifies the identification information of the target area from the identified key frame images stored in the global map, and obtain a preset second map.
  • the preset second map can be understood as a local map of the global map.
  • step S308 image acquisition is performed by using the image acquisition device to obtain an image to be processed.
  • Step S309 in the process of acquiring the image to be processed, extract the first image feature in the current frame of the image to be processed in real time.
  • Extracting the first image feature of the current frame of the image to be processed in real time is similar to step S303, except that the 3D coordinate information of the image to be processed does not need to be determined, because the subsequent PnP algorithm does not require the 3D coordinate information of the image to be processed.
  • step S310 the matching frame image of the current frame of the image to be processed in the preset second map is retrieved through the bag-of-words model.
  • retrieving, through the bag-of-words model, the matching frame image of the current frame of the image to be processed in the preset second map can be understood as using the feature information of the bag-of-words model, that is, the ratio vector set, to retrieve the matching frame image of the current frame of the image to be processed in the preset second map.
  • the step S310 can be implemented through the following process:
  • the first step is to find the similarity between the current frame of the image to be processed and each key frame image.
  • the similarity s(v_1, v_2) is calculated as follows. First, v_1 and v_2 are determined, where v_1 is the first ratio vector of the sample feature points of the bag-of-words model in the current frame of the image to be processed, and v_2 is the second ratio vector of those sample feature points in a key frame image. Based on v_1 and v_2, the similarity between the current frame of the image to be processed and each key frame image can be determined (a sketch of this score appears after the third step). If the bag-of-words model contains w sample feature points, then the first ratio vector and the second ratio vector are both w-dimensional vectors. The similar key frame images whose similarity reaches the second threshold are filtered out from the key frame images to form a set of similar key frame images.
  • similar key frame images whose time stamp difference is less than the third threshold and similarity difference less than the fourth threshold are selected from the set of similar key frame images to join together to obtain a joint frame image (or called an island).
  • the second step can be understood as selecting, from the set of similar key frame images, similar key frame images with close timestamps and close similarity scores and combining them into an island; in this way, the set of similar key frame images is divided into multiple joint frame images (i.e., multiple islands).
  • Within a joint frame image, the similarity scores of the first key frame image and the last key frame image differ only slightly.
  • In the third step, the sum of the similarities between the image features of each key frame image contained in the multiple joint frame images and the first image feature is determined for each joint frame image.
  • The joint frame image with the largest similarity sum is determined as the target joint frame image with the highest similarity to the image to be processed, and the matching frame image with the highest similarity to the current frame of the image to be processed is found from the target joint frame image.
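  • A minimal sketch of the similarity score; the patent does not print the formula, so the standard L1 bag-of-words score s(v1, v2) = 1 - 0.5 * || v1/|v1| - v2/|v2| ||_1 is assumed here.

```python
# Hypothetical sketch of s(v1, v2) between two w-dimensional ratio vectors.
import numpy as np

def bow_similarity(v1: np.ndarray, v2: np.ndarray) -> float:
    n1 = v1 / (np.linalg.norm(v1, ord=1) or 1.0)  # L1-normalize both vectors
    n2 = v2 / (np.linalg.norm(v2, ord=1) or 1.0)
    return 1.0 - 0.5 * np.abs(n1 - n2).sum()      # 1 = identical, 0 = disjoint
```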
  • step S311 the PnP algorithm is used to determine the current position and the acquisition orientation of the image acquisition device in the map coordinate system.
  • step S311 can be implemented through the following steps:
  • In the first step, for the N-th feature point F_CN of the current frame X_C of the image to be processed, all feature points of the matching frame image X_3 are traversed, and the Euclidean distance between F_CN and each feature point of the matching frame image is determined.
  • As shown in FIG. 5A, the current frame X_C 51 of the image to be processed has a matching frame image X_3 52 that matches the current frame X_C 51.
  • In the second step, the group with the smallest Euclidean distance is selected for threshold judgment: if it is less than the first threshold, it is determined to be a target Euclidean distance and added to the target Euclidean distance set; otherwise it is discarded. Return to the first step until all feature points of X_C have been traversed, then go to the third step. For example, as shown in FIG. 5A, by comparing multiple Euclidean distances, a set of minimum Euclidean distances {F_1, F_2, F_3} is obtained.
  • In the third step, the target Euclidean distance set, which can be expressed as {F_1, F_2, F_3}, is formed. If the number of elements in the target Euclidean distance set is greater than the fifth threshold, proceed to the fourth step; otherwise the algorithm ends and the position information of the matching frame X_3 is output.
  • the input of the PnP algorithm is the 3D coordinates of the feature points in the key frame image and the 2D coordinates of the feature points in the current frame of the image to be processed
  • the output of the algorithm is the position of the current frame of the image to be processed in the map coordinate system.
  • the PnP algorithm does not directly obtain the pose matrix of the image acquisition device from the sequence of matched pairs; instead, it first obtains the 3D coordinates, in the current coordinate system, of the feature points of the key frame image identified with the identification information of the target area, and then, from the 3D coordinates of the feature points in the map coordinate system and in the current coordinate system, solves the rotation vector and the translation vector of the current coordinate system relative to the map coordinate system; the collection orientation of the image acquisition device is then solved from the rotation vector, and its location from the translation vector.
  • the solution of the PnP algorithm starts from the law of cosines.
  • the location of the collection device is determined through the transformation from the map coordinate system to the current coordinate system.
  • the fusion positioning part mainly includes coarse positioning using the preset first map and fine positioning based on the visual key frame image.
  • the coarse positioning process determines the user's approximate location and also determines the local visual map to be loaded; fine positioning uses a monocular camera to collect the current image to be processed and loads the preset second map selected according to the coarse-positioning target area.
  • the bag-of-words model is used to retrieve and match the corresponding matching frame images, and finally the PnP algorithm is used to solve the current accurate pose of the image acquisition device in the map coordinate system to achieve the positioning purpose.
  • the indoor positioning method combining wireless indoor positioning and visual key frame images helps users locate their own position in real time and with high accuracy.
  • The preset first map (for example, a WiFi fingerprint map) is used together with the preset second map corresponding to the visual key frame images.
  • the embodiments of the present application can combine WiFi fingerprint maps and visual key frame maps for large-scale indoor scenes, with high positioning accuracy and strong robustness.
  • the embodiment of the present application provides a positioning device, which includes each module included and each unit included in each module, which can be implemented by a processor in a computer device; of course, it can also be implemented by a specific logic circuit;
  • the processor may be a central processing unit, a microprocessor, a digital signal processor, or a field programmable gate array.
  • the device 600 includes: a first determining module 601, a first searching module 602, a second determining module 603, a first extracting module 604, The first matching module 605 and the third determining module 606, wherein:
  • the first determining module 601 is configured to determine current network feature information of the current location of the network where the image acquisition device is located;
  • the first search module 602 is configured to search for an area identifier corresponding to the current network feature information from a preset first map;
  • the second determining module 603 is configured to determine the target area where the image acquisition device is located according to the area identifier
  • the first extraction module 604 is configured to use the image acquisition device to collect an image to be processed, and extract the first image feature of the image to be processed;
  • the first matching module 605 is configured to match the image features corresponding to the first image feature from the image features of the key frame images stored in the preset second map corresponding to the target area, to obtain a second image feature;
  • the third determining module 606 is configured to determine the pose information of the image acquisition device according to the second image feature.
  • the device further includes:
  • the first dividing module is configured to divide the coverage area of the current network into multiple regions
  • a fourth determining module configured to determine the network characteristic information of the multiple wireless access points in the current network in each area
  • the first storage module is configured to store the identification information of each area and the network feature information corresponding to each area as the preset first map; wherein the identification information of each area is different.
  • the first determining module 601 includes:
  • the first determining submodule is configured to determine target feature information that matches the current network feature information from the network feature information stored in the preset first map;
  • the second determining sub-module is configured to search for the area identifier corresponding to the current network characteristic information according to the correspondence between the network characteristic information and the area identification information stored in the preset first map.
  • the device further includes:
  • the second extraction module is configured to extract the feature point set of the image to be processed
  • a fifth determining module configured to determine the description information of each feature point in the feature point set and the two-dimensional coordinate information of each feature point in the image to be processed;
  • the sixth determining module is configured to determine the description information and the two-dimensional coordinate information as the first image feature.
  • the device further includes:
  • the first selection module is configured to select multiple key frame images meeting preset conditions from the sample image library to obtain a set of key frame images
  • the first identification module is configured to use identification information of the region corresponding to each key frame image to identify each of the key frame images in a one-to-one correspondence to obtain a set of identified key frame images;
  • the third extraction module is configured to extract the image features of each identified key frame image to obtain a key image feature set
  • the fourth extraction module is configured to extract feature points of the sample image from the sample image library to obtain a sample feature point set containing different feature points;
  • the seventh determining module is configured to determine the ratio of each sample feature point in the sample feature point set in the identified key frame image to obtain a ratio vector set;
  • the second storage module is configured to store the ratio vector set and the key image feature set to obtain the global map to which the preset second map belongs.
  • the seventh determining module includes:
  • the third determining submodule is configured to determine the first average number of times according to the first number of sample images contained in the sample image library and the first number of times the i-th sample feature point appears in the sample image library; wherein, i is an integer greater than or equal to 1; the first average number is configured to indicate the average number of times the i-th sample feature point appears in each sample image;
  • the fourth determining submodule is configured to, based on the second number of occurrences of the i-th sample feature point in the j-th key frame image and the second number of sample feature points contained in the j-th key frame image, Determine the second average number; where j is an integer greater than or equal to 1; the second average number is used to indicate the ratio of the i-th sample feature point to the sample feature points contained in the j-th key frame image;
  • the fifth determining submodule is configured to obtain the ratio of the sample feature points in the key frame image according to the first average number and the second average number, and obtain the ratio vector set.
  • the device further includes:
  • An eighth determining module configured to determine the key frame image that identifies the identification information of the target area from the identified key frame images stored in the global map;
  • the ninth determining module is configured to use a partial global map corresponding to the key frame image that identifies the identification information of the target area as the preset second map.
• The first matching module 605 includes:
• a sixth determining sub-module, configured to respectively determine the ratios of the different sample feature points in the feature point set, to obtain a first ratio vector;
• a first obtaining submodule, configured to obtain a second ratio vector, where the second ratio vector is the ratio of the multiple sample feature points among the feature points contained in the key frame image;
• a first matching submodule, configured to match, according to the first ratio vector and the second ratio vector, the second image feature corresponding to the first image feature from the image features of the key frame images identified with the identification information of the target area.
• The first matching submodule includes:
• a first determining unit, configured to determine, according to the first ratio vector and the second ratio vector, from the image features of the key frame images identified with the identification information of the target area, similar image features whose similarity to the first image feature is greater than a first threshold;
• a second determining unit, configured to determine the similar key frame images to which the similar image features belong, to obtain a set of similar key frame images;
• a first selection unit, configured to select, from the image features of the similar key frame images, a second image feature whose similarity with the first image feature meets a preset similarity threshold.
• The first selection unit includes:
• a first determining subunit, configured to determine the time differences between the acquisition times of at least two similar key frame images, and the respective similarity differences between the image features of the at least two similar key frame images and the first image feature;
• a first joint subunit, configured to combine similar key frame images whose time difference is smaller than a second threshold and whose similarity difference is smaller than a third threshold, to obtain a joint frame image;
• a first selection subunit, configured to select, from the image features of the joint frame image, a second image feature whose similarity with the first image feature meets a preset similarity threshold.
• The first selection subunit is configured to: respectively determine the sum of the similarities between the image features of each key frame image contained in the multiple joint frame images and the first image feature; determine the joint frame image with the largest similarity sum as the target joint frame image with the highest similarity to the image to be processed; and select, according to the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed, a second image feature whose similarity with the first image feature meets a preset similarity threshold from the image features of the target joint frame image.
• The device further includes:
• a tenth determining module, configured to determine the image containing the second image feature as the matching frame image of the image to be processed;
• an eleventh determining module, configured to determine the target Euclidean distances, smaller than a fourth threshold, between any two feature points contained in the matching frame image, to obtain a target Euclidean distance set;
• a seventh determining submodule, configured to determine the pose information of the image acquisition device according to the second image feature if the number of target Euclidean distances contained in the target Euclidean distance set is greater than a fifth threshold.
• The seventh determining submodule includes:
• a third determining unit, configured to determine the map coordinates, in the map coordinate system corresponding to the preset second map, of the feature points of the key frame image corresponding to the second image feature;
• a fourth determining unit, configured to determine the current coordinates, in the current coordinate system where the image acquisition device is located, of the feature points of the key frame image corresponding to the second image feature;
• a fifth determining unit, configured to determine the conversion relationship between the current coordinate system and the map coordinate system according to the map coordinates and the current coordinates;
• a sixth determining unit, configured to determine, based on the conversion relationship and the current coordinates of the image acquisition device in the current coordinate system, the position of the image acquisition device in the map coordinate system and the collection orientation of the image acquisition device relative to the map coordinate system.
• If the above positioning method is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
• The technical solutions of the embodiments of the present application, in essence or in the parts contributing to the related technologies, can be embodied in the form of software products.
• The computer software product is stored in a storage medium and includes several instructions to enable a device containing the storage medium to execute all or part of the method described in each embodiment of the present application.
• The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), magnetic disks, optical disks, and other media that can store program codes.
• An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the positioning method provided in the foregoing embodiments are implemented.
• The disclosed device and method may be implemented in other ways.
• The device embodiments described above are merely illustrative.
• The division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
• The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
• The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
• The functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit;
• the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
• The foregoing program can be stored in a computer-readable storage medium.
• When the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes various media that can store program codes, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
• If the above-mentioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
• The technical solutions of the embodiments of the present application, in essence or in the parts contributing to the related technologies, can be embodied in the form of software products.
• The computer software product is stored in a storage medium and includes several instructions to enable a device to execute all or part of the method described in each embodiment of the present application.
• The aforementioned storage media include: removable storage devices, ROMs, magnetic disks, optical discs, and other media that can store program codes.
• In the embodiments of the present application, the target area where the image acquisition device is located is determined according to the network feature information of the current network where the image acquisition device configured to collect the image to be processed is located and a preset first map;
• according to the first image feature of the image to be processed, a second image feature is matched from the image features of the key frame images stored in a preset second map corresponding to the target area; and the pose information of the image acquisition device is determined according to the second image feature.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

A positioning method, a positioning apparatus, a terminal, and a storage medium, the method comprising: according to network feature information of a current network in which an image collection device used for collecting an image to be processed is located and a preset first map, determining a target region in which the image collection device is located; according to a first image feature of the image to be processed, matching a second image feature from among image features of a key image frame stored in a preset second map corresponding to the target region; and determining pose information of the image collection device according to the second image feature.

Description

Positioning method and device, terminal, and storage medium
Cross-reference to related applications
This application is filed based on the Chinese patent application with application number 201910922471.0, filed on September 27, 2019, and claims the priority of that Chinese patent application, the entire content of which is hereby incorporated into this application by reference.
Technical field
This application relates to indoor positioning technology, including but not limited to positioning methods and devices, terminals, and storage media.
Background
In related technologies, Pedestrian Dead Reckoning (PDR) is used for indoor positioning, but the accuracy achieved with this technology is only about 2 meters, so there is still room for improvement in positioning accuracy.
Summary of the invention
In view of this, the embodiments of the present application provide a positioning method and device, a terminal, and a storage medium to solve at least one problem existing in the related art.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a positioning method, which includes:
determining the current network feature information of the current location of the network where the image acquisition device is located;
searching, in a preset first map, for the area identifier corresponding to the current network feature information;
determining, according to the area identifier, the target area where the image acquisition device is located;
collecting an image to be processed with the image acquisition device, and extracting a first image feature of the image to be processed;
matching, from the image features of the key frame images stored in a preset second map corresponding to the target area, an image feature corresponding to the first image feature, to obtain a second image feature;
determining the pose information of the image acquisition device according to the second image feature.
Correspondingly, an embodiment of the present application provides a positioning device, which includes a first determining module, a first search module, a second determining module, a first extraction module, a first matching module, and a third determining module, wherein:
the first determining module is configured to determine the current network feature information of the current location of the network where the image acquisition device is located;
the first search module is configured to search, in a preset first map, for the area identifier corresponding to the current network feature information;
the second determining module is configured to determine, according to the area identifier, the target area where the image acquisition device is located;
the first extraction module is configured to collect an image to be processed with the image acquisition device and extract a first image feature of the image to be processed;
the first matching module is configured to match, from the image features of the key frame images stored in a preset second map corresponding to the target area, an image feature corresponding to the first image feature, to obtain a second image feature;
the third determining module is configured to determine the pose information of the image acquisition device according to the second image feature.
An embodiment of the present application provides a terminal, including a memory and a processor; the memory stores a computer program that can run on the processor, and the processor implements the steps of the above positioning method when executing the program.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above positioning method are implemented.
The embodiments of the present application provide a positioning method and device, a terminal, and a storage medium. First, the current network feature information of the current location of the network where the image acquisition device is located is determined; the area identifier corresponding to the current network feature information is looked up in a preset first map; and the target area where the image acquisition device is located is determined according to the area identifier. Then, an image to be processed is collected with the image acquisition device, and a first image feature of the image to be processed is extracted; from the image features of the key frame images stored in a preset second map corresponding to the target area, an image feature corresponding to the first image feature is matched, to obtain a second image feature. Finally, the pose information of the image acquisition device is determined according to the second image feature. In this way, the image acquisition device is first coarsely positioned with the preset first map, and then precisely positioned, based on the key frame images in the preset second map corresponding to the coarsely located target area, to obtain its pose information, which improves the positioning accuracy.
Description of the drawings
The drawings herein are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with this application and, together with the specification, serve to explain the technical solutions of this application.
FIG. 1 is a schematic diagram of the implementation flow of a positioning method according to an embodiment of this application;
FIG. 2A is a schematic diagram of another implementation flow of the positioning method according to an embodiment of this application;
FIG. 2B is a schematic diagram of another implementation flow of the positioning method according to an embodiment of this application;
FIG. 3A is a schematic diagram of yet another implementation flow of the positioning method according to an embodiment of this application;
FIG. 3B is a schematic diagram of a scene of the positioning method according to an embodiment of this application;
FIG. 3C is a schematic diagram of another scene of the positioning method according to an embodiment of this application;
FIG. 4 is a schematic structural diagram of a ratio vector according to an embodiment of this application;
FIG. 5A is a diagram of an application scenario for determining a matching frame image according to an embodiment of this application;
FIG. 5B is a schematic structural diagram of determining the location information of a collection device according to an embodiment of this application;
FIG. 6 is a schematic diagram of the composition structure of a positioning device according to an embodiment of this application.
Detailed description
In order to make the purpose, technical solutions, and advantages of this application clearer, the application is further described in detail below in conjunction with the accompanying drawings. The described embodiments should not be regarded as limiting this application; all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of this application.
An embodiment of the present application provides a positioning method. FIG. 1 is a schematic diagram of the implementation flow of the positioning method according to an embodiment of this application. As shown in FIG. 1, the method includes the following steps:
Step S101: determine the current network feature information of the current location of the network where the image acquisition device is located.
Here, the network feature information may be the signal strength of the network where the image acquisition device is located, or the distribution of the signal strength of that network.
Step S102: search, in the preset first map, for the area identifier corresponding to the current network feature information.
Here, the preset first map can be understood as a Wireless Fidelity (WiFi) fingerprint map: it stores the identification information of each area together with the signal strength, or the signal strength distribution, of the network corresponding to that area, so that the identification information of an area corresponds one-to-one with the network signal strength. Based on the current network feature information of the network where the image acquisition device is located, the corresponding area identifier can then be looked up in the preset first map.
Step S103: determine, according to the area identifier, the target area where the image acquisition device is located.
Here, the target area where the image acquisition device is located can be uniquely determined based on the area identifier.
Step S104: collect an image to be processed with the image acquisition device, and extract a first image feature of the image to be processed.
Here, the first image feature includes the description information and two-dimensional (2D) position information of the feature points of the image to be processed. In step S104, the feature points of the image to be processed are first extracted; the description information of each feature point and its 2D coordinate information in the image to be processed are then determined, where the description information of a feature point can be understood as descriptor information that uniquely identifies that feature point.
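Where a concrete form helps, the extraction in step S104 can be pictured as follows. The patent does not prescribe a particular feature detector, so the ORB detector from OpenCV is an illustrative assumption; a minimal sketch:

```python
# Sketch of step S104: extract feature points, their descriptors, and their
# 2D pixel coordinates from the image to be processed. ORB is an assumption;
# any detector producing (keypoint, descriptor) pairs would fit the scheme.
import cv2

def extract_first_image_feature(image_path: str):
    """Return (descriptors, 2D coordinates) for the image to be processed."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    coords_2d = [kp.pt for kp in keypoints]  # (x, y) in image pixels
    return descriptors, coords_2d
```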
Step S105: match, from the image features of the key frame images stored in the preset second map corresponding to the target area, an image feature corresponding to the first image feature, to obtain a second image feature.
Here, the second image feature includes the 2D coordinate information, three-dimensional (3D) position information, and description information of the feature points of the key frame images identified with the identification information of the target area. The preset second map corresponding to the target area can be understood as the part of the global map corresponding to the key frame images identified with the identification information of the target area. For example, if every key frame image is identified with the identification information of its corresponding region, then once the target area is determined, the preset second map can be determined according to the identification information of the target area: the key frame image set in the preset second map is the set of key frame images identified with the identification information of the target area, together with the ratio vector set corresponding to the ratio that each sample feature point occupies in those key frame images. Step S105 can be understood as selecting, from the image features of the key frame images stored in the preset second map, a second image feature with a high degree of matching with the first image feature.
Step S106: determine the pose information of the image acquisition device according to the second image feature.
Here, the pose information includes the collection orientation of the image acquisition device and the position of the image acquisition device. The position information of the image acquisition device is determined based on the 3D coordinate information of the feature points of the key frame image corresponding to the second image feature and the 2D coordinate information of the feature points of the image to be processed corresponding to the first image feature. For example, in the three-dimensional coordinate space where the image acquisition device is located, the 2D coordinate information of the feature points of the image to be processed is first converted into 3D coordinate information; this 3D coordinate information is then compared with the 3D coordinate information of the feature points of the key frame image in the three-dimensional coordinate system of the preset second map, to determine the position information of the image acquisition device. Since both the 2D and the 3D coordinate information of the feature points are considered, positioning yields both the position and the collection orientation of the image acquisition device, which improves the positioning accuracy.
In the embodiments of the present application, the image acquisition device is first coarsely positioned based on the preset first map to obtain the target area; then, a second image feature matching the first image feature is selected from the image features of the key frame images in the preset second map to achieve precise positioning of the image acquisition device. In this way, coarse positioning with the preset first map is followed by precise positioning with the preset second map based on the key frame images, so that the position and collection orientation of the image acquisition device are determined and the positioning accuracy is improved.
An embodiment of the present application provides a positioning method. FIG. 2A is a schematic diagram of another implementation flow of the positioning method according to an embodiment of this application. As shown in FIG. 2A, the method includes the following steps:
Step S201: divide the coverage area of the current network into multiple areas.
Here, the coverage area of the current network can be divided into multiple grid cells. As shown in FIG. 3B, the coverage area is divided into a grid of 4 rows and 7 columns; each grid cell represents one area, and each area has identification information that uniquely identifies it, for example, an identity document (ID) number of the area.
Step S202: determine the network feature information, in each area, of the multiple wireless access points in the current network.
Here, as shown in FIG. 3B, there are two wireless access points (APs) in the current network, namely AP 31 and AP 32; step S202 can be understood as determining the signal strength of AP 31 and of AP 32 in each grid cell.
Step S203: store the identification information of each area and the network feature information corresponding to each area, to obtain the preset first map.
Here, the network feature information corresponding to an area can be understood as the signal strengths of all the APs detectable in that area, and the identification information of each area is distinct. The identification information of each area and the network feature information corresponding to that area are stored in the preset first map in a one-to-one correspondence. The preset first map can be understood as a WiFi fingerprint map, where the fingerprint of an area is the signal strengths of the APs detectable in that area; as shown in FIG. 3B, the fingerprint of one grid cell (i.e., one area) is a two-dimensional vector ρ = [ρ1, ρ2], where ρi is the average signal strength from the i-th AP.
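For illustration, building the preset first map from raw scans might look like the following minimal sketch; the data layout (cell IDs mapping to per-AP average RSSI) is an assumption, not taken from the patent:

```python
# Sketch of steps S201-S203: each grid-cell ID maps to the average signal
# strength of every AP observable in that cell, forming its fingerprint.
from collections import defaultdict

def build_wifi_fingerprint_map(scans):
    """scans: iterable of (cell_id, {ap_id: rssi}) measurements."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for cell_id, reading in scans:
        for ap, rssi in reading.items():
            sums[cell_id][ap] += rssi
            counts[cell_id][ap] += 1
    # fingerprint of a cell: one mean-RSSI entry per AP
    return {
        cell_id: {ap: sums[cell_id][ap] / counts[cell_id][ap] for ap in aps}
        for cell_id, aps in sums.items()
    }
```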
The above steps S201 to S203 provide a way to create the preset first map, in which the identification information of each area corresponds one-to-one with the network feature information detectable in that area. In this way, once the network feature information of the network where the image acquisition device is located is determined, the area where the device is located can be roughly determined in the preset first map.
Step S204: determine, from the network feature information stored in the preset first map, the target feature information that matches the current network feature information.
Here, the preset first map stores the network feature information corresponding to each area; based on the current network feature information, the target feature information with a high similarity to it can be found in the preset first map.
Step S205: search for the area identifier corresponding to the current network feature information according to the correspondence between the network feature information and the area identification information stored in the preset first map.
Here, the network feature information in the preset first map corresponds one-to-one with the area identification information. Therefore, once the target feature information is determined, the target area where the image acquisition device is located can be located according to the correspondence between the network feature information and the area identification information stored in the first map, achieving coarse positioning of the image acquisition device, for example, determining the room where the image acquisition device is located.
The above steps S204 and S205 provide a way of implementing "searching, in the preset first map, for the area identifier corresponding to the current network feature information": after the current network feature information of the network where the image acquisition device is located is obtained, the target area where the device is located is found according to the one-to-one correspondence between the network feature information and the area identification information in the preset first map, thereby achieving rough positioning of the image acquisition device.
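A minimal sketch of this coarse lookup, assuming Euclidean distance over RSSI vectors as the similarity measure (the patent only requires that the best-matching stored fingerprint be found); the -100 dBm default for an unseen AP is illustrative:

```python
# Sketch of steps S204-S205: match the current RSSI readings against the
# stored fingerprints and return the area ID of the closest one.
def lookup_area_id(current: dict, fingerprint_map: dict) -> str:
    def distance(fp: dict) -> float:
        aps = set(current) | set(fp)
        return sum((current.get(ap, -100.0) - fp.get(ap, -100.0)) ** 2
                   for ap in aps) ** 0.5
    return min(fingerprint_map, key=lambda cid: distance(fingerprint_map[cid]))
```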
Step S206: select, from the sample image library, multiple key frame images meeting preset conditions, to obtain a key frame image set.
Here, it is first determined whether the scene corresponding to the sample images is a continuous scene or a discrete scene; the determination process is as follows:
In the first step, a preset number of corner points are selected from the sample image, a corner point being a pixel in the sample image that differs significantly from a preset number of surrounding pixels; for example, 150 corner points are selected.
In the second step, if the number of identical corner points contained in two sample images with adjacent acquisition times is greater than or equal to a specific threshold, the scene corresponding to the sample images is determined to be a continuous scene. Two sample images with adjacent acquisition times can also be understood as two consecutive sample images; the larger the number of identical corner points they contain, the higher the correlation between the two sample images, indicating that they come from a continuous scene. A continuous scene is, for example, a single indoor environment, such as a bedroom, a living room, or a single conference room.
In the third step, if the number of identical corner points contained in two sample images with adjacent acquisition times is smaller than the specific threshold, the scene corresponding to the sample images is determined to be a discrete scene. The smaller the number of identical corner points contained in the two sample images, the lower their correlation, indicating that they come from discrete scenes. A discrete scene spans multiple indoor environments, for example, multiple rooms in a building or multiple conference rooms on one floor.
Then, if the scene corresponding to the sample images is a discrete scene, key frame images are selected from the sample image library according to an input selection instruction; that is, if the sample images belong to a discrete scene, the sample images do not all correspond to one scene, so the user manually selects the key frame images, which ensures the validity of the selected key images in different environments.
If the scene corresponding to the sample images is a continuous scene, key frame images are selected from the sample image library according to a preset frame rate or parallax (as sketched below); that is, if the sample images belong to a continuous scene, the sample images correspond to the same scene, so sample images meeting a preset frame rate or preset parallax are automatically selected as key frame images, which both ensures the validity of the selected key images and improves the efficiency of selecting them.
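A minimal sketch of the continuous-scene case, assuming a fixed frame gap and a parallax threshold as the preset conditions; both threshold values are illustrative:

```python
# Sketch of step S206 (continuous scene): keep a frame as a key frame when
# enough frames have elapsed or the parallax against the last key frame
# exceeds a threshold.
def select_keyframes(frames, frame_gap=15, parallax_thresh=20.0):
    """frames: list of (image, mean_parallax_vs_last_keyframe) tuples."""
    keyframes, since_last = [], frame_gap  # force the first frame in
    for image, parallax in frames:
        if since_last >= frame_gap or parallax >= parallax_thresh:
            keyframes.append(image)
            since_last = 0
        else:
            since_last += 1
    return keyframes
```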
Step S207: identify each key frame image, in a one-to-one correspondence, with the identification information of the region corresponding to that key frame image, to obtain a set of identified key frame images.
Here, the region corresponding to the image acquisition device that collected a key frame image, and the identification information of that region, are determined from the image content of the key frame image, and the key frame image is identified with that identification information. In this way, every key frame image carries the identification information of its corresponding region.
Step S208: extract the image features of each identified key frame image to obtain a key image feature set.
Here, the image features of an identified key frame image include the 2D coordinate information and 3D coordinate information of its feature points and the description information that uniquely identifies each feature point. The key image feature set is obtained so that a second image feature highly similar to the first image feature can later be matched from it, yielding the corresponding matching frame image.
Step S209: determine the ratio that each sample feature point in the sample feature point set occupies in the identified key frame images, to obtain a ratio vector set.
Here, after the ratio vector set is obtained, the different sample feature points and the ratio vector set are stored in a preset bag-of-words model, so that the preset bag-of-words model can be used to retrieve the matching frame image of the image to be processed from the key frame images. Step S209 can be implemented through the following process:
First, the first average number is determined according to the first number of sample images contained in the sample image library and the first number of times the i-th sample feature point appears in the sample image library. The first average number indicates how often the i-th sample feature point appears, on average, in each sample image. For example, if the first number of sample images is N and the i-th sample feature point appears n_i times in the sample image library, the first average number idf(i) = log(N / n_i) is obtained.
Secondly, the second average number is determined according to the second number of times the i-th sample feature point appears in the j-th key frame image and the second number of sample feature points contained in the j-th key frame image. The second average number indicates the proportion of the i-th sample feature point among the sample feature points contained in the j-th key frame image. For example, if the second number of times is n_{i,I_t} and the second number of sample feature points is n_{I_t}, the second average number tf(i, I_t) = n_{i,I_t} / n_{I_t} is obtained.
Finally, the ratio that each sample feature point occupies in the key frame images is obtained from the first average number and the second average number, yielding the ratio vector set. For example, multiplying the first average number by the second average number gives the ratio vector entry v_t^i = tf(i, I_t) × idf(i).
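A minimal sketch of this computation, assuming the standard TF-IDF weighting used in bag-of-words place recognition; the dictionary-based data layout is illustrative:

```python
# Sketch of step S209: v_t^i = tf(i, I_t) * idf(i) for each sample feature
# point (visual word) i occurring in key frame image I_t.
import math

def ratio_vector(word_counts, docs_containing_word, num_sample_images):
    """word_counts: {word_id: occurrences of that word in this key frame}.
    docs_containing_word: {word_id: number of sample images containing it}."""
    total_in_frame = sum(word_counts.values())  # n_{I_t}
    return {
        i: (count / total_in_frame)                              # tf(i, I_t)
           * math.log(num_sample_images / docs_containing_word[i])  # idf(i)
        for i, count in word_counts.items()
    }
```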
Step S210: store the ratio vector set and the key image feature set, to obtain the global map to which the preset second map belongs.
Here, the preset second map is a part of the global map. The ratio vector set corresponding to the identified key frame images and the key image feature set are stored in the preset second map, so that when the image acquisition device is positioned, this ratio vector set can be compared with the ratio vector set of the image to be processed determined by the preset bag-of-words model, in order to determine, from the key image feature set, the matching frame image that is highly similar to the image to be processed.
The above steps S206 to S210 provide a way to create the global map, in which each obtained key frame image is identified with the identification information of its region, so that every key frame image in the resulting global map carries the identification information of the corresponding region.
Step S211: determine, from the identified key frame images stored in the global map, the key frame images identified with the identification information of the target area.
Here, after the target area is determined, the part of the global map corresponding to the key frame images identified with the identification information of the target area, i.e., the preset second map, can be found in the global map based on the identification information of the target area.
Step S212: use the part of the global map corresponding to the key frame images identified with the identification information of the target area as the preset second map.
The above steps S211 and S212 provide a way to determine the preset second map: the key frame images identified with the identification information of the target area are looked up in the global map, and the part of the global map corresponding to these key frame images is used as the preset second map.
Step S213: match, according to the first image feature of the image to be processed, a second image feature from the image features of the key frame images stored in the preset second map corresponding to the target area.
Here, step S213 can be implemented through the following steps:
In the first step, the ratios that the different sample feature points occupy within the feature point set are respectively determined, to obtain a first ratio vector.
Here, the multiple sample feature points are different from one another. The preset bag-of-words model contains multiple different sample feature points and the ratios that these sample feature points occupy among the feature points contained in the key frame images. The first ratio vector may be determined from the number of sample images, the number of times a sample feature point appears in the sample images, the number of times the sample feature point appears in the image to be processed, and the total number of sample feature points appearing in the image to be processed.
In the second step, a second ratio vector is obtained.
Here, the second ratio vector is the ratio that the multiple sample feature points occupy among the feature points contained in the key frame image. The second ratio vector is stored in advance in the preset bag-of-words model, so when the image features of the image to be processed need to be matched, the second ratio vector is obtained from the preset bag-of-words model. The determination of the second ratio vector is similar to that of the first ratio vector, and the first ratio vector and the second ratio vector have the same dimension.
In the third step, a second image feature is matched from the image features of the key frame images according to the first image feature, the first ratio vector, and the second ratio vector.
Here, the third step can be implemented through the following process:
First, according to the first ratio vector and the second ratio vector, similar image features whose similarity to the first image feature is greater than a second threshold are determined from the image features of the key frame images.
Here, the first ratio vector v1 of the image to be processed is compared one by one with the second ratio vector v2 of each key frame image; a computation over these two ratio vectors determines the similarity between each key frame image and the image to be processed, so that similar key frame images whose similarity is greater than or equal to the second threshold are screened out, yielding a set of similar key frame images.
Secondly, the similar key frame images to which the similar image features belong are determined, to obtain the set of similar key frame images.
Finally, from the image features of the similar key frame images, a second image feature whose similarity with the first image feature meets a preset similarity threshold is selected.
Here, from the image features contained in the similar key frame images, the second image feature with the highest similarity to the first image feature is selected. For example, first, the time differences between the acquisition times of at least two similar key frame images, and the respective similarity differences between the image features of those key frame images and the first image feature, are determined; then, similar key frame images whose time difference is smaller than a third threshold and whose similarity difference is smaller than a fourth threshold are combined to obtain a joint frame image. In other words, multiple similar key frame images whose acquisition times are close and whose similarities to the image to be processed are close are selected; such key frame images are likely consecutive pictures, so they are combined into a joint frame image (which may also be called an island), yielding multiple joint frame images. Finally, from the image features of the joint frame images, a second image feature whose similarity with the first image feature meets the preset similarity threshold is selected. For example, the sum of the similarities between the image features of each key frame image contained in each joint frame image and the first image feature is determined one by one; the joint frame image with the largest similarity sum is determined as the target joint frame image with the highest similarity to the image to be processed; and, according to the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed, a second image feature whose similarity with the first image feature meets the preset similarity threshold is selected from the image features of the target joint frame image. Since the description information of the feature points of the target joint frame image and of the image to be processed can uniquely identify those feature points, the second image feature with the highest similarity to the first image feature can be selected very accurately from the image features of the target joint frame image. This ensures the accuracy of matching a second image feature to the first image feature of the image to be processed, and ensures that the selected second image feature is extremely similar to the first image feature.
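A minimal sketch of this retrieval step, assuming an L1-based similarity score between ratio vectors and a time-gap rule for grouping similar key frames into joint frames ("islands"); the scoring function and thresholds are assumptions in the style of bag-of-words place recognition, not taken verbatim from the patent:

```python
# Sketch of step S213's retrieval: score key frames against the image to be
# processed, group temporally close high-scoring frames into joint frames,
# and keep the island whose member scores sum highest.
def score(v1: dict, v2: dict) -> float:
    """Similarity of two ratio vectors (higher is more similar)."""
    keys = set(v1) | set(v2)
    return 1.0 - 0.5 * sum(abs(v1.get(k, 0.0) - v2.get(k, 0.0)) for k in keys)

def best_island(candidates, time_gap=1.5, min_score=0.05):
    """candidates: list of (timestamp, score), sorted by timestamp."""
    islands, current = [], []
    for ts, s in candidates:
        if s < min_score:
            continue
        if current and ts - current[-1][0] > time_gap:
            islands.append(current)
            current = []
        current.append((ts, s))
    if current:
        islands.append(current)
    # the island with the largest summed score is the target joint frame image
    return max(islands, key=lambda isl: sum(s for _, s in isl), default=None)
```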
Step S214: determine the pose information of the image acquisition device according to the second image feature.
Here, step S214 can be implemented through the following process:
In the first step, the image containing the second image feature is determined as the matching frame image of the image to be processed.
In the embodiments of the present application, a key frame image containing the second image feature is very similar to the image to be processed, so that key frame image is used as the matching frame image of the image to be processed.
In the second step, the target Euclidean distances, smaller than a first threshold, between any two feature points contained in the matching frame image are determined, to obtain a target Euclidean distance set.
In the embodiments of the present application, for example, the Euclidean distance between any two feature points contained in the matching frame image is first determined; then, the Euclidean distances smaller than the first threshold are selected as target Euclidean distances, to obtain the target Euclidean distance set. Processing one feature point of the image to be processed in this way yields one target Euclidean distance set, so processing multiple feature points of the image to be processed yields multiple Euclidean distance sets. A target Euclidean distance smaller than the first threshold can also be understood as follows: the smallest of the multiple Euclidean distances is determined first, and if it is smaller than the first threshold, it is determined as the target Euclidean distance; the target Euclidean distance set is then the set, among the multiple Euclidean distance sets, with the smallest Euclidean distances.
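A minimal sketch of this filtering, assuming the Euclidean distances are computed between the feature descriptors of the two images being matched; the threshold value is illustrative:

```python
# Sketch of the consistency check in step S214: count descriptor matches whose
# smallest Euclidean distance falls below a threshold; only when enough
# survive is the matching frame image trusted for pose estimation.
import numpy as np

def count_close_matches(desc_query: np.ndarray, desc_match: np.ndarray,
                        dist_thresh: float = 50.0) -> int:
    """desc_query: (N, D) descriptors of the image to be processed;
    desc_match: (M, D) descriptors of the matching frame image."""
    close = 0
    for d in desc_query:
        dists = np.linalg.norm(desc_match - d, axis=1)  # distance to each point
        if dists.min() < dist_thresh:                   # keep only the smallest
            close += 1
    return close
```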
In the third step, if the number of target Euclidean distances contained in the target Euclidean distance set is greater than a fifth threshold, the position information of the image acquisition device is determined based on the 3D coordinate information of the feature points of the key frame image corresponding to the second image feature and the 2D coordinate information of the feature points of the image to be processed corresponding to the first image feature.
In the embodiments of the present application, if the number of target Euclidean distances contained in the target Euclidean distance set is greater than the fifth threshold, the number of target Euclidean distances is sufficiently large, meaning that there are enough feature points matching the first image feature and that the similarity between this key frame image and the image to be processed is sufficiently high. Then, the 3D coordinate information of the feature points of the key frame image and the 2D coordinate information of the feature points of the image to be processed corresponding to the first image feature are used as the input of a Perspective-n-Point (PnP) algorithm: the 3D coordinate information of each feature point in the current coordinate system is first obtained from the 2D coordinate information of that feature point in the current frame of the image to be processed, and the position information of the image acquisition device is then solved from the 3D coordinate information of the feature points of the key frame image in the map coordinate system and the 3D coordinate information of the feature points in the current frame of the image to be processed in the current coordinate system. In this method, the 2D and 3D coordinate information of the key frame image are considered at the same time, so the positioning result provides both the position and the attitude of the image acquisition device, which improves the positioning accuracy of the image acquisition device.
In the embodiments of the present application, the image acquisition device is coarsely positioned via the preset first map to determine the target area where it is located; then, the constructed preset second map is loaded based on the identification information of the target area, and the preset bag-of-words model is used to retrieve the matching frame image corresponding to the image to be processed; finally, the 2D coordinate information of the feature points of the image to be processed and the 3D coordinate information of the feature points of the key frame image are used as the input of the PnP algorithm, to obtain the precise position and collection orientation of the current image acquisition device in the map and achieve the positioning purpose. In this way, positioning is achieved through the key frame images, and the position and collection orientation of the image acquisition device in the map coordinate system are obtained, which improves the accuracy of the positioning result with strong robustness.
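A minimal sketch of the final pose solve, assuming OpenCV's solvePnP as the PnP implementation and known camera intrinsics K (the patent does not fix either choice):

```python
# Sketch of the PnP step: 3D map coordinates of the matched key-frame feature
# points plus their 2D pixel locations in the image to be processed yield the
# pose of the image acquisition device in the map coordinate system.
import numpy as np
import cv2

def solve_device_pose(points_3d_map, points_2d_image, K):
    """points_3d_map: (N, 3) float32; points_2d_image: (N, 2) float32; N >= 4."""
    ok, rvec, tvec = cv2.solvePnP(points_3d_map, points_2d_image, K,
                                  distCoeffs=None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # rotation: map frame -> camera frame
    position = (-R.T @ tvec).ravel()  # camera centre in map coordinates
    return position, R.T              # R.T encodes the collection orientation
```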
本申请实施例提供一种定位方法,图2B为本申请实施例定位方法另一实现流程示意图,如图2B所示,所述方法包括以下步骤:The embodiment of this application provides a positioning method. FIG. 2B is a schematic diagram of another implementation process of the positioning method according to the embodiment of this application. As shown in FIG. 2B, the method includes the following steps:
Step S221: Determine current network feature information of the current location of the network where the image acquisition device is located.

Step S222: Search, from the preset first map, for an area identifier corresponding to the current network feature information.

Step S223: Determine, according to the area identifier, the target area where the image acquisition device is located.

Step S224: Match, according to the first image feature of the image to be processed, a second image feature from the image features of the key frame images stored in the preset second map corresponding to the target area.

Step S225: Determine the map coordinates, in the map coordinate system corresponding to the preset second map, of the feature points of the key frame image corresponding to the second image feature.

Here, the 3D coordinates, in the map coordinate system corresponding to the preset second map, of the feature points corresponding to the second image feature are acquired.

Step S226: Determine the current coordinates, in the current coordinate system where the image acquisition device is located, of the feature points of the key frame image corresponding to the second image feature.

Here, the map coordinates are used as the input of the PnP algorithm to obtain the current coordinates of these feature points in the current coordinate system where the image acquisition device is located.

Step S227: Determine, according to the map coordinates and the current coordinates, a conversion relationship of the current coordinate system relative to the map coordinate system.

Here, the map coordinates and the current coordinates are compared to determine the rotation vector and the translation vector of the current coordinate system relative to the map coordinate system.

Step S228: Determine, according to the conversion relationship and the current coordinates of the image acquisition device in the current coordinate system, the position of the image acquisition device in the map coordinate system and the acquisition orientation of the image acquisition device relative to the map coordinate system.

Here, the rotation vector is used to rotate the current coordinates of the image acquisition device to determine the acquisition orientation of the image acquisition device relative to the map coordinate system, and the translation vector is used to translate the current coordinates of the image acquisition device to determine the position of the image acquisition device in the map coordinate system.
In this embodiment of the present application, the 3D coordinates, in the current coordinate system, of the feature points corresponding to the second image feature are determined; by comparing the 3D coordinates of these feature points in the map coordinate system with their 3D coordinates in the current coordinate system, the rotation relationship of the current coordinate system relative to the map coordinate system is determined, and the acquisition orientation and position of the image acquisition device are then solved according to this relationship.
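As one possible illustration of steps S227 and S228, the rigid transform between the two 3D point sets (map frame versus current frame) can be estimated with the classical Kabsch/Umeyama procedure; this is a common choice rather than the method mandated by the disclosure, and the names below are illustrative.

```python
import numpy as np

def estimate_rigid_transform(points_map, points_cur):
    """Estimate R, t such that points_cur ~ R @ points_map + t (Kabsch)."""
    P = np.asarray(points_map, dtype=np.float64)   # (N, 3) in map frame
    Q = np.asarray(points_cur, dtype=np.float64)   # (N, 3) in current frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation only
    t = cq - R @ cp
    return R, t

def device_pose_in_map(R, t):
    """Invert the map->current transform to place the device in the map frame."""
    position = -R.T @ t        # device position in map coordinates
    orientation = R.T          # columns give the acquisition orientation axes
    return position, orientation
```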
An embodiment of the present application provides a positioning method. FIG. 3A is a schematic flowchart of yet another implementation of the positioning method according to an embodiment of the present application. As shown in FIG. 3A, the method includes the following steps:

Step S301: Load the created preset first map and save it locally.
Here, the preset first map can be understood as a WiFi fingerprint map, and the process of creating the preset first map (the WiFi fingerprint map) can be carried out in an offline stage in which the current network is not connected. In the offline stage, in order to collect fingerprints at various locations, a database is first built by performing multiple measurements in multiple areas; the multiple areas may cover a range greater than or equal to the network coverage and include the area where the image acquisition device that collects the image to be processed is located, for example, any area designated by the developer who builds the database. The correspondence between locations and fingerprints in this database is usually established in the offline stage. As shown in FIG. 3B, the geographic area is covered by a rectangular grid of 4 rows and 7 columns, and AP31 and AP32 are the wireless access points deployed in this area for communication. In the process of WiFi fingerprint positioning, the signal strength transmitted by each AP is used to construct the fingerprint information. At each grid point, the average signal strength from each AP is obtained through a period of data sampling; for example, the collection takes roughly 5 to 15 minutes at about one sample per second, and the mobile device may have different orientations and angles during collection. As shown in FIG. 3B, the fingerprint of a grid point is a two-dimensional vector ρ = [ρ_1, ρ_2], where ρ_i is the average signal strength from the i-th AP; the distribution of the signal strength samples can of course also be used as the fingerprint. Each grid point corresponds to such a vector (i.e., a fingerprint), and a WiFi fingerprint map (i.e., the preset first map) is thereby constructed. In this embodiment of the present application, if there are N APs, the fingerprint ρ is an N-dimensional vector. The grid granularity of the preset first map is allowed to be very coarse, down to room level, because the preset first map is only used for coarse positioning.
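A minimal sketch of such a fingerprint database and its lookup is given below; it assumes fingerprints are plain averaged RSSI vectors and uses the Euclidean distance as the matching metric, which is one simple choice among many. The class and function names are illustrative, not from the original disclosure.

```python
import numpy as np

class FingerprintMap:
    """Offline WiFi fingerprint map: area id -> averaged RSSI vector."""

    def __init__(self):
        self.area_ids = []
        self.fingerprints = []   # one N-dimensional RSSI vector per area

    def add_area(self, area_id, rssi_samples):
        """rssi_samples: (num_samples, num_aps) RSSI readings for one area."""
        self.area_ids.append(area_id)
        self.fingerprints.append(np.mean(rssi_samples, axis=0))

    def locate(self, rssi_now):
        """Return the area id whose fingerprint is closest to the live scan."""
        dists = [np.linalg.norm(fp - rssi_now) for fp in self.fingerprints]
        return self.area_ids[int(np.argmin(dists))]

# Usage: coarse positioning of the device from a live RSSI scan.
fp_map = FingerprintMap()
fp_map.add_area("room_A", np.array([[-40.0, -70.0], [-42.0, -68.0]]))
fp_map.add_area("room_B", np.array([[-65.0, -45.0], [-63.0, -47.0]]))
print(fp_map.locate(np.array([-41.0, -69.0])))  # -> "room_A"
```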
Step S302: Select, from the sample image library, key frame images that meet a preset condition.

Step S303: Extract, in real time during the acquisition process, the image features of the key frame images.
Here, image feature extraction is a process of interpreting and annotating the key frame images. In step S303, the 2D coordinate information, the 3D coordinate information and the description information (i.e., the descriptor information) of the feature points of each key frame image need to be extracted, where the 3D coordinate information of the feature points of a key frame image is obtained by mapping the 2D coordinate information of those feature points into the three-dimensional coordinate system in which the preset second map is located. For example, 150 2D feature points are extracted from each key frame image for image tracking (150 is an empirical value: too few feature points leads to a high tracking failure rate, while too many feature points affects the efficiency of the algorithm), and a descriptor is extracted for each feature point for feature point matching; then the 3D coordinate information (i.e., depth information) of the feature points is calculated by triangulation and used to determine the position of the image acquisition device.
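The following sketch illustrates one plausible realization of this step with OpenCV: ORB keypoints and descriptors are extracted (the 150-point budget is taken from the text, while the choice of ORB is an assumption), and 3D coordinates are recovered by triangulating matched points between two keyframes whose projection matrices are assumed to be known.

```python
import cv2
import numpy as np

def extract_keyframe_features(gray_image, n_features=150):
    """Extract 2D feature points and descriptors from a key frame image."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    pts_2d = np.array([kp.pt for kp in keypoints], dtype=np.float64)
    return pts_2d, descriptors

def triangulate_depth(P1, P2, pts1_2d, pts2_2d):
    """Recover 3D coordinates of points matched between two key frames.

    P1, P2: (3, 4) projection matrices of the two frames.
    pts1_2d, pts2_2d: (N, 2) matched 2D points in each frame.
    """
    pts4d = cv2.triangulatePoints(P1, P2, pts1_2d.T, pts2_2d.T)
    pts3d = (pts4d[:3] / pts4d[3]).T   # de-homogenize to (N, 3)
    return pts3d
```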
Step S304: Determine, in real time during the acquisition process, the ratio that each sample feature point occupies in each key frame image, to obtain a ratio vector.

Here, step S304 can be understood as follows: during the acquisition of the key frame images, the ratio vector of the current key frame image is extracted in real time. As shown in FIG. 4, the bag-of-words model is described in the form of a vocabulary tree. The bag-of-words model includes the sample image library 41, which is the root node of the vocabulary tree, and the sample images 42, 43 and 44, which are the leaf nodes 42, 43 and 44; sample feature points 1 to 3 are different sample feature points in sample image 42, sample feature points 4 to 6 are different sample feature points in sample image 43, and sample feature points 7 to 9 are different sample feature points in sample image 44. The bag-of-words model is assumed to contain w kinds of sample feature points, i.e., w is the number of feature point types extracted from the sample images of the bag-of-words model, so the bag-of-words model contains w sample feature points in total. Each sample feature point scores the key frame image with a floating-point value between 0 and 1, so that each key frame image can be represented by a w-dimensional floating-point vector. This w-dimensional vector is the ratio vector output by the bag-of-words model:

v_t = (η_1, η_2, ..., η_w).

In the scoring process, several parameters need to be obtained: the number N of sample images (i.e., the first number); the count n_i of times the sample feature point w_i appears in the sample image library (i.e., the first count); I_t, the image collected at time t; n_{i,I_t}, the count of times the sample feature point w_i appears in the key frame image I_t collected at time t (i.e., the second count); and n_{I_t}, the total number of sample feature points appearing in the key frame image I_t (i.e., the second number). The score of each sample feature point then follows the TF-IDF form

η_i = (n_{i,I_t} / n_{I_t}) · log(N / n_i).

Through this scoring, a w-dimensional floating-point vector, i.e., the ratio vector, is obtained for each key frame image, and these ratio vectors can also be used as the feature information of the preset bag-of-words model.
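A minimal sketch of this TF-IDF scoring is shown below, assuming that feature points have already been quantized to word indices through the vocabulary tree (the quantization itself is outside this sketch, and the names are illustrative).

```python
import numpy as np
from collections import Counter

def ratio_vector(word_ids_in_frame, n_word_in_library, num_sample_images, w):
    """Compute the w-dimensional TF-IDF ratio vector of one key frame.

    word_ids_in_frame: word indices of the sample feature points observed
        in the key frame I_t.
    n_word_in_library: array of length w; n_i = occurrences of word i in
        the whole sample image library.
    num_sample_images: N, the number of sample images.
    """
    counts = Counter(word_ids_in_frame)          # n_{i, I_t}
    total = max(len(word_ids_in_frame), 1)       # n_{I_t}
    v = np.zeros(w)
    for i, n_i_it in counts.items():
        if n_word_in_library[i] > 0:
            tf = n_i_it / total
            idf = np.log(num_sample_images / n_word_in_library[i])
            v[i] = tf * idf
    return v
```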
Through the above steps S302 to S304, an offline preset second map that depends on the key frame images is constructed. The preset second map stores the image features of the key frame images (including the 2D coordinate information, the 3D coordinate information and the description information, for example, 2D coordinates, 3D coordinates and descriptor information) in binary format on the local device; when the image acquisition device needs to be positioned, the preset second map will be loaded and used.
Step S305: Label each key frame image with the identification information of its corresponding area, so that the labeled key frame images are associated with the preset first map, to obtain a global map.

Here, the key frame images are labeled during the acquisition process; the label content is the area ID, which associates the key frame images with the WiFi fingerprint map. The area IDs correspond one-to-one to the grid points established when the preset first map was created. In this mode, one area of the preset first map corresponds to one area ID, and one area ID corresponds to multiple key frame images. As shown in FIG. 3C, the identification information labeled on key frame images 331 and 332 is ID341, the identification information of area 33; the identification information of key frame image 333 is ID342, the identification information of area 34; the identification information of key frame images 334 and 335 is ID343, the identification information of area 35; and the identification information of key frame image 336 is ID344, the identification information of area 36.
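Conceptually, this is a one-to-many index from area IDs to key frame images; a minimal sketch, with illustrative names, could look as follows.

```python
from collections import defaultdict

class GlobalMap:
    """Area-labeled key frame store backing the preset second map."""

    def __init__(self):
        self._by_area = defaultdict(list)   # area_id -> list of key frames

    def add_keyframe(self, area_id, keyframe):
        self._by_area[area_id].append(keyframe)

    def local_map(self, area_id):
        """Return the partial map (preset second map) for one target area."""
        return self._by_area[area_id]
```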
Through the above steps S301 to S305, a WiFi fingerprint map (i.e., the preset first map) and a global map are constructed; the preset second map stores the feature point information of the visual key frames (including 2D coordinates, 3D coordinates and descriptor information) as well as the label information in binary format locally. In the process of positioning the image acquisition device, these two maps are loaded and used separately.
Step S306: Coarsely position the image acquisition device through the preset first map, to obtain the target area where the image acquisition device is located.

Step S307: Determine, from the labeled key frame images stored in the global map, the key frame images labeled with the identification information of the target area, to obtain the preset second map.

Here, the preset second map can be understood as a local map of the global map.
Step S308: Perform image acquisition with the image acquisition device, to obtain the image to be processed.

Step S309: Extract, in real time during the acquisition of the image to be processed, the first image feature in the current frame of the image to be processed.

Here, extracting the first image feature in the current frame of the image to be processed in real time is similar to the process of step S303, except that the 3D coordinate information of the image to be processed does not need to be determined, because the subsequent PnP algorithm does not require the 3D coordinate information of the image to be processed.
Step S310: Retrieve, through the bag-of-words model, the matching frame image of the current frame of the image to be processed in the preset second map.

Here, retrieving the matching frame image of the current frame of the image to be processed in the preset second map through the bag-of-words model can be understood as retrieving it using the feature information of the bag-of-words model, i.e., the set of ratio vectors.

Step S310 can be implemented through the following process.

In the first step, the similarity between the current frame of the image to be processed and each key frame image is found. The similarity s(v_1, v_2) is calculated as follows: first, v_1 and v_2 are determined, where v_1 denotes the first ratio vector that each sample feature point contained in the bag-of-words model occupies in the current frame of the image to be processed, and v_2 denotes the second ratio vector that each sample feature point occupies in the key frame image. Based on v_1 and v_2, the similarity between the current frame of the image to be processed and each key frame image can be determined. If the bag-of-words model contains w kinds of sample feature points, both the first ratio vector and the second ratio vector are w-dimensional vectors. The key frame images whose similarity reaches the second threshold are filtered out to form a similar key frame image set.
In the second step, similar key frame images in the similar key frame image set whose timestamp differences are less than the third threshold and whose similarity differences are less than the fourth threshold are joined together to obtain joint frame images (also called islands).

Here, the second step can be understood as selecting, from the similar key frame image set, similar key frame images whose timestamps are close and whose similarity matching scores are close, and joining them together into what is called an island; in this way, the similar key frame image set is divided into multiple joint frame images (i.e., multiple islands). The difference in similarity between the first key frame image and the last key frame image within a joint frame image is very small. The similarity ratio

s(v_t, v_{t_j}) / s(v_t, v_{t-Δt})

can be obtained by determining the similarities s(v_t, v_{t_j}) and s(v_t, v_{t-Δt}), which respectively denote the similarities of two key frame images, one before and one after, with the current frame of the image to be processed.
In the third step, the sum of the similarities between the image features of each key frame image contained in each of the multiple joint frame images and the first image feature is determined.

In the fourth step, the joint frame image with the largest sum of similarities is determined as the target joint frame image with the highest similarity to the image to be processed, and the matching frame image with the highest similarity to the current frame of the image to be processed is found from the target joint frame image.
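A compact sketch of this retrieval flow is given below. It assumes that the similarity s(v_1, v_2) is the L1-based score commonly used with bag-of-words vectors (an assumption; the text does not spell the formula out) and that each keyframe carries a timestamp; all names are illustrative.

```python
import numpy as np

def bow_similarity(v1, v2):
    """L1-based similarity between two ratio vectors (assumed form)."""
    a = v1 / (np.linalg.norm(v1, 1) or 1.0)
    b = v2 / (np.linalg.norm(v2, 1) or 1.0)
    return 1.0 - 0.5 * np.abs(a - b).sum()

def retrieve_matching_frame(v_query, keyframes, sim_thresh, dt_max, ds_max):
    """keyframes: list of (timestamp, ratio_vector, frame_id) tuples."""
    scored = [(t, bow_similarity(v_query, v), fid) for t, v, fid in keyframes]
    similar = [x for x in scored if x[1] >= sim_thresh]
    if not similar:
        return None
    similar.sort(key=lambda x: x[0])

    # Join temporally close, similarly scored frames into islands.
    islands, cur = [], []
    for item in similar:
        if cur and (item[0] - cur[-1][0] > dt_max
                    or abs(item[1] - cur[-1][1]) > ds_max):
            islands.append(cur)
            cur = []
        cur.append(item)
    islands.append(cur)

    # Best island by summed similarity; best frame within that island.
    best_island = max(islands, key=lambda isl: sum(s for _, s, _ in isl))
    return max(best_island, key=lambda x: x[1])[2]
```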
Step S311: Determine, with the PnP algorithm, the current position and acquisition orientation of the image acquisition device in the map coordinate system.

Here, step S311 can be implemented through the following steps.
In the first step, for the N-th feature point F_CN of the current frame X_C of the image to be processed, all feature points of the matching frame image X_3 are traversed, and the Euclidean distances between pairs of feature points are determined. As shown in FIG. 5A, the current frame Xc (51) of the image to be processed matches the matching frame image X3 (52). The Euclidean distance between feature points X0 (53) and X1 (54) is calculated to obtain Euclidean distance F0 (501); the Euclidean distance between feature points X1 (54) and X2 (55) is calculated to obtain Euclidean distance F1 (502); the Euclidean distance between feature points X4 (56) and X3 (52) is calculated to obtain Euclidean distance F2 (503); and the Euclidean distance between feature points Xc (51) and X4 (56) is calculated to obtain Euclidean distance F3 (504).
In the second step, the group with the smallest Euclidean distances is selected and compared against a threshold: if a distance is less than the first threshold, it is determined to be a target Euclidean distance and added to the target Euclidean distance set; otherwise it is not added. The process returns to the first step until all feature points of X_C have been traversed, and then proceeds to the third step. For example, as shown in FIG. 5A, by comparing multiple Euclidean distances, a group of smallest Euclidean distances {F_1, F_2, F_3} is obtained.

In the third step, the target Euclidean distance set, which can be expressed as {F_1, F_2, F_3}, is formed. If the number of elements of the target Euclidean distance set is greater than the fifth threshold, the fourth step is performed; otherwise the algorithm ends and the position information of the matching frame X_3 is output.
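The per-feature nearest-neighbor search with a distance threshold can be sketched as follows. As a simplification, descriptors are treated as float vectors compared with the Euclidean distance, as in the text; dist_thresh and min_matches stand in for the first and fifth thresholds, and the names are illustrative.

```python
import numpy as np

def match_features(desc_current, desc_match, dist_thresh, min_matches):
    """Collect target Euclidean distances between current-frame and
    matching-frame descriptors; return matched index pairs or None."""
    pairs = []
    for i, d in enumerate(desc_current):
        dists = np.linalg.norm(desc_match - d, axis=1)  # to every candidate
        j = int(np.argmin(dists))                       # smallest distance
        if dists[j] < dist_thresh:                      # first threshold
            pairs.append((i, j, dists[j]))
    if len(pairs) > min_matches:                        # fifth threshold
        return pairs                                    # proceed to PnP
    return None   # too few matches: fall back to the matching frame's pose
```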
In the fourth step, based on the target Euclidean distance set, a PnP function is called to solve the position information of X_C in the map coordinate system. The process of the PnP algorithm is as follows.

The input of the PnP algorithm is the 3D coordinates of the feature points in the key frame image and the 2D coordinates of the feature points in the current frame of the image to be processed; the output of the algorithm is the position of the current frame of the image to be processed in the map coordinate system.

The PnP algorithm does not obtain the pose matrix of the image acquisition device directly from the sequence of matched pairs. Instead, it first obtains the 3D coordinates, in the current coordinate system, of the feature points of the key frame image labeled with the identification information of the target area; then, from the 3D coordinates of these feature points in the map coordinate system and their 3D coordinates in the current coordinate system, it solves the rotation vector and translation vector of the current coordinate system relative to the map coordinate system; the acquisition orientation of the image acquisition device is then solved based on the rotation vector, and the position of the image acquisition device based on the translation vector. The solution of the PnP algorithm starts from the law of cosines. Let the center of the current coordinate system be the point O, and let A, B and C be three feature points of the current frame of the image to be processed, as shown in FIG. 5B. First, the relationships among A, B and C are determined according to the law of cosines; then, based on a, b and c, the cosines of the three angles of the triangle abc are determined. Since the 2D coordinates of A, B and C are known, w, v, cos<a,c>, cos<b,c> and cos<a,b> are all known quantities, and on this basis the 3D coordinates of the three feature points A, B and C in the current three-dimensional coordinate system can be obtained. Finally, based on the 3D coordinates of A, B and C in the current three-dimensional coordinate system, the position of the acquisition device is determined through the transformation from the map coordinate system to the current coordinate system.
In the above steps S307 to S311, the fusion positioning part mainly includes coarse positioning using the preset first map and fine positioning based on the visual key frame images. The coarse positioning process determines the approximate position of the user and also determines the local visual map to be loaded. For fine positioning, the current image to be processed is collected through a monocular camera, the preset second map selected by the target area of the coarse positioning is loaded, the bag-of-words model is used to retrieve the corresponding matching frame image, and finally the PnP algorithm is used to solve the precise current pose of the image acquisition device in the map coordinate system, thereby achieving the positioning purpose.

In the embodiments of the present application, an indoor positioning method combining wireless indoor positioning with visual key frame images helps users locate their own position in real time and with high accuracy. The preset first map (for example, a WiFi fingerprint map) is used to coarsely position the image acquisition device to obtain its approximate position, i.e., the target area, and fine positioning is then performed through the preset second map corresponding to the visual key frame images to obtain the precise position and orientation of the image acquisition device. The embodiments of the present application are applicable to large-scale indoor scenes by combining a WiFi fingerprint map with a visual key frame map, with high positioning accuracy and strong robustness.
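Putting the pieces together, the end-to-end flow described above could be sketched as follows, reusing the illustrative helpers introduced earlier (FingerprintMap, GlobalMap, retrieve_matching_frame, match_features, solve_camera_pose). All names, keyframe fields and the cfg parameter object are hypothetical glue, not an API from the original disclosure.

```python
def locate_device(rssi_scan, frame_gray, v_query, fp_map, global_map, K, cfg):
    """Coarse WiFi positioning followed by keyframe-based fine positioning."""
    area_id = fp_map.locate(rssi_scan)                 # coarse: target area
    local_map = global_map.local_map(area_id)          # preset second map

    keyframes = [(kf.t, kf.ratio_vec, kf) for kf in local_map]
    best_kf = retrieve_matching_frame(
        v_query, keyframes, cfg.sim_thresh, cfg.dt_max, cfg.ds_max)

    pts2d, desc = extract_keyframe_features(frame_gray)
    pairs = match_features(desc, best_kf.descriptors,
                           cfg.dist_thresh, cfg.min_matches)
    if pairs is None:
        return best_kf.position, None                  # fall back: frame pose

    obj = best_kf.points_3d[[j for _, j, _ in pairs]]  # map-frame 3D points
    img = pts2d[[i for i, _, _ in pairs]]              # current-frame 2D points
    R, t = solve_camera_pose(obj, img, K)              # PnP fine positioning
    return -R.T @ t, R.T                               # position, orientation
```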
An embodiment of the present application provides a positioning apparatus. The modules included in the apparatus, and the units included in each module, can be implemented by a processor in a computer device, and of course can also be implemented by specific logic circuits. In implementation, the processor may be a central processing unit, a microprocessor, a digital signal processor, a field programmable gate array, or the like.
FIG. 6 is a schematic diagram of the composition structure of the positioning apparatus according to an embodiment of the application. As shown in FIG. 6, the apparatus 600 includes: a first determining module 601, a first search module 602, a second determining module 603, a first extraction module 604, a first matching module 605 and a third determining module 606, where:

the first determining module 601 is configured to determine current network feature information of the current location of the network where the image acquisition device is located;

the first search module 602 is configured to search, from a preset first map, for an area identifier corresponding to the current network feature information;

the second determining module 603 is configured to determine, according to the area identifier, the target area where the image acquisition device is located;

the first extraction module 604 is configured to collect an image to be processed with the image acquisition device and extract the first image feature of the image to be processed;

the first matching module 605 is configured to match, from the image features of the key frame images stored in the preset second map corresponding to the target area, the image feature corresponding to the first image feature, to obtain a second image feature; and

the third determining module 606 is configured to determine the pose information of the image acquisition device according to the second image feature.
In the above apparatus, the apparatus further includes:

a first dividing module, configured to divide the coverage of the current network into multiple areas;

a fourth determining module, configured to determine the network feature information of the multiple wireless access points of the current network in each area; and

a first storage module, configured to store the identification information of each area and the network feature information corresponding to each area as the preset first map, where the identification information of each area is different.

In the above apparatus, the first determining module 601 includes:

a first determining submodule, configured to determine, from the network feature information stored in the preset first map, target feature information that matches the current network feature information; and

a second determining submodule, configured to search for the area identifier corresponding to the current network feature information according to the correspondence, stored in the preset first map, between the network feature information and the identification information of the areas.

In the above apparatus, the apparatus further includes:

a second extraction module, configured to extract the feature point set of the image to be processed;

a fifth determining module, configured to determine the description information of each feature point in the feature point set and the two-dimensional coordinate information of each feature point in the image to be processed; and

a sixth determining module, configured to determine the description information and the two-dimensional coordinate information as the first image feature.
In the above apparatus, the apparatus further includes:

a first selection module, configured to select, from a sample image library, multiple key frame images that meet a preset condition, to obtain a key frame image set;

a first identification module, configured to label each key frame image, in one-to-one correspondence, with the identification information of the area corresponding to that key frame image, to obtain a labeled key frame image set;

a third extraction module, configured to extract the image features of each labeled key frame image, to obtain a key image feature set;

a fourth extraction module, configured to extract feature points of the sample images from the sample image library, to obtain a sample feature point set containing different feature points;

a seventh determining module, configured to determine the ratio that each sample feature point in the sample feature point set occupies in the labeled key frame images, to obtain a ratio vector set; and

a second storage module, configured to store the ratio vector set and the key image feature set, to obtain the global map to which the preset second map belongs.

In the above apparatus, the seventh determining module includes:

a third determining submodule, configured to determine a first average number according to the first number of sample images contained in the sample image library and the first count of times the i-th sample feature point appears in the sample image library, where i is an integer greater than or equal to 1, and the first average number indicates the average number of times the i-th sample feature point appears in each sample image;

a fourth determining submodule, configured to determine a second average number according to the second count of times the i-th sample feature point appears in the j-th key frame image and the second number of sample feature points contained in the j-th key frame image, where j is an integer greater than or equal to 1, and the second average number indicates the proportion of the sample feature points contained in the j-th key frame image that the i-th sample feature point occupies; and

a fifth determining submodule, configured to obtain, according to the first average number and the second average number, the ratio that the sample feature points occupy in the key frame images, to obtain the ratio vector set.

In the above apparatus, the apparatus further includes:

an eighth determining module, configured to determine, from the labeled key frame images stored in the global map, the key frame images labeled with the identification information of the target area; and

a ninth determining module, configured to use the partial global map corresponding to the key frame images labeled with the identification information of the target area as the preset second map.
In the above apparatus, the first matching module 605 includes:

a sixth determining submodule, configured to respectively determine the ratios that the different sample feature points occupy in the feature point set, to obtain a first ratio vector;

a first acquisition submodule, configured to acquire a second ratio vector, where the second ratio vector is the ratio that the multiple sample feature points occupy among the feature points contained in the key frame image; and

a first matching submodule, configured to match, according to the first ratio vector and the second ratio vector, the second image feature corresponding to the first image feature from the image features of the key frame images labeled with the identification information of the target area.

In the above apparatus, the first matching submodule includes:

a first determining unit, configured to determine, according to the first ratio vector and the second ratio vector, from the image features of the key frame images labeled with the identification information of the target area, similar image features whose similarity with the first image feature is greater than a first threshold;

a second determining unit, configured to determine the similar key frame images to which the similar image features belong, to obtain a similar key frame image set; and

a first selection unit, configured to select, from the image features of the similar key frame images, a second image feature whose similarity with the first image feature meets a preset similarity threshold.

In the above apparatus, the first selection unit includes:

a first determining subunit, configured to determine the time differences between the acquisition times of at least two similar key frame images, and the similarity differences between the image features of the at least two similar key frame images and the first image feature, respectively;

a first joining subunit, configured to join the similar key frame images whose time differences are less than a second threshold and whose similarity differences are less than a third threshold, to obtain joint frame images; and

a first selection subunit, configured to select, from the image features of the joint frame images, a second image feature whose similarity with the first image feature meets the preset similarity threshold.

In the above apparatus, the first selection subunit is configured to: respectively determine the sum of the similarities between the image features of each key frame image contained in the multiple joint frame images and the first image feature; determine the joint frame image with the largest sum of similarities as the target joint frame image with the highest similarity to the image to be processed; and select, according to the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed, from the image features of the target joint frame image, a second image feature whose similarity with the first image feature meets the preset similarity threshold.
In the above apparatus, the apparatus further includes:

a tenth determining module, configured to determine the image containing the second image feature as the matching frame image of the image to be processed; and

an eleventh determining module, configured to determine the target Euclidean distances, less than a fourth threshold, between any two feature points contained in the matching frame image, to obtain a target Euclidean distance set;

correspondingly, a seventh determining submodule, configured to determine the pose information of the image acquisition device according to the second image feature if the number of target Euclidean distances contained in the target Euclidean distance set is greater than a fifth threshold.

In the above apparatus, the seventh determining submodule includes:

a third determining unit, configured to determine the map coordinates, in the map coordinate system corresponding to the preset second map, of the feature points of the key frame image corresponding to the second image feature;

a fourth determining unit, configured to determine the current coordinates, in the current coordinate system where the image acquisition device is located, of the feature points of the key frame image corresponding to the second image feature;

a fifth determining unit, configured to determine, according to the map coordinates and the current coordinates, the conversion relationship of the current coordinate system relative to the map coordinate system; and

a sixth determining unit, configured to determine, according to the conversion relationship and the current coordinates of the image acquisition device in the current coordinate system, the position of the image acquisition device in the map coordinate system and the acquisition orientation of the image acquisition device relative to the map coordinate system.
The description of the above apparatus embodiments is similar to the description of the above method embodiments, and they have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, please refer to the description of the method embodiments of the present application.

It should be noted that, in the embodiments of the present application, if the above positioning method is implemented in the form of software function modules and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a device containing the storage medium to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc.

Correspondingly, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the positioning method provided in the above embodiments are implemented.

It should be pointed out here that the descriptions of the above storage medium and device embodiments are similar to the description of the above method embodiments, and they have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and device embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be understood that references throughout the specification to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, the appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. In addition, these particular features, structures or characteristics can be combined in one or more embodiments in any suitable manner. It should be understood that, in the various embodiments of the present application, the magnitudes of the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The sequence numbers of the above embodiments of the present application are for description only, and do not represent the superiority or inferiority of the embodiments.

It should be noted that, herein, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that includes the element.

In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.

The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.

In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.

Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are executed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disc.

Alternatively, if the above integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a device to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.

The above is only an implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in the present application, which shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial applicability

In the embodiments of the present application, the target area where the image acquisition device is located is determined according to the network feature information of the current network where the image acquisition device configured to collect the image to be processed is located and a preset first map; a second image feature is matched, according to the first image feature of the image to be processed, from the image features of the key frame images stored in the preset second map corresponding to the target area; and the pose information of the image acquisition device is determined according to the second image feature.

Claims (20)

1. A positioning method, wherein the method comprises:

determining current network feature information of an image acquisition device in a current network;

searching, from a preset first map, for an area identifier corresponding to the network feature information;

determining, according to the area identifier, a target area where the image acquisition device is located;

collecting an image to be processed through the image acquisition device, and extracting a first image feature of the image to be processed;

matching an image feature corresponding to the first image feature from image features of key frame images stored in a preset second map corresponding to the target area, to obtain a second image feature; and

determining pose information of the image acquisition device according to the second image feature.
2. The method according to claim 1, wherein, before the searching, from the preset first map, for the area identifier corresponding to the current network feature information, the method further comprises:

dividing a coverage range of the current network into multiple areas;

determining network feature information of multiple wireless access points of the current network in each area; and

storing identification information of each area and the network feature information corresponding to each area as the preset first map, wherein the identification information of each area is different.
3. The method according to claim 1 or 2, wherein the searching, from the preset first map, for the area identifier corresponding to the current network feature information comprises:

determining, from the network feature information stored in the preset first map, target feature information that matches the current network feature information; and

searching for the area identifier corresponding to the current network feature information according to a correspondence, stored in the preset first map, between the network feature information and the identification information of the areas.
4. The method according to claim 1, wherein, before the matching the image feature corresponding to the first image feature from the image features of the key frame images stored in the preset second map corresponding to the target area to obtain the second image feature, the method further comprises:

extracting a feature point set of the image to be processed;

determining description information of each feature point in the feature point set and two-dimensional coordinate information of each feature point in the image to be processed; and

determining the description information and the two-dimensional coordinate information as the first image feature.
5. The method according to claim 1, wherein, before the matching the image feature corresponding to the first image feature from the image features of the key frame images stored in the preset second map corresponding to the target area to obtain the second image feature, the method further comprises:

selecting, from a sample image library, multiple key frame images that meet a preset condition, to obtain a key frame image set;

labeling each key frame image in the key frame image set with identification information of the area corresponding to that key frame image, to obtain a labeled key frame image set;

extracting image features of each labeled key frame image, to obtain a key image feature set;

extracting feature points of sample images from the sample image library, to obtain a sample feature point set containing different feature points;

determining a ratio that each sample feature point in the sample feature point set occupies in the labeled key frame images, to obtain a ratio vector set; and

storing the ratio vector set and the key image feature set, to obtain a global map of the preset second map.
6. The method according to claim 5, wherein the determining the ratio of each sample feature point in the sample feature point set among the labeled key frame images to obtain the ratio vector set comprises:
    determining a first average number according to a first number of sample images contained in the sample image library and a first number of times an i-th sample feature point appears in the sample image library, where i is an integer greater than or equal to 1, and the first average number indicates the average number of times the i-th sample feature point appears in each sample image;
    determining a second average number according to a second number of times the i-th sample feature point appears in a j-th key frame image and a second number of sample feature points contained in the j-th key frame image, where j is an integer greater than or equal to 1, and the second average number indicates the proportion of the i-th sample feature point among the sample feature points contained in the j-th key frame image;
    obtaining, according to the first average number and the second average number, the ratio of each sample feature point in the key frame images, thereby obtaining the ratio vector set.
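One natural reading of claim 6 is the TF-IDF weighting familiar from bag-of-words image retrieval: the first average number plays the role of an inverse document frequency over the sample image library, and the second average number is a term frequency within one key frame. The sketch below encodes that reading; it is an interpretation, not the patent's exact formula.

```python
import math
from collections import Counter

def ratio_vectors(keyframe_words, library_counts, num_sample_images):
    """Compute one ratio vector per key frame (claim 6, read as TF-IDF).
    keyframe_words: for each key frame, the list of visual-word IDs seen in it.
    library_counts: Counter of occurrences of each word in the sample library
    (every word in the key frames is assumed to appear in the library)."""
    vectors = []
    for words_in_frame in keyframe_words:
        tf = Counter(words_in_frame)
        total = len(words_in_frame)  # second number: feature points in frame j
        vec = {}
        for word, count in tf.items():
            # first average number, used as an inverse document frequency
            idf = math.log(num_sample_images / library_counts[word])
            vec[word] = (count / total) * idf  # second average x first average
        vectors.append(vec)
    return vectors
```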
7. The method according to claim 5, wherein after the determining, according to the area identifier, the target area where the image acquisition device is located, the method further comprises:
    determining, from the labeled key frame images stored in the global map, the key frame images labeled with the identification information of the target area;
    using the part of the global map corresponding to the key frame images labeled with the identification information of the target area as the preset second map.
8. The method according to any one of claims 5 to 7, wherein the matching, from the image features of the key frame images stored in the preset second map corresponding to the target area, an image feature corresponding to the first image feature to obtain a second image feature comprises:
    respectively determining the ratios of the different sample feature points in the feature point set to obtain a first ratio vector;
    acquiring a second ratio vector, the second ratio vector being the ratios of the plurality of sample feature points among the feature points contained in the key frame images;
    matching, according to the first ratio vector and the second ratio vector, a second image feature corresponding to the first image feature from the image features of the key frame images labeled with the identification information of the target area.
9. The method according to claim 8, wherein the matching, according to the first ratio vector and the second ratio vector, a second image feature corresponding to the first image feature from the image features of the key frame images labeled with the identification information of the target area comprises:
    determining, according to the first ratio vector and the second ratio vector, similar image features whose similarity with the first image feature is greater than a first threshold, from the image features of the key frame images labeled with the identification information of the target area;
    determining the similar key frame images to which the similar image features belong to obtain a similar key frame image set;
    selecting, from the image features of the similar key frame images, a second image feature whose similarity with the first image feature satisfies a preset similarity threshold.
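Claims 8 and 9 score candidate key frames by comparing ratio vectors. A minimal sketch follows, assuming cosine similarity as the (unspecified) similarity measure and sparse dict-based vectors; the 0.3 threshold is illustrative only.

```python
import math

def cosine_similarity(query_vec: dict, frame_vec: dict) -> float:
    """Similarity between the first ratio vector (query image) and a second
    ratio vector (one key frame). Cosine is an assumption; the claims only
    require a similarity greater than the first threshold."""
    dot = sum(w * frame_vec.get(k, 0.0) for k, w in query_vec.items())
    nq = math.sqrt(sum(w * w for w in query_vec.values()))
    nf = math.sqrt(sum(w * w for w in frame_vec.values()))
    return dot / (nq * nf) if nq and nf else 0.0

def similar_keyframes(query_vec, frame_vecs, first_threshold=0.3):
    """Claim 9, first step: keep the key frames whose similarity with the
    first image feature exceeds the first threshold."""
    return [(j, s) for j, fv in enumerate(frame_vecs)
            if (s := cosine_similarity(query_vec, fv)) > first_threshold]
```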
10. The method according to claim 9, wherein the selecting, from the image features of the similar key frame images, a second image feature whose similarity with the first image feature satisfies the preset similarity threshold comprises:
    determining the time difference between the acquisition times of at least two of the similar key frame images, and the similarity differences between the image features of the at least two similar key frame images and the first image feature, respectively;
    combining similar key frame images whose time difference is less than a second threshold and whose similarity difference is less than a third threshold to obtain joint frame images;
    selecting, from the image features of the joint frame images, a second image feature whose similarity with the first image feature satisfies the preset similarity threshold.
11. The method according to claim 10, wherein the selecting, from the image features of the joint frame images, a second image feature whose similarity with the first image feature satisfies the preset similarity threshold comprises:
    respectively determining, for each of the plurality of joint frame images, the sum of the similarities between the image features of the key frame images contained in that joint frame image and the first image feature;
    determining the joint frame image with the largest sum of similarities as the target joint frame image with the highest similarity to the image to be processed;
    selecting, according to the description information of the feature points of the target joint frame image and the description information of the feature points of the image to be processed, a second image feature whose similarity with the first image feature satisfies the preset similarity threshold, from the image features of the target joint frame image.
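Claims 10 and 11 merge temporally adjacent, similarly scoring key frames into joint frames and keep the best-scoring group. The sketch below uses assumed threshold values, since the patent leaves the second and third thresholds open.

```python
def best_joint_frame(candidates, second_threshold=1.5, third_threshold=0.05):
    """candidates: non-empty list of (frame_id, acquisition_time_s, similarity).
    Consecutive frames whose time difference is below the second threshold and
    whose similarity difference is below the third threshold are joined
    (claim 10); the joint frame with the largest similarity sum is the target
    joint frame image (claim 11). Thresholds are illustrative only."""
    candidates = sorted(candidates, key=lambda c: c[1])  # order by capture time
    groups, current = [], [candidates[0]]
    for prev, cur in zip(candidates, candidates[1:]):
        if (cur[1] - prev[1] < second_threshold
                and abs(cur[2] - prev[2]) < third_threshold):
            current.append(cur)          # same joint frame image
        else:
            groups.append(current)       # close this joint frame, start a new one
            current = [cur]
    groups.append(current)
    return max(groups, key=lambda g: sum(c[2] for c in g))
```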
12. The method according to claim 1 or 2, wherein before the determining the pose information of the image acquisition device according to the second image feature, the method further comprises:
    determining an image containing the second image feature as a matching frame image of the image to be processed;
    determining target Euclidean distances, each less than a fourth threshold, between any two feature points contained in the matching frame image, to obtain a target Euclidean distance set;
    correspondingly, if the number of target Euclidean distances contained in the target Euclidean distance set is greater than a fifth threshold, determining the pose information of the image acquisition device according to the second image feature.
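Claim 12's gate can be read as a consistency check on the matching frame image: count the pairs of feature points that lie close together and only run pose estimation when enough such pairs survive. A NumPy sketch with illustrative thresholds:

```python
import numpy as np

def enough_consistent_points(points: np.ndarray,
                             fourth_threshold: float = 50.0,
                             fifth_threshold: int = 30) -> bool:
    """points: (N, 2) array of feature-point coordinates in the matching
    frame image. Pairwise Euclidean distances below the fourth threshold are
    the target Euclidean distances; pose estimation proceeds only when their
    count exceeds the fifth threshold. Threshold values are assumptions."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(points), k=1)        # each unordered pair once
    target = dists[iu][dists[iu] < fourth_threshold]
    return target.size > fifth_threshold
```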
13. The method according to claim 12, wherein the determining the pose information of the image acquisition device according to the second image feature comprises:
    determining the map coordinates, in a map coordinate system corresponding to the preset second map, of the feature points of the key frame image corresponding to the second image feature;
    determining the current coordinates, in a current coordinate system in which the image acquisition device is located, of the feature points of the key frame image corresponding to the second image feature;
    determining a conversion relationship of the current coordinate system relative to the map coordinate system according to the map coordinates and the current coordinates;
    determining, according to the conversion relationship and the current coordinates of the image acquisition device in the current coordinate system, the position of the image acquisition device in the map coordinate system and the acquisition orientation of the image acquisition device relative to the map coordinate system.
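Claim 13 solves a conversion relationship between the device's current coordinate system and the map coordinate system from matched feature-point coordinates. One standard solver for such a rigid alignment is the SVD-based Kabsch/Umeyama method sketched below; the patent does not commit to this particular solver, so treat it as an assumption.

```python
import numpy as np

def solve_conversion(map_pts: np.ndarray, cur_pts: np.ndarray):
    """Estimate the rigid transform (R, t) taking current-coordinate-system
    points onto map coordinates, from (N, 3) arrays of corresponding points,
    via the Kabsch/SVD method."""
    mu_m, mu_c = map_pts.mean(axis=0), cur_pts.mean(axis=0)
    H = (cur_pts - mu_c).T @ (map_pts - mu_m)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T                              # rotation: current -> map
    t = mu_m - R @ mu_c                             # translation
    return R, t

# If the current coordinate system is centered on the device, its position in
# the map coordinate system is t, and R gives its acquisition orientation.
```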
14. A positioning device, wherein the device comprises: a first determining module, a first search module, a second determining module, a first extraction module, a first matching module, and a third determining module, wherein:
    the first determining module is configured to determine current network feature information of the current location of the network where the image acquisition device is located;
    the first search module is configured to search a preset first map for an area identifier corresponding to the current network feature information;
    the second determining module is configured to determine, according to the area identifier, the target area where the image acquisition device is located;
    the first extraction module is configured to collect an image to be processed with the image acquisition device and extract a first image feature of the image to be processed;
    the first matching module is configured to match, from the image features of the key frame images stored in a preset second map corresponding to the target area, an image feature corresponding to the first image feature to obtain a second image feature;
    the third determining module is configured to determine pose information of the image acquisition device according to the second image feature.
15. The device according to claim 14, wherein the device further comprises:
    a first division module configured to divide the coverage of the current network into a plurality of areas;
    a fourth determining module configured to determine network feature information of a plurality of wireless access points of the current network within each area;
    a first storage module configured to store the identification information of each area and the network feature information corresponding to each area as the preset first map, wherein the identification information of each area is different.
16. The device according to claim 14 or 15, wherein the first determining module comprises:
    a first determining submodule configured to determine, from the network feature information stored in the preset first map, target feature information that matches the current network feature information;
    a second determining submodule configured to search, according to the correspondence between the network feature information and the identification information of the areas stored in the preset first map, for the area identifier corresponding to the current network feature information.
17. The device according to claim 14, wherein the device further comprises:
    a second extraction module configured to extract a feature point set of the image to be processed;
    a fifth determining module configured to determine description information of each feature point in the feature point set and two-dimensional coordinate information of each feature point in the image to be processed;
    a sixth determining module configured to determine the description information and the two-dimensional coordinate information as the first image feature.
18. The device according to claim 14, wherein the device further comprises:
    a first selection module configured to select, from a sample image library, a plurality of key frame images satisfying a preset condition to obtain a key frame image set;
    a first labeling module configured to label each key frame image, in one-to-one correspondence, with the identification information of the area corresponding to that key frame image, to obtain a labeled key frame image set;
    a third extraction module configured to extract image features of each labeled key frame image to obtain a key image feature set;
    a fourth extraction module configured to extract feature points of the sample images from the sample image library to obtain a sample feature point set containing different feature points;
    a seventh determining module configured to determine the ratio of each sample feature point in the sample feature point set among the labeled key frame images to obtain a ratio vector set;
    a second storage module configured to store the ratio vector set and the key image feature set to obtain a global map to which the preset second map belongs.
19. A terminal comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 13.
20. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 13.
PCT/CN2020/117156 2019-09-27 2020-09-23 Positioning method and apparatus, terminal and storage medium WO2021057797A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910922471.0 2019-09-27
CN201910922471.0A CN110645986B (en) 2019-09-27 2019-09-27 Positioning method and device, terminal and storage medium

Publications (1)

Publication Number Publication Date
WO2021057797A1 true

Family

ID=69011607

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117156 WO2021057797A1 (en) 2019-09-27 2020-09-23 Positioning method and apparatus, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN110645986B (en)
WO (1) WO2021057797A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110645986B (en) * 2019-09-27 2023-07-14 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium
CN111447553B (en) * 2020-03-26 2021-10-15 云南电网有限责任公司电力科学研究院 WIFI-based visual enhancement SLAM method and device
CN111511017B (en) * 2020-04-09 2022-08-16 Oppo广东移动通信有限公司 Positioning method and device, equipment and storage medium
CN111506687B (en) * 2020-04-09 2023-08-08 北京华捷艾米科技有限公司 Map point data extraction method, device, storage medium and equipment
CN111680596B (en) * 2020-05-29 2023-10-13 北京百度网讯科技有限公司 Positioning true value verification method, device, equipment and medium based on deep learning
CN111623783A (en) * 2020-06-30 2020-09-04 杭州海康机器人技术有限公司 Initial positioning method, visual navigation equipment and warehousing system
CN112362047A (en) * 2020-11-26 2021-02-12 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN112529887B (en) * 2020-12-18 2024-02-23 广东赛诺科技股份有限公司 Lazy loading method and system based on GIS map data
CN112509053B (en) * 2021-02-07 2021-06-04 深圳市智绘科技有限公司 Robot pose acquisition method and device and electronic equipment
CN113063424B (en) * 2021-03-29 2023-03-24 湖南国科微电子股份有限公司 Method, device, equipment and storage medium for intra-market navigation
CN113259883B (en) * 2021-05-18 2023-01-31 南京邮电大学 Multi-source information fusion indoor positioning method for mobile phone user
CN113657164B (en) * 2021-07-15 2024-07-02 美智纵横科技有限责任公司 Method, device, cleaning device and storage medium for calibrating target object
CN113808269A (en) * 2021-09-23 2021-12-17 视辰信息科技(上海)有限公司 Map generation method, positioning method, system and computer readable storage medium
CN114427863A (en) * 2022-04-01 2022-05-03 天津天瞳威势电子科技有限公司 Vehicle positioning method and system, automatic parking method and system, and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311733B2 (en) * 2005-02-15 2012-11-13 The Invention Science Fund I, Llc Interactive key frame image mapping system and method
JP4564564B2 (en) * 2008-12-22 2010-10-20 株式会社東芝 Moving picture reproducing apparatus, moving picture reproducing method, and moving picture reproducing program
US9297881B2 (en) * 2011-11-14 2016-03-29 Microsoft Technology Licensing, Llc Device positioning via device-sensed data evaluation
US20150092048A1 (en) * 2013-09-27 2015-04-02 Qualcomm Incorporated Off-Target Tracking Using Feature Aiding in the Context of Inertial Navigation
CN106934339B (en) * 2017-01-19 2021-06-11 上海博康智能信息技术有限公司 Target tracking and tracking target identification feature extraction method and device
CN108764297B (en) * 2018-04-28 2020-10-30 北京猎户星空科技有限公司 Method and device for determining position of movable equipment and electronic equipment
CN109086350B (en) * 2018-07-13 2021-07-30 哈尔滨工业大学 Mixed image retrieval method based on WiFi
CN109579856A (en) * 2018-10-31 2019-04-05 百度在线网络技术(北京)有限公司 Accurately drawing generating method, device, equipment and computer readable storage medium
CN109658445A (en) * 2018-12-14 2019-04-19 北京旷视科技有限公司 Network training method, increment build drawing method, localization method, device and equipment
CN109948525A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 It takes pictures processing method, device, mobile terminal and storage medium
CN109993113B (en) * 2019-03-29 2023-05-02 东北大学 Pose estimation method based on RGB-D and IMU information fusion

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104661300A (en) * 2013-11-22 2015-05-27 高德软件有限公司 Positioning method, device, system and mobile terminal
CN104936283A (en) * 2014-03-21 2015-09-23 中国电信股份有限公司 Indoor positioning method, server and system
US20150371102A1 (en) * 2014-06-18 2015-12-24 Delta Electronics, Inc. Method for recognizing and locating object
CN105372628A (en) * 2015-11-19 2016-03-02 上海雅丰信息科技有限公司 Wi-Fi-based indoor positioning navigation method
CN105974357A (en) * 2016-04-29 2016-09-28 北京小米移动软件有限公司 Method and device for positioning terminal
CN105828296A (en) * 2016-05-25 2016-08-03 武汉域讯科技有限公司 Indoor positioning method based on convergence of image matching and WI-FI
CN108495259A (en) * 2018-03-26 2018-09-04 上海工程技术大学 A kind of gradual indoor positioning server and localization method
CN110645986A (en) * 2019-09-27 2020-01-03 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140674A (en) * 2021-10-20 2022-03-04 郑州信大先进技术研究院 Electronic evidence usability identification method combining image processing and data mining technology
CN114140674B (en) * 2021-10-20 2024-04-16 郑州信大先进技术研究院 Electronic evidence availability identification method combined with image processing and data mining technology

Also Published As

Publication number Publication date
CN110645986A (en) 2020-01-03
CN110645986B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
WO2021057797A1 (en) Positioning method and apparatus, terminal and storage medium
CN109947975B (en) Image search device, image search method, and setting screen used therein
RU2608261C2 (en) Automatic tag generation based on image content
CN107833280B (en) Outdoor mobile augmented reality method based on combination of geographic grids and image recognition
CN107133325B (en) Internet photo geographic space positioning method based on street view map
US9489402B2 (en) Method and system for generating a pictorial reference database using geographical information
Liu et al. Finding perfect rendezvous on the go: accurate mobile visual localization and its applications to routing
WO2020259360A1 (en) Locating method and device, terminal, and storage medium
WO2020259361A1 (en) Map update method and apparatus, and terminal and storage medium
CN101300588A (en) Determining a particular person from a collection
US20230351794A1 (en) Pedestrian tracking method and device, and computer-readable storage medium
US20070070217A1 (en) Image analysis apparatus and image analysis program storage medium
EP2711890A1 (en) Information providing device, information providing method, information providing processing program, recording medium recording information providing processing program, and information providing system
US9288636B2 (en) Feature selection for image based location determination
KR100489890B1 (en) Apparatus and Method to Provide Stereo Video or/and Detailed Information of Geographic Objects
CN104484814A (en) Advertising method and system based on video map
CN104486585A (en) Method and system for managing urban mass surveillance video based on GIS
JPWO2011136341A1 (en) Information providing apparatus, information providing method, information providing processing program, and recording medium on which information providing processing program is recorded
Revaud et al. Did it change? learning to detect point-of-interest changes for proactive map updates
CN105740777B (en) Information processing method and device
Park et al. Estimating the camera direction of a geotagged image using reference images
US20150379040A1 (en) Generating automated tours of geographic-location related features
Liu et al. Robust and accurate mobile visual localization and its applications
US20150134689A1 (en) Image based location determination
Zhang et al. Camera shooting location recommendations for landmarks in geo-space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20868466
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20868466
    Country of ref document: EP
    Kind code of ref document: A1