WO2021103027A1 - Base station positioning based on convolutional neural networks - Google Patents

Base station positioning based on convolutional neural networks

Info

Publication number
WO2021103027A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
recall
grid
feature information
cnn
Application number
PCT/CN2019/122273
Other languages
English (en)
Inventor
Yu Lin
Buyi YIN
Zhaoyang FENG
Juhua Chen
Weihuan SHU
Original Assignee
Beijing Didi Infinity Technology And Development Co., Ltd.
Application filed by Beijing Didi Infinity Technology And Development Co., Ltd. filed Critical Beijing Didi Infinity Technology And Development Co., Ltd.
Priority to PCT/CN2019/122273
Publication of WO2021103027A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S 5/0252 Radio frequency fingerprinting
    • G01S 5/02521 Radio frequency fingerprinting using a radio-map
    • G01S 5/02523 Details of interaction of receiver with radio-map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Definitions

  • The present disclosure generally relates to systems and methods for positioning services, and in particular, to systems and methods for base station positioning based on convolutional neural networks (CNNs) .
  • Positioning services are becoming more and more important with the popularity of location-based services, and the requirements for positioning accuracy are getting higher and higher. For example, for online taxi platforms, the driver would need to know the location of the passengers before picking them up.
  • positioning technologies include the Global Positioning System (GPS) and the Network Localization Service (NLP) . The NLP includes WiFi positioning and base station (that is, cellular network) positioning; base station positioning may be used when WiFi positioning is unavailable or inaccurate.
  • Base station positioning is an indispensable component of positioning services. Early studies of base station positioning were mostly focused on hardware-dependent technologies, which cannot be applied on a large scale. In recent years, the ever-increasing density of base stations offers the possibility of applying fingerprint-based positioning techniques.
  • the fingerprint-based positioning technology is an empirical method that matches the fingerprint information collected on the device in real time with a fingerprint database collected offline to identify the device location. With the maturity of machine learning algorithms, any information collected can be fully utilized as features through the powerful learning ability of machine learning, and fingerprint-based positioning technology is further improved as a result. To build the fingerprint database collected offline, the entire geographic space is divided into a large number of very small geographic grids.
  • the grid closest to the real location is obtained as the location of the user or terminal through the classic recall-sort-smooth machine-learning framework, and the positioning accuracy is improved.
  • this so-called Geo-block Ranking method has limitations. For example, it cannot describe the local correlation of grids in space. Further, the addition of the smoothing process leads to an inconsistency between the optimization goal and the positioning goal. Hence, it is desired to improve the current base station positioning method to increase its efficiency and accuracy.
  • Embodiments of the disclosure address the above problems by providing a convolutional neural network (CNN) -based positioning method, which differs from the current method by modeling the positioning problem as object detection in geographic space, and directly predicting the position information by using improved deep CNNs.
  • Embodiments of the disclosure provide a computer-implemented method for base station positioning based on a convolutional neural network.
  • An exemplary computer-implemented method includes acquiring, by a positioning server, feature information that is received from one or more base stations at different locations in the area of interest; generating, by the positioning server, a feature input that includes a plurality of feature maps based on the feature information; training, by the positioning server, a convolutional neural network (CNN) based on the feature input; and determining, by the positioning server, a position of a terminal device using the trained CNN.
  • the computer-implemented method further includes dividing the area of interest into a number of grids to obtain a geographic grid set; generating a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a number of recall grids; and, for each recall grid, collecting feature information received from the one or more base stations.
  • the computer-implemented method further includes determining a center grid of the geographic grid set; and recalling a number of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled number of grids.
  • the computer-implemented method further includes for each recall grid, collecting feature information received from the one or more base stations; for each recall grid, determining a feature value associated with each piece of collected feature information; and generating a feature input that includes a number of feature maps, wherein each feature map is represented by a matrix that is formed by the feature value corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information.
  • the number of feature maps generated by the positioning server is determined by the number of types of feature information collected by the positioning server.
  • the computer-implemented method further includes inputting the number of feature maps into the CNN; and outputting a bias value pair from the CNN, wherein the bias value pair is a latitude offset and a longitude offset relative to the center grid.
  • the computer-implemented method further includes minimizing a loss function representing a distance between the ground truth point and a prediction point.
  • the computer-implemented method further includes determining a level of confidence of a positioning result based on the sparseness of the feature map.
  • the computer-implemented method further includes acquiring feature information of a terminal device; and predicting a position of the terminal device based on the feature information using the trained CNN.
  • Embodiments of the disclosure provide systems and methods for base station positioning based on a convolutional neural network.
  • An exemplary system includes a communication interface, a memory and a processor.
  • the processor is configured to acquire feature information that is received from one or more base stations at different locations in the area of interest, generate a feature input that includes a number of feature maps based on the feature information, train a convolutional neural network (CNN) based on the number of feature maps; and determine a position of a terminal device using the trained CNN.
  • Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform a computer-implemented method for base station positioning based on a convolutional neural network.
  • An exemplary computer-implemented method includes acquiring feature information that is received from one or more base stations at different locations in the area of interest; generating a feature input that includes a number of feature maps based on the feature information; training a convolutional neural network (CNN) based on the number of feature maps; and determining a position of a terminal device using the trained CNN.
  • FIG. 1 is a schematic diagram illustrating an exemplary system for base station positioning, according to some embodiments of the disclosure.
  • FIG. 2 is a block diagram of an exemplary system for base station positioning, according to some embodiments of the disclosure.
  • FIG. 3 illustrates an exemplary feature map that carries the feature information and is used to train the CNN, according to some embodiments of the disclosure.
  • FIG. 4 illustrates an exemplary process of building a feature map, according to some embodiments of the disclosure.
  • FIG. 5 illustrates an exemplary feature map for one test data displaying the feature value of a type of feature information, according to some embodiments of the disclosure.
  • FIG. 6 illustrates an exemplary CNN that is trained by the feature map according to some embodiments of the disclosure.
  • FIG. 7 is a flowchart of an exemplary process for base station positioning using a trained CNN, according to some embodiments of the disclosure.
  • FIG. 8 is a flowchart of an exemplary process for positioning a terminal device using a neural network model, according to some embodiments of the disclosure.
  • the flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts may or may not be implemented in order. Conversely, the operations may be implemented in inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
  • while the system and method in the present disclosure are described primarily in regard to image classification, it should also be understood that this is only one exemplary embodiment.
  • the system or method of the present disclosure may be applied to any other kind of deep learning tasks.
  • FIG. 1 is a schematic diagram illustrating an exemplary system for base station positioning, according to some embodiments of the disclosure.
  • Terminal devices 102 may include any electronic device that can scan access points (APs) 104 and communicate with one or more components included in system 100.
  • terminal devices 102 may include a smart phone, a laptop, a tablet, a wearable device, a drone, or the like.
  • terminal devices 102 may scan nearby APs 104.
  • APs 104 may include devices that transmit signals for communication with terminal devices.
  • APs 104 may include WiFi APs, a base station, Bluetooth APs, or the like.
  • multiple nearby base stations may be scanned by the terminal device 102.
  • the terminal device 102 may be connected to one of the scanned base stations, which is referred to as the main station.
  • Other scanned base station (s) may be referred to as neighboring base stations.
  • the terminal device 102 may receive feature information from the main station and the neighboring base station (s) .
  • the number of scanned neighboring base stations may be 3, 5, 6, 7, or the like.
  • the request query may indicate whether each of the scanned base station (s) is a main base station or a neighboring station.
  • each terminal device 102 may receive feature information from APs 104 and generate a fingerprint.
  • the fingerprint stores feature information, such as identifications (e.g., Cell-Id of a base station) , Received Signal Strength Indication (RSSI) , Round Trip Time (RTT) , or the like, received from different APs 104 at different locations in the area of interest. This is usually constructed once in an offline phase. In some embodiments, constructing the fingerprint may be done in a process called war driving, wherein cars drive the area of interest continuously scanning for cell towers and recording the cell tower ID, RSSI, and GPS location.
  • Positioning server 106 may be an internal server of system 100 or an external server. Positioning server 106 may be associated with a database 108 that stores fingerprints that have been acquired at various reference positions.
  • the database 108 is configured to store feature information collected at every preselected reference position, along with its location information. The location information of the reference position must be stored together with corresponding feature information so as to be able to locate the position.
  • the information stored in the database 108 is used for comparison with the fingerprint of the terminal device 102, to search out the information in the database that has the highest similarity.
  • the information in the database that has been searched out includes location information, which is retrieved and provided as the position of the terminal device 102. For example, during the online phase, the feature information received at an unknown location is compared with the fingerprints stored in the database 108, and the closest location in the fingerprint database is returned as the estimated location.
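  • the online lookup described above can be sketched as follows. This is a minimal illustration only: the similarity measure (negative Euclidean distance over RSSI values) and the default penalty for cells missing from a fingerprint are assumptions of this sketch, not the disclosure's exact method.

```python
# Minimal sketch of online fingerprint matching: compare the RSSI vector
# reported by the terminal against reference fingerprints and return the
# location of the most similar entry. Missing cells are penalized with a
# default weak signal (an assumption of this sketch).
DEFAULT_RSSI = -120  # dBm, used when a cell is absent from a fingerprint

def similarity(query, reference):
    """Negative Euclidean distance over the union of observed cell IDs."""
    cells = set(query) | set(reference)
    dist = sum(
        (query.get(c, DEFAULT_RSSI) - reference.get(c, DEFAULT_RSSI)) ** 2
        for c in cells
    ) ** 0.5
    return -dist

def match_fingerprint(query, database):
    """database: list of (rssi_dict, (lat, lon)) reference entries."""
    best_entry = max(database, key=lambda entry: similarity(query, entry[0]))
    return best_entry[1]

db = [
    ({"cell_a": -60, "cell_b": -80}, (39.90, 116.40)),
    ({"cell_a": -90, "cell_c": -70}, (39.91, 116.41)),
]
print(match_fingerprint({"cell_a": -62, "cell_b": -78}, db))  # closest: first entry
```

The CNN-based method of the disclosure replaces this nearest-fingerprint lookup with a learned model, but the offline database construction is shared.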
  • system 100 may train a neural network model based on the feature information associated with existing devices in a training stage, and position a terminal device based on predicted positions associated with the terminal device using the neural network model in a positioning stage.
  • the neural network model is a convolutional neural network (CNN) model.
  • CNN is a type of machine learning algorithm that can be trained by supervised learning.
  • the architecture of a CNN model includes a stack of distinct layers that transform the input into the output. Examples of the different layers may include one or more convolutional layers, pooling or subsampling layers, fully connected layers, and/or final loss layers. Each layer may connect with at least one upstream layer and at least one downstream layer.
  • the input may be considered as an input layer, and the output may be considered as the final output layer.
  • CNN models with a large number of intermediate layers are referred to as deep CNN models.
  • some deep CNN models may include more than 20 to 30 layers, and other deep CNN models may even include more than a few hundred layers.
  • Examples of deep CNN models include AlexNet, VGGNet, GoogLeNet, ResNet, etc.
  • Embodiments of the disclosure employ the powerful learning capabilities of CNN models, and particularly deep CNN models, for positioning a terminal device based on feature information of APs scanned by the terminal device.
  • a CNN model used by embodiments of the disclosure may refer to any neural network model formulated, adapted, or modified based on a framework of convolutional neural network.
  • a CNN model according to embodiments of the disclosure may selectively include intermediate layers between the input and output layers, such as one or more deconvolution layers, and/or up-sampling or up-pooling layers.
  • training a CNN model refers to determining one or more parameters of at least one layer in the CNN model.
  • a convolutional layer of a CNN model may include at least one filter or kernel.
  • One or more parameters, such as kernel weights, size, shape, and structure, of the at least one filter may be determined by e.g., a backpropagation-based training process.
  • the training process uses at least one set of training parameters.
  • Each set of training parameters may include a set of feature signals and a supervised signal.
  • the feature signals may include feature information associated with the APs 104 that are scanned by a terminal device
  • the supervised signal may include a true position of the terminal device.
  • a terminal device may be positioned accurately by the trained CNN model based on feature information of the APs 104 scanned by the terminal device.
  • FIG. 2 is a block diagram of an exemplary system 200 for base station positioning, according to some embodiments of the disclosure.
  • system 200 may include a communication interface 202, a processor 204 that includes a feature information receiving unit 206, a feature map generation unit 208, a model generation unit 210, a position determination unit 212, and a memory 214.
  • System 200 may include the above-mentioned components to perform the training stage.
  • system 200 may include more or fewer components than shown in FIG. 2. For example, when a neural network model for positioning is pre-trained and provided, the system 200 may not include the feature map generation unit 208 and the model generation unit 210.
  • the above components can be functional hardware units (e.g., portions of an integrated circuit) designed for use with other components or a part of a program (stored on a computer readable medium) that performs a particular function.
  • Communication interface 202 is in communication with terminal device 102 and processor 204.
  • the processor 204 may be configured to acquire feature information transmitted by each of a number of terminal devices.
  • each terminal device 102 may scan APs 104 and transmit the feature information associated with the APs 104 to the feature information receiving unit 206 via communication interface 202.
  • the feature information may be sent to the feature map generation unit 208 to generate one or more feature maps based on the feature information. Subsequently, the generated feature maps may be sent from the feature map generation unit 208 to the model generation unit 210.
  • communication interface 202 may further receive a ground truth position of each terminal device 102 and transmit the ground truth position to processor 204. It is contemplated that terminal devices in the training stage may be referred to as existing devices for clarity. The ground truth of an existing device may be determined by a GPS positioning unit (not shown) embedded within the existing device.
  • the position determination unit 212 may determine predicted positions of the terminal devices 102.
  • the predicted positions of the terminal devices may be referred to as hypothetical positions in the training stage for clarity. Therefore, in the training stage, processor 204 may receive the one or more feature maps, ground truth positions and corresponding hypothetical positions associated with existing devices, for training a neural network model at the model generation unit 210.
  • FIG. 3 illustrates an exemplary feature input 300 that carries the feature information and is used to train the CNN, according to some embodiments of the disclosure.
  • a feature input consists of C-channel feature maps, which carry complete information about the APs, such as acquisition, RSSI, distance, and so on.
  • a feature map (e.g., 302a, 302b in FIG. 3, referred to as 302 hereinafter) may be constructed based on feature information collected from a recall grid set (e.g., 304a, 304b in FIG. 3, referred to as 304 hereinafter) , which is obtained from a geographic grid set through a recall strategy.
  • the details of how to generate a recall grid set from the area of interest 310 will be described in more detail in FIG. 7.
  • the feature input 300 may be represented by F = {f_{c,i}}, wherein f_{c,i} is the c-th feature value of the feature information corresponding to each grid g_i in the recall grid set G_r.
  • the matrix form of the c-th channel may be expressed as an M × M matrix whose entry in row r and column q is f_{c, (r-1)M+q}.
  • Each M × M grid can be understood as a graph, so that the C matrices may form a feature input that includes C features (that is, C types of different feature information) .
  • each of the feature maps 302 is a 2-dimensional graph with a size of M × M, and therefore, feature input 300 (constructed by feature maps 302a, 302b, ..., 302i (not shown) ) is a 3-dimensional array with C (channel number) 2D feature maps of size M × M.
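  • the C × M × M feature input described above can be assembled as in the following numpy-based sketch. The row-major grid indexing and the feature names are illustrative assumptions:

```python
import numpy as np

# Sketch of assembling the C-channel feature input from per-grid feature
# values. Grid indexing (row-major over the M x M recall grid set) and the
# feature names are illustrative assumptions.
M = 12  # 12 x 12 recall grids in this toy example

def build_feature_input(per_grid_features, feature_names, m=M):
    """per_grid_features: dict mapping grid index i (0..m*m-1) to a dict of
    feature values. Grids with no collected data stay at 0 (sparse map)."""
    x = np.zeros((len(feature_names), m, m), dtype=np.float32)
    for i, feats in per_grid_features.items():
        row, col = divmod(i, m)
        for c, name in enumerate(feature_names):
            x[c, row, col] = feats.get(name, 0.0)
    return x

features = {0: {"heat": 3.0, "match_prob": 0.5}, 13: {"heat": 1.0}}
x = build_feature_input(features, ["heat", "match_prob"])
print(x.shape)        # (2, 12, 12)
print(x[0, 1, 1])     # heat of grid i=13 -> row 1, col 1 -> 1.0
```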
  • the feature map 302 includes a number of feature values corresponding to a number of features associated with each grid.
  • a feature map may correspond to a feature f_{h,i}, which represents the collection heat of the i-th grid.
  • the collection heat is the total number of acquisitions on the grid in the past months, reflecting to some extent whether the grid is reachable and the frequency of access in the previous months.
  • a feature map may correspond to f_{p,i}, which is the matching probability of the i-th grid.
  • the RSSI matching probability measures how close the signal in the terminal is to the signal in the grid.
  • the continuous RSSI value is discretized into 7 values s ∈ {0, 1, 2, 3, 4, 5, 6}, and the collection count of each discrete value s in the i-th grid is h_{i,s}.
  • the matching probability is calculated according to the RSSI discrete value t (t ∈ {0, 1, 2, 3, 4, 5, 6}) in the request query.
  • for example, the matching probability may be calculated as: f_{p,i} = h_{i,t} / Σ_{s=0}^{6} h_{i,s}.
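  • a sketch of one plausible matching-probability computation follows, assuming the probability is the empirical frequency of the requested discrete RSSI value among the grid's collection counts; the disclosure's exact formula may differ:

```python
# Plausible sketch of the RSSI matching probability f_{p,i}: the empirical
# frequency of the requested discrete RSSI value t among the collection
# counts h_{i,s} of grid i. The exact formula is an assumption here.
def match_probability(counts, t):
    """counts: h_{i,s} for s in 0..6 (collection count of each discrete RSSI
    value in grid i); t: discrete RSSI value from the request query."""
    total = sum(counts)
    if total == 0:
        return 0.0  # no data collected in this grid
    return counts[t] / total

h_i = [0, 2, 5, 10, 5, 2, 0]  # toy collection counts for one grid
print(match_probability(h_i, 3))  # 10 / 24
```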
  • the feature maps may include some of or all of the above features.
  • the feature maps may include other features as well, which will not be limited by the description of the present disclosure.
  • FIG. 4 illustrates an exemplary process 400 of generating a feature map, according to some embodiments of the disclosure.
  • Process 400 may include steps S402-S408 as below.
  • the feature map generation unit 208 may generate the recall grid set G_r (G_r ⊆ G) through a recall strategy.
  • the recall grid set G_r includes M × M small grids and constitutes a graph, where each grid in the recall grid set G_r represents a pixel of the graph, and the value of each pixel corresponds to the feature value of the feature information collected in the grid.
  • the recall grid set G r is obtained through a predetermined recall strategy.
  • the recall goal is to include the grid wherein the ground truth is located in the recall grid set G_r. Therefore, to determine a recall strategy, the feature map generation unit 208 may first determine a center grid g_center ∈ G_r, and then recall M × M grids near the center grid g_center. The longitude and latitude of the center grid are calculated from the grid set G_K, which is the grid set closest to the nearest base station cluster center with the size K.
  • the feature map generation unit 208 may compare the coverage of the ground truth in the recall grid set G_r through multiple experiments to select the best strategy with the highest coverage C, according to:
  • C = (1/N) Σ_{v=1}^{N} I(t_v ∈ G_r^{(v)}), wherein t_v is the ground truth of the v-th test data, N is the count of all test data, G_r^{(v)} is the recall grid set of the v-th test data, and I(·) equals 1 when its condition holds and 0 otherwise.
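  • the coverage comparison above can be sketched as follows; the grid-id representation is an illustrative assumption:

```python
# Sketch of the recall-coverage metric: the fraction of test samples whose
# ground-truth grid falls inside the recall grid set produced by a strategy.
def coverage(ground_truths, recall_sets):
    """ground_truths: list of ground-truth grid ids t_v;
    recall_sets: list of the corresponding recall grid sets G_r (as sets)."""
    n = len(ground_truths)
    hits = sum(1 for t, g_r in zip(ground_truths, recall_sets) if t in g_r)
    return hits / n

truths = [5, 17, 42, 99]
recalls = [{1, 5, 9}, {17, 18}, {40, 41}, {99}]
print(coverage(truths, recalls))  # 3 of 4 ground truths recalled -> 0.75
```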
  • the feature map generation unit 208 may collect feature information corresponding to each grid.
  • in step S408, the feature map generation unit 208 may build a number of feature maps using the collected feature information.
  • the feature maps generated in this step may form a feature input represented by a C-channel matrix. Details about the values of the matrix have been described in FIG. 3 and will not be repeated herein.
  • the feature map 500 includes 12 × 12 grids, and each grid includes a value representing the feature information of collection heat in that grid.
  • a first grid 502a has a feature value of “1, ”
  • a second grid 502b has a feature value of “2”
  • a third grid 502c has a feature value of “3”
  • a fourth grid 502d has a feature value of “0” .
  • a feature value of “0” indicates that no collection heat was received in that grid.
  • the system is required to, besides returning an accurate position of a terminal device, evaluate the positioning result in the form of a confidence level.
  • some of the feature values in the feature map 500 are “0” , and the sparseness of the feature map may lead to incompleteness of the features carried by the feature map. Therefore, the ratio of non-zero elements to zero elements in the feature maps may contribute to the error rate of the prediction, and may be used to determine the confidence level.
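  • the sparseness measure can be computed as in the following numpy-based sketch; treating the non-zero ratio itself as the confidence signal is an illustrative assumption, since the disclosure only ties sparseness to prediction error:

```python
import numpy as np

# Sketch of a sparseness-based confidence signal: the fraction of non-zero
# elements across the feature maps. How this ratio is mapped to a final
# confidence level is an assumption of this sketch.
def nonzero_ratio(feature_input):
    """feature_input: C x M x M array of feature values."""
    return float(np.count_nonzero(feature_input)) / feature_input.size

x = np.zeros((2, 12, 12))
x[0, 0, 0] = 3.0
x[1, 5, 5] = 0.5
print(nonzero_ratio(x))  # 2 / 288
```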
  • FIG. 6 illustrates an exemplary CNN 600 that is trained by the feature map according to some embodiments of the disclosure.
  • the feature input to the CNN 600 is a 3D array with 42 channels of 2D feature maps of size 12 × 12.
  • the model generation unit 210 may generate a CNN 600 that includes one or more convolutional layers 602 (e.g., convolutional layers 602a, 602b, and 602c in FIG. 6) .
  • Each convolutional layer 602 may have a number of parameters, such as the width ( “W” ) and height ( “H” ) determined by the upper input layer (e.g., the size of the input of the convolutional layer 602a) , and the number of filters or kernels ( “N” ) in the layer and their sizes. Due to the large diameter of the recall area, a CNN with several different sizes of convolution kernels may effectively extract features from different receptive fields.
  • the CNN 600 may use different sizes of convolution kernel to extract features in the first convolutional layer.
  • the CNN 600 may include three different kernel sizes: the filters of convolutional layer 602a are 3 × 3, the filters of convolutional layer 602b are 5 × 5, and the filters of convolutional layer 602c are 7 × 7.
  • the number of filters may be referred to as the depth of the convolutional layer.
  • the input of each convolutional layer 602 is convolved with one filter across its width and height and produces a new feature image corresponding to that filter.
  • the convolution is performed for all filters of each convolutional layer, and the resulting feature images are stacked along the depth dimension.
  • the output of a preceding convolutional layer can be used as input to the next convolutional layer.
  • CNN 600 of model generation unit 210 may further include one or more pooling layers 604 (e.g., pooling layers 604a and 604b in FIG. 6) .
  • Pooling layer 604 can be added between two successive convolutional layers 602 in CNN 600.
  • a pooling layer operates independently on every depth slice of the input (e.g., a feature image from a previous convolutional layer) , and reduces its spatial dimension by performing a form of non-linear down-sampling.
  • the function of the pooling layers is to progressively reduce the spatial dimension of the extracted feature image to reduce the amount of parameters and computation in the network, and hence to also control over-fitting.
  • the number and placement of the pooling layers may be determined based on various factors, such as the design of the convolutional network architecture, the size of the input, the size of convolutional layers 602, and/or application of CNN 600.
  • Max pooling may partition a feature image of the input into a set of overlapping or non-overlapping sub-regions with a predetermined stride. For each sub-region, max pooling outputs the maximum. This downsamples every feature image of the input along both its width and its height while the depth dimension remains unchanged.
  • Other suitable functions may be used for implementing the pooling layers, such as average pooling or even L2-norm pooling.
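  • the convolution and max-pooling operations described above can be sketched in numpy for a single filter on one depth slice; the kernel values and the "valid" (no padding) convolution are illustrative assumptions, not the disclosure's exact layer configuration:

```python
import numpy as np

# Minimal numpy sketch of the two operations described above: a single-filter
# "valid" 2D convolution (sliding the kernel across width and height) and a
# non-overlapping 2 x 2 max pooling. Real CNN layers stack many filters along
# the depth dimension; this sketch shows one depth slice.
def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool2(image):
    h, w = image.shape
    return image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(144, dtype=float).reshape(12, 12)   # one 12 x 12 feature map
feat = conv2d_valid(img, np.ones((3, 3)) / 9.0)     # 10 x 10 feature image
pooled = max_pool2(feat)                            # 5 x 5 after pooling
print(feat.shape, pooled.shape)  # (10, 10) (5, 5)
```

Note how pooling halves the width and height while leaving the depth dimension untouched, as the text above describes.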
  • CNN 600 may further include another set of convolutional layer 602b and pooling layer 604b. As shown in FIG. 6, the CNN may use max pooling of size 2 after each convolution. It is contemplated that more sets of convolutional layers and pooling layers may be provided. As shown in FIG. 6, after the max pooling layers, the CNN may generate a feature vector of size 1 × 2048.
  • some global features are introduced into the CNN training stage. These global features come from the user request, such as the signal strength of the base station in the request and the number of neighboring base stations. These features do not differ across grids and would be redundant in the feature maps, so they are discretized as input of the first fully connected layer. As shown in FIG. 6, the feature vector (size of 1 × 201) formed by the discretized global features is concatenated with the feature vector (size of 1 × 2048) generated from the max pooling layers to construct a feature vector (size of 1 × 2249) as the input of the fully connected layers.
  • one or more fully-connected layers 606 may be added after the convolutional layers and/or the pooling layers.
  • the fully-connected layers have a full connection with all feature images of the previous layer.
  • a fully-connected layer may take the output of the last convolutional layer or the last pooling layer as the input in vector form.
  • the CNN may include three fully connected layers, with node counts of 1000 (606a) , 64 (606b) , and 2 (606c) , respectively.
  • the output vector of fully-connected layer 606c is a vector of size 1 × 2, representing the longitude and latitude offsets of the predicted grid relative to the center grid.
  • the goal of the training process is that the longitude and latitude offsets of the predicted grid conform to the supervised signal (i.e., the true value of the position of the grid) .
  • the supervised signals are used as constraints to improve the accuracy of CNN 600.
  • the output of the CNN is the offsets of the longitude and latitude relative to the center grid g center in the recall grid set G r .
  • the latitude and longitude of the center grid g_center plus the offsets (Δlon, Δlat) give the positioning latitude and longitude as the final positioning result.
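  • the final position assembly can be sketched as follows; offsets expressed directly in degrees are an assumption of this sketch:

```python
# Sketch of assembling the final position: the CNN's output offsets are added
# to the center grid's coordinates. Offsets in degrees are an assumption.
def final_position(center_lat, center_lon, d_lat, d_lon):
    return center_lat + d_lat, center_lon + d_lon

lat, lon = final_position(39.9000, 116.4000, 0.0012, -0.0008)
print(round(lat, 4), round(lon, 4))  # 39.9012 116.3992
```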
  • a loss layer (not shown) may be included in CNN 600.
  • the loss layer may be the last layer in CNN 600.
  • the loss layer may determine how the network training penalizes the deviation between the predicted position and the benchmark position (i.e., the GPS position) .
  • the loss layer may be implemented by various suitable loss functions. For example, a softmax function may be used as the final loss layer.
  • a loss function that fits the specific positioning problem may be designed as:
  • loss = sqrt ( (Δlon_p - Δlon_l) ^2 + (Δlat_p - Δlat_l) ^2 )
  • Δlon_p, Δlat_p are the longitude and latitude offsets of the predicted grid relative to the center grid.
  • Δlon_l, Δlat_l are the longitude and latitude offsets of the ground truth grid relative to the center grid.
  • the loss function represents the distance between the ground truth point and the prediction point, and the minimization of the loss function is equivalent to minimizing the error distance, which is consistent with the positioning target.
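  • a distance-style loss of this kind can be sketched as follows; the plain Euclidean form (rather than, e.g., a squared or haversine distance) is an assumption of this sketch:

```python
import math

# Sketch of the distance-style loss described above: the Euclidean distance
# between the predicted offsets and the ground-truth offsets relative to the
# center grid. The exact form used by the disclosure may differ.
def positioning_loss(pred_offsets, truth_offsets):
    (dlon_p, dlat_p), (dlon_l, dlat_l) = pred_offsets, truth_offsets
    return math.hypot(dlon_p - dlon_l, dlat_p - dlat_l)

print(positioning_loss((0.003, 0.004), (0.0, 0.0)))  # 0.005
```

Minimizing this quantity directly minimizes the error distance, which keeps the optimization goal aligned with the positioning goal.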
  • model generation unit 208 may generate a neural network model for positioning a terminal device.
  • the generated neural network model may be stored to memory 214.
  • Memory 214 may be implemented as any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random access memory (SRAM) , an electrically erasable programmable read-only memory (EEPROM) , an erasable programmable read-only memory (EPROM) , a programmable read-only memory (PROM) , a read-only memory (ROM) , a magnetic memory, a flash memory, or a magnetic or optical disk.
  • FIG. 7 is a flowchart of an exemplary process 700 for base station positioning using a trained CNN, according to some embodiments of the disclosure.
  • Process 700 may include steps S702-S708, as described below.
  • the positioning server may acquire feature information that is received from one or more base stations.
  • the feature information may be received at different locations in the area of interest. As described in FIG. 4, each grid may contain a number of pieces of feature information.
  • the feature information includes feature information associated with the scanned APs, such as identifications (e.g., Cell_Id of the base station) , Received Signal Strength Indication (RSSI) , Round Trip Time (RTT) , or the like of APs 104.
  • the feature information may also include other types of information, such as numbers of the passengers and drivers located in a grid.
  • the positioning server may divide the area of interest into a number of grids to obtain a geographic grid set, obtain a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a number of recall grids, and, for each recall grid, collect feature information received from the one or more base stations.
  • the positioning server may determine a center grid of the geographic grid set, and recall a number of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled number of grids.
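A sketch of such a recall strategy under one simple assumption (a fixed square window of grids around the center grid; the disclosure does not fix the window shape or size):

```python
def recall_grids(n_rows, n_cols, center, radius=1):
    """Return indices of the grids within `radius` of the center grid,
    clipped to the bounds of the geographic grid set."""
    ci, cj = center
    return [
        (i, j)
        for i in range(max(0, ci - radius), min(n_rows, ci + radius + 1))
        for j in range(max(0, cj - radius), min(n_cols, cj + radius + 1))
    ]

# A 3x3 recall window around center grid (5, 5) in a 10x10 geographic grid set.
window = recall_grids(10, 10, (5, 5), radius=1)
```

With a well-chosen recall strategy, the ground truth point falls inside this window, so the network only has to regress a small offset.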
  • the positioning server may generate a feature input that includes a number of feature maps based on the feature information.
  • a feature map carries complete information about the base stations affecting the area of interest.
  • Each feature value of the feature information corresponds to a grid included in the feature map; therefore, in the case there are C features, the positioning server may obtain a feature input, which is a 3D array with C (the channel number) 2D feature maps.
  • the number of feature maps generated by the positioning server is determined by the number of types of feature information collected by the positioning server.
  • the positioning server may collect feature information received from the one or more base stations for each recall grid, determine a feature value of each piece of collected feature information for each recall grid, and generate a number of feature maps, wherein each feature map is represented by a matrix formed by the feature values corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information.
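The assembly of the feature input can be sketched as below (the channel meanings, grid size, and the sample value are illustrative assumptions):

```python
import numpy as np

# One 2D feature map per feature type (channel) over the recall grid set,
# e.g., RSSI, RTT, and device-count channels over a 4x4 recall window.
C, H, W = 3, 4, 4
feature_input = np.zeros((C, H, W), dtype=np.float32)

# Each cell holds the feature value for the corresponding recall grid,
# e.g., a hypothetical RSSI reading (dBm) for recall grid (1, 2) in channel 0.
feature_input[0, 1, 2] = -67.0
```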
  • in addition to returning an accurate positioning result, the positioning server may also evaluate the confidence level of the positioning result, which may be used as a basis for other related services.
  • the number of captured features may affect the final positioning result (that is, the more features captured, the more accurate the result may be) . Therefore, in such embodiments, a positioning confidence level is determined based on the sparsity of the feature distribution in each grid. Specifically, in each channel, the percentage of non-void features is used as a feature to form a feature vector. A GBDT tree is trained using this feature vector to regress an error distance between the predicted position and the true position. Subsequently, the positioning confidence level is determined by mapping the predicted error distance to a confidence score.
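The per-channel sparsity feature described above can be sketched as follows (treating zero as "void" is an assumption; the trained GBDT regressor and the distance-to-confidence mapping are omitted):

```python
def sparsity_features(channels):
    """For each channel (a 2D list of cell values), return the fraction of
    non-void cells; the resulting vector is the input a GBDT model would
    use to regress the error distance of the positioning result."""
    feats = []
    for ch in channels:
        cells = [v for row in ch for v in row]
        feats.append(sum(1 for v in cells if v != 0) / len(cells))
    return feats

channel_a = [[0, -70], [-65, 0]]  # 2 of 4 cells populated
channel_b = [[0, 0], [0, 12]]     # 1 of 4 cells populated
vec = sparsity_features([channel_a, channel_b])  # [0.5, 0.25]
```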
  • the positioning server may further acquire benchmark positions of the existing devices.
  • a benchmark position is a known position of the existing device.
  • the benchmark position may be previously verified as conforming to the true position of the existing device.
  • the benchmark position may be determined by GPS signals received by the existing device.
  • the benchmark position may also be determined by other positioning methods, as long as the accuracy of the positioning results meets the predetermined requirements.
  • a benchmark position may be a current address provided by the user of the existing device.
  • the positioning server may train the neural network model using the generated feature input.
  • the neural network model may be a CNN.
  • the output of the CNN is a bias value pair, which is the latitude and longitude offset relative to the center grid. The latitude and longitude of the center grid plus the offsets is the final positioning latitude and longitude.
  • the positioning server may input the feature input into the CNN and output a bias value pair from the CNN, wherein the bias value pair is a latitude offset and a longitude offset relative to the center grid.
  • the neural network model may be applied for positioning a terminal device.
  • FIG. 8 is a flowchart of an exemplary process 800 for positioning a terminal device using a neural network model, according to some embodiments of the disclosure.
  • Process 800 may be implemented by the same positioning server that implements process 700 or a different positioning server, and may include steps S802-S804.
  • the positioning server may acquire a set of feature information associated with the terminal device.
  • the feature information in the positioning stage may be similarly acquired as the feature information in the training stage.
  • the positioning server may determine a position of the terminal device using the neural network model.
  • the neural network model may output estimated coordinates of the terminal device.
  • the positioning server may further generate an image based on the estimated coordinates and indicate the position of the terminal device on the image. For example, the position of the terminal device may be marked in the resulting image, such as by indicating its latitude and longitude.
  • the computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices.
  • the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed.
  • the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.


Abstract

Disclosed are systems and methods for base station positioning based on a convolutional neural network. The method may include acquiring, by a positioning server, feature information received from one or more base stations at different locations in an area of interest; generating a feature input that includes a plurality of feature maps based on the feature information; training a convolutional neural network (CNN) based on the feature input; and determining a position of a terminal device using the trained CNN.
PCT/CN2019/122273 2019-11-30 2019-11-30 Base station positioning based on convolutional neural networks WO2021103027A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/122273 WO2021103027A1 (fr) 2019-11-30 2019-11-30 Base station positioning based on convolutional neural networks


Publications (1)

Publication Number Publication Date
WO2021103027A1 (fr) 2021-06-03

Family

ID=76129901


Country Status (1)

Country Link
WO (1) WO2021103027A1 (fr)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN108594170A (zh) * 2018-04-04 2018-09-28 合肥工业大学 WiFi indoor positioning method based on convolutional neural network recognition technology
  • WO2019036860A1 (fr) * 2017-08-21 2019-02-28 Beijing Didi Infinity Technology And Development Co., Ltd. Positioning a terminal device based on deep learning
  • CN109743683A (zh) * 2018-12-03 2019-05-10 北京航空航天大学 Method for determining a mobile phone user's position using a deep learning fusion network model
  • CN110166991A (zh) * 2019-01-08 2019-08-23 腾讯大地通途(北京)科技有限公司 Method, device, apparatus, and storage medium for positioning an electronic device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN115175306A (zh) * 2022-06-24 2022-10-11 国网河南省电力公司经济技术研究院 Indoor positioning method for the electric power Internet of Things based on a convolutional neural network
  • CN115175306B (zh) * 2022-06-24 2024-05-07 国网河南省电力公司经济技术研究院 Indoor positioning method for the electric power Internet of Things based on a convolutional neural network
  • CN117405127A (zh) * 2023-11-02 2024-01-16 深圳市天丽汽车电子科技有限公司 Navigation method, ***, device, and medium based on a vehicle-mounted 5G antenna
  • CN117405127B (zh) * 2023-11-02 2024-06-11 深圳市天丽汽车电子科技有限公司 Navigation method, ***, device, and medium based on a vehicle-mounted 5G antenna


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19954608; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19954608; Country of ref document: EP; Kind code of ref document: A1)