WO2021103027A1 - Base station positioning based on convolutional neural networks - Google Patents


Info

Publication number
WO2021103027A1
WO2021103027A1 (PCT application PCT/CN2019/122273)
Authority
WO
WIPO (PCT)
Prior art keywords
feature, recall, grid, feature information, CNN
Application number
PCT/CN2019/122273
Other languages
French (fr)
Inventor
Yu Lin
Buyi YIN
Zhaoyang FENG
Juhua Chen
Weihuan SHU
Original Assignee
Beijing Didi Infinity Technology And Development Co., Ltd.
Application filed by Beijing Didi Infinity Technology And Development Co., Ltd.
Priority application: PCT/CN2019/122273
Publication: WO2021103027A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0252 Radio frequency fingerprinting
    • G01S5/02521 Radio frequency fingerprinting using a radio-map
    • G01S5/02523 Details of interaction of receiver with radio-map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Definitions

  • The present disclosure generally relates to systems and methods for positioning services, and in particular, to systems and methods for base station positioning based on convolutional neural networks (CNNs).
  • Positioning services are becoming increasingly important with the popularity of location-based services, and the requirements for positioning accuracy are rising accordingly. For example, on online taxi platforms, a driver needs to know the location of a passenger before picking them up.
  • In addition to Global Positioning System (GPS) positioning, a device may be located through the Network Localization Service (NLP). The NLP includes WiFi positioning and base station (that is, cellular network) positioning; the latter may be used when WiFi positioning is unavailable or inaccurate.
  • Base station positioning is an indispensable component of positioning services. Early studies of base station positioning were mostly focused on hardware-dependent technologies, which could not be applied on a large scale. In recent years, the ever-increasing density of base stations offers the possibility of applying fingerprint-based positioning techniques.
  • the fingerprint-based positioning technology is an empirical method that matches the fingerprint information collected on the device in real time with a fingerprint database collected offline to identify the device location. With the maturity of machine learning algorithms, any collected information can be fully utilized as features through the powerful learning ability of machine learning, and fingerprint-based positioning technology is further improved as a result. To use the fingerprint database collected offline, the entire geographic space is divided into a large number of very small geographic grids.
  • the grid closest to the real location is selected as the location of the user or terminal through the classic recall-sort-smooth machine-learning framework, and the positioning accuracy is thereby improved.
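For illustration only, the grid division underlying this framework may be sketched as follows; the bounding box, grid size, and function name are hypothetical assumptions, not the disclosure's actual implementation.

```python
def divide_into_grids(lon_min, lat_min, lon_max, lat_max, step):
    """Divide a geographic bounding box into small step x step grids and
    return the (lon, lat) lower-left corner of each grid."""
    n_lon = round((lon_max - lon_min) / step)  # number of grid columns
    n_lat = round((lat_max - lat_min) / step)  # number of grid rows
    return [(lon_min + i * step, lat_min + j * step)
            for j in range(n_lat) for i in range(n_lon)]

# A 0.01 x 0.01 degree area divided into 0.005-degree grids gives 4 grids.
grids = divide_into_grids(116.40, 39.90, 116.41, 39.91, 0.005)
print(len(grids))
```

In practice the grids described in the disclosure are much smaller, so a real area of interest yields a very large grid set.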
  • this so-called Geo-block Ranking method has limitations. For example, it cannot describe the local spatial correlation of the grids. Further, the addition of the smoothing process leads to an inconsistency between the optimization goal and the positioning goal. Hence, it is desired to improve the current base station positioning method to increase its efficiency and accuracy.
  • Embodiments of the disclosure address the above problems by providing a convolutional neural network (CNN)-based positioning method, which differs from the current method by modeling the positioning problem as object detection in geographic space, and directly predicting the position information using improved deep CNNs.
  • Embodiments of the disclosure provide a computer-implemented method for base station positioning based on a convolutional neural network.
  • An exemplary computer-implemented method includes acquiring, by a positioning server, feature information that is received from one or more base stations at different locations in the area of interest; generating, by the positioning server, a feature input that includes a plurality of feature maps based on the feature information; training, by the positioning server, a convolutional neural network (CNN) based on the feature input; and determining, by the positioning server, a position of a terminal device using the trained CNN.
  • the computer-implemented method further includes dividing the area of interest into a number of grids to obtain a geographic grid set; generating a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a number of recall grids; and, for each recall grid, collecting feature information received from the one or more base stations.
  • the computer-implemented method further includes determining a center grid of the geographic grid set; and recalling a number of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled number of grids.
  • the computer-implemented method further includes for each recall grid, collecting feature information received from the one or more base stations; for each recall grid, determining a feature value associated with each piece of collected feature information; and generating a feature input that includes a number of feature maps, wherein each feature map is represented by a matrix that is formed by the feature value corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information.
  • the number of feature maps generated by the positioning server is determined by the number of types of feature information collected by the positioning server.
  • the computer-implemented method further includes inputting the number of feature maps into the CNN; and outputting a bias value pair from the CNN, wherein the bias value pair is a latitude offset and a longitude offset relative to the center grid.
  • the computer-implemented method further includes minimizing a loss function representing a distance between the ground truth point and a prediction point.
  • the computer-implemented method further includes determining a level of confidence of a positioning result based on the sparseness of the feature map.
  • the computer-implemented method further includes acquiring feature information of a terminal device; and predicting a position of the terminal device based on the feature information using the trained CNN.
  • Embodiments of the disclosure provide systems and methods for base station positioning based on a convolutional neural network.
  • An exemplary system includes a communication interface, a memory and a processor.
  • the processor is configured to acquire feature information that is received from one or more base stations at different locations in the area of interest, generate a feature input that includes a number of feature maps based on the feature information, train a convolutional neural network (CNN) based on the number of feature maps, and determine a position of a terminal device using the trained CNN.
  • Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform a computer-implemented method for base station positioning based on a convolutional neural network.
  • An exemplary computer-implemented method includes acquiring feature information that is received from one or more base stations at different locations in the area of interest; generating a feature input that includes a number of feature maps based on the feature information; training a convolutional neural network (CNN) based on the number of feature maps; and determining a position of a terminal device using the trained CNN.
  • FIG. 1 is a schematic diagram illustrating an exemplary system for base station positioning, according to some embodiments of the disclosure.
  • FIG. 2 is a block diagram of an exemplary system for base station positioning, according to some embodiments of the disclosure.
  • FIG. 3 illustrates an exemplary feature map that carries the feature information and is used to train the CNN, according to some embodiments of the disclosure.
  • FIG. 4 illustrates an exemplary process of building a feature map, according to some embodiments of the disclosure.
  • FIG. 5 illustrates an exemplary feature map for one test data displaying the feature value of a type of feature information, according to some embodiments of the disclosure.
  • FIG. 6 illustrates an exemplary CNN that is trained by the feature map according to some embodiments of the disclosure.
  • FIG. 7 is a flowchart of an exemplary process for base station positioning using a trained CNN, according to some embodiments of the disclosure.
  • FIG. 8 is a flowchart of an exemplary process for positioning a terminal device using a neural network model, according to some embodiments of the disclosure.
  • the flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments in the present disclosure. It is to be expressly understood that the operations of the flowcharts may or may not be implemented in order. Conversely, the operations may be implemented in inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
  • Although the system and method in the present disclosure are described primarily in regard to base station positioning, it should be understood that this is only one exemplary embodiment.
  • the system or method of the present disclosure may be applied to any other kind of deep learning tasks.
  • FIG. 1 is a schematic diagram illustrating an exemplary system for base station positioning, according to some embodiments of the disclosure.
  • Terminal devices 102 may include any electronic device that can scan access points (APs) 104 and communicate with one or more components included in system 100.
  • terminal devices 102 may include a smart phone, a laptop, a tablet, a wearable device, a drone, or the like.
  • terminal devices 102 may scan nearby APs 104.
  • APs 104 may include devices that transmit signals for communication with terminal devices.
  • APs 104 may include WiFi APs, base stations, Bluetooth APs, or the like.
  • multiple nearby base stations may be scanned by the terminal device 102.
  • the terminal device 102 may be connected to one of the scanned base stations, which is referred to as the main station.
  • Other scanned base station (s) may be referred to as neighboring base stations.
  • the terminal device 102 may receive feature information from the main station and the neighboring base station (s) .
  • the number of scanned neighboring base stations may be 3, 5, 6, 7, or the like.
  • the request query may indicate whether each of the scanned base station (s) is a main base station or a neighboring station.
  • each terminal device 102 may receive feature information from APs 104 and generate a fingerprint.
  • the fingerprint stores feature information from different APs 104 at different locations in the area of interest, such as identifications (e.g., the Cell-Id of a base station), Received Signal Strength Indication (RSSI), Round Trip Time (RTT), or the like. The fingerprint is usually constructed once in an offline phase. In some embodiments, construction of the fingerprint may be done in a process called war driving, wherein cars drive through the area of interest, continuously scanning for cell towers and recording the cell tower ID, RSSI, and GPS location.
  • Positioning server 106 may be an internal server of system 100 or an external server. Positioning server 106 may be associated with a database 108 that stores fingerprints that have been acquired at various reference positions.
  • the database 108 is configured to store feature information collected at every preselected reference position, along with its location information. The location information of the reference position must be stored together with corresponding feature information so as to be able to locate the position.
  • the information stored in the database 108 is used for comparison with the fingerprint of the terminal device 102, to find the entry in the database that has the highest similarity.
  • the matched entry in the database includes location information, which is retrieved and provided as the position of the terminal device 102. For example, during the online phase, the feature information received at an unknown location is compared with the fingerprints stored in the database 108, and the closest location in the fingerprint database is returned as the estimated location.
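For illustration only, this online matching phase may be sketched as below. The similarity metric (negative sum of absolute RSSI differences over shared cell IDs) and all names are assumptions made for the sketch, not the disclosure's actual matching algorithm.

```python
def similarity(query, entry):
    """Negative sum of absolute RSSI differences over shared cell IDs;
    higher (closer to zero) means a better match."""
    shared = set(query) & set(entry)
    if not shared:
        return float("-inf")  # no cell in common: worst possible match
    return -sum(abs(query[c] - entry[c]) for c in shared)

def match_location(query, database):
    """Return the location of the fingerprint most similar to the query."""
    best = max(database, key=lambda rec: similarity(query, rec["fingerprint"]))
    return best["location"]

# Toy offline database: fingerprint (cell ID -> RSSI in dBm) plus location.
database = [
    {"fingerprint": {"cell_a": -70, "cell_b": -85}, "location": (39.90, 116.40)},
    {"fingerprint": {"cell_a": -60, "cell_c": -90}, "location": (39.91, 116.41)},
]
print(match_location({"cell_a": -62, "cell_c": -88}, database))
```

The CNN-based method of the disclosure replaces this nearest-fingerprint lookup with a learned model, but the offline/online split is the same.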
  • system 100 may train a neural network model based on the feature information associated with existing devices in a training stage, and position a terminal device based on predicted positions associated with the terminal device using the neural network model in a positioning stage.
  • the neural network model is a convolutional neural network (CNN) model.
  • CNN is a type of machine learning algorithm that can be trained by supervised learning.
  • the architecture of a CNN model includes a stack of distinct layers that transform the input into the output. Examples of the different layers may include one or more convolutional layers, pooling or subsampling layers, fully connected layers, and/or final loss layers. Each layer may connect with at least one upstream layer and at least one downstream layer.
  • the input may be considered as an input layer, and the output may be considered as the final output layer.
  • CNN models with a large number of intermediate layers are referred to as deep CNN models.
  • some deep CNN models may include more than 20 to 30 layers, and other deep CNN models may even include more than a few hundred layers.
  • Examples of deep CNN models include AlexNet, VGGNet, GoogLeNet, ResNet, etc.
  • Embodiments of the disclosure employ the powerful learning capabilities of CNN models, and particularly deep CNN models, for positioning a terminal device based on feature information of APs scanned by the terminal device.
  • a CNN model used by embodiments of the disclosure may refer to any neural network model formulated, adapted, or modified based on a framework of convolutional neural network.
  • a CNN model according to embodiments of the disclosure may selectively include intermediate layers between the input and output layers, such as one or more deconvolution layers, and/or up-sampling or up-pooling layers.
  • training a CNN model refers to determining one or more parameters of at least one layer in the CNN model.
  • a convolutional layer of a CNN model may include at least one filter or kernel.
  • One or more parameters, such as kernel weights, size, shape, and structure, of the at least one filter may be determined by e.g., a backpropagation-based training process.
  • the training process uses at least one set of training parameters.
  • Each set of training parameters may include a set of feature signals and a supervised signal.
  • the feature signals may include feature information associated with APs 104 scanned by a terminal device.
  • the supervised signal may include a true position of the terminal device.
  • a terminal device may be positioned accurately by the trained CNN model based on feature information of the APs 104 scanned by the terminal device.
  • FIG. 2 is a block diagram of an exemplary system 200 for base station positioning, according to some embodiments of the disclosure.
  • system 200 may include a communication interface 202, a processor 204 that includes a feature information receiving unit 206, a feature map generation unit 208, a model generation unit 210, a position determination unit 212, and a memory 214.
  • System 200 may include the above-mentioned components to perform the training stage.
  • system 200 may include more or fewer components than those shown in FIG. 2. For example, when a neural network model for positioning is pre-trained and provided, system 200 may no longer need the feature map generation unit 208 and the model generation unit 210.
  • the above components can be functional hardware units (e.g., portions of an integrated circuit) designed for use with other components or a part of a program (stored on a computer readable medium) that performs a particular function.
  • Communication interface 202 is in communication with terminal device 102 and processor 204.
  • the processor 204 may be configured to acquire feature information transmitted by each of a number of terminal devices.
  • each terminal device 102 may scan APs 104 and transmit the feature information associated with the APs 104 to the feature information receiving unit 206 via communication interface 202.
  • the feature information may be sent to the feature map generating unit 208 to generate one or more feature maps based on the feature information. Subsequently, the generated feature maps may be sent from the feature map generating unit 208 to the model generation unit 210.
  • communication interface 202 may further receive a ground truth position of each terminal device 102 and transmit the ground truth position to processor 204. It is contemplated that terminal devices in the training stage may be referred to as existing devices for clarity. The ground truth of an existing device may be determined by a GPS positioning unit (not shown) embedded within the existing device.
  • the positioning determination unit 212 may determine predicted positions of the terminal devices 102.
  • the predicted positions of the terminal devices may be referred to as hypothetical positions in the training stage for clarity. Therefore, in the training stage, processor 204 may receive the one or more feature maps, ground truth positions and corresponding hypothetical positions associated with existing devices, for training a neural network model at the model generation unit 210.
  • FIG. 3 illustrates an exemplary feature input 300 that carries the feature information and is used to train the CNN, according to some embodiments of the disclosure.
  • a feature input consists of N-channel feature maps, which carry complete information about the APs, such as acquisition, RSSI, distance, and so on.
  • a feature map (e.g., 302a, 302b in FIG. 3, referred to as 302 hereinafter) may be constructed based on feature information collected from a recall grid set (e.g., 304a, 304b in FIG. 3, referred to as 304 hereinafter), which is obtained from a geographic grid set through a recall strategy.
  • the details of how to generate a recall grid set from the area of interest 310 will be described in more detail in FIG. 7.
  • the feature input 300 may be represented by a collection of values f_{c, i}, wherein f_{c, i} is the c-th feature value of feature information corresponding to each grid g_i in the grid set.
  • the matrix form may be expressed as follows:
  • Each M × M grid set can be understood as an image, so that the feature values may form a feature input that includes C features (that is, C types of different feature information).
  • each of the feature maps 302 is a 2-dimensional image with a size of M × M, and therefore feature input 300 (constructed by feature maps 302a, 302b, ... 302i (not shown)) is a 3-dimensional array with C (the channel number) 2-D feature maps of size M × M.
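For illustration only, assembling such a C × M × M feature input from per-grid feature values may be sketched as follows; the layout, names, and dummy values are assumptions made for the sketch.

```python
C, M = 3, 4  # e.g., 3 feature types over a 4 x 4 recall grid set

def build_feature_input(feature_values, C, M):
    """feature_values[c][i] is the c-th feature value of grid g_i,
    with i = 0 .. M*M - 1 in row-major order.
    Returns a C x M x M array: a list of C 2-D feature maps."""
    return [
        [[feature_values[c][r * M + col] for col in range(M)] for r in range(M)]
        for c in range(C)
    ]

# Dummy values: feature c of grid i is c * 100 + i.
values = [[c * 100 + i for i in range(M * M)] for c in range(C)]
feature_input = build_feature_input(values, C, M)
print(len(feature_input), len(feature_input[0]), len(feature_input[0][0]))
```

Each of the C inner matrices plays the role of one feature map 302, and the stack of all C matrices is the feature input 300.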
  • the feature map 302 includes a number of feature values corresponding to a number of features associated with each grid.
  • a feature map may correspond to a feature f_{h, i}, which represents the collection heat of the i-th grid.
  • the collection heat is the total number of acquisitions on the grid in the past months, reflecting to some extent whether the grid is reachable and the frequency of access in the previous months.
  • a feature map may correspond to f_{p, i}, which is the matching probability of the i-th grid.
  • the RSSI matching probability measures how close the signal in the terminal is to the signal in the grid.
  • the continuous RSSI value is discretized into 7 values s ∈ {0, 1, 2, 3, 4, 5, 6}, and the collection count of each discrete value s in the i-th grid is h_{i, s}.
  • the matching probability is calculated according to the RSSI discrete value t (t ∈ {0, 1, 2, 3, 4, 5, 6}) in the request query.
  • the specific calculation formula is as follows:
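The formula itself is not reproduced in this extract. A plausible form, consistent with the definitions of h_{i, s} and the query value t above, is the fraction of collections in the i-th grid whose discrete RSSI value equals t; this is an assumption, not necessarily the patent's exact formula.

```python
def matching_probability(h_i, t):
    """h_i[s] is the collection count of discrete RSSI value s (s = 0..6)
    in the i-th grid; t is the discrete RSSI value from the request query.
    Returns the fraction of the grid's collections that match t."""
    total = sum(h_i)
    if total == 0:
        return 0.0  # no collections in this grid: no match evidence
    return h_i[t] / total

h_i = [0, 2, 5, 10, 2, 1, 0]  # counts for s = 0..6
print(matching_probability(h_i, 3))
```

Variants with smoothing (e.g., adding a small count to every bucket) would also fit the description; the extract does not specify.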
  • the feature map may include some of or all of the following features:
  • the feature map may include other features, which will not be limited to the description of the present disclosure.
  • FIG. 4 illustrates an exemplary process 400 of generating a feature map, according to some embodiments of the disclosure.
  • Process 400 may include steps S402-S408 as below.
  • the feature map generation unit 208 may generate the recall grid set G_r (G_r ⊆ G) through a recall strategy.
  • the recall grid set G_r includes M × M small grids and constitutes an image, where each grid in the recall grid set G_r represents a pixel of the image, and the value of each pixel corresponds to the feature value of the feature information collected in that grid.
  • the recall grid set G r is obtained through a predetermined recall strategy.
  • the recall goal is to include the grid wherein the ground truth is located in the recall grid set G_r. Therefore, to determine a recall strategy, the feature map generation unit 208 may first determine a center grid g_center ∈ G_r, and then recall M × M grids near the center grid g_center. The longitude and latitude of the center grid are chosen according to:
  • wherein G_K is the grid set of size K closest to the nearest base station cluster center.
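The formula is likewise not reproduced in this extract. A plausible reconstruction, assuming the center grid coordinates are the mean of the coordinates over the grid set G_K, is:

```latex
\mathrm{lon}_{center} = \frac{1}{K} \sum_{g \in G_K} \mathrm{lon}_g,
\qquad
\mathrm{lat}_{center} = \frac{1}{K} \sum_{g \in G_K} \mathrm{lat}_g
```

This treats the center grid as the centroid of the K grids nearest the base station cluster center; the patent's exact aggregation may differ.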
  • the feature map generation unit 208 may compare the coverage of the ground truth in the recall grid set G_r through multiple experiments to select the best strategy with the highest coverage C, according to:
  • wherein t_v is the ground truth of the v-th test data, N is the count of all test data, and G_r is the recall grid set of the v-th test data.
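The coverage formula is not reproduced in this extract. Consistent with the definitions above, the coverage C is plausibly the fraction of test data whose ground truth grid falls inside its recall grid set; the sketch below works under that assumption, with illustrative names.

```python
def coverage(ground_truths, recall_sets):
    """ground_truths[v] is the ground-truth grid t_v of the v-th test datum;
    recall_sets[v] is the recall grid set G_r of the v-th test datum.
    Returns the fraction of test data whose ground truth is recalled."""
    N = len(ground_truths)
    hits = sum(1 for t_v, g_r in zip(ground_truths, recall_sets) if t_v in g_r)
    return hits / N

truths = ["g3", "g7", "g9", "g1"]
sets_ = [{"g1", "g3"}, {"g7"}, {"g2"}, {"g1", "g5"}]
print(coverage(truths, sets_))
```

A recall strategy with higher coverage leaves more ground truths inside the recalled window, which is the selection criterion described above.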
  • the feature map generation unit 208 may collect feature information corresponding to each grid.
  • in step S408, the feature map generation unit 208 may build a number of feature maps using the collected feature information.
  • the feature maps generated in this step may form a feature input represented by a C-channel matrix. Details about the values of the matrix have been described in FIG. 3 and will not be repeated herein.
  • the feature map 500 includes 12 × 12 grids, and each grid includes a value representing the feature information of collection heat in that grid.
  • For example, a first grid 502a has a feature value of "1", a second grid 502b has a feature value of "2", a third grid 502c has a feature value of "3", and a fourth grid 502d has a feature value of "0". A feature value of "0" shows that no value of collection heat was received in that grid.
  • the system is required to, besides returning an accurate position of a terminal device, evaluate the positioning result in the form of a confidence level.
  • some of the feature values in the feature map 500 are "0", and the sparseness of the feature map may lead to incompleteness of the features carried by the feature map. Therefore, the ratio of non-zero elements to zero elements in the feature maps may contribute to the error rate of the prediction, and may be used to determine the confidence level.
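For illustration only, the sparsity signal may be computed as the ratio of non-zero cells in a feature map, as sketched below; the exact mapping from this ratio to a confidence level is not specified in the extract, and the names are assumptions.

```python
def nonzero_ratio(feature_map):
    """Fraction of non-zero cells in a 2-D feature map. A sparser map
    (lower ratio) carries less complete feature information and suggests
    a lower-confidence positioning result."""
    cells = [v for row in feature_map for v in row]
    return sum(1 for v in cells if v != 0) / len(cells)

fmap = [
    [1, 0, 2],
    [0, 0, 3],
    [0, 4, 0],
]
print(nonzero_ratio(fmap))
```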
  • FIG. 6 illustrates an exemplary CNN 600 that is trained by the feature map according to some embodiments of the disclosure.
  • the feature input to the CNN 600 is a 3-D array of 42 2-D feature maps (channels) of size 12 × 12.
  • the model generation unit 210 may generate a CNN 600 that includes one or more convolutional layers 602 (e.g., convolutional layers 602a, 602b, and 602c in FIG. 6).
  • Each convolutional layer 602 may have a number of parameters, such as the width ("W") and height ("H") determined by the upper input layer (e.g., the size of the input of the convolutional layer 602a), and the number of filters or kernels ("N") in the layer and their sizes. Due to the large diameter of the recall area, a CNN with several different sizes of convolution kernels may effectively extract features from different receptive fields.
  • the CNN 600 may use different sizes of convolution kernel to extract features in the first convolutional layer.
  • the CNN 600 may include three different kernel sizes: the filters of convolutional layer 602a are 3 × 3, the filters of convolutional layer 602b are 5 × 5, and the filters of convolutional layer 602c are 7 × 7.
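For illustration only, assuming "same" zero padding of (k - 1) // 2 per side (a common convention, not stated in the disclosure), all three kernel branches preserve the 12 × 12 spatial size, so their outputs can be stacked along the channel dimension:

```python
def same_conv_output_size(input_size, kernel_size, stride=1):
    """Spatial output size of a convolution with 'same'-style zero padding
    of (kernel_size - 1) // 2 on each side (odd kernel sizes)."""
    pad = (kernel_size - 1) // 2
    return (input_size + 2 * pad - kernel_size) // stride + 1

# Kernel sizes 3, 5, 7 on a 12 x 12 input all keep the 12 x 12 size.
print([same_conv_output_size(12, k) for k in (3, 5, 7)])
```

This is why multi-scale branches can feed the same downstream pooling layers; with "valid" (no) padding the branch outputs would differ in size.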
  • the number of filters may be referred to as the depth of the convolutional layer.
  • the input of each convolutional layer 602 is convolved with one filter across its width and height and produces a new feature image corresponding to that filter.
  • the convolution is performed for all filters of each convolutional layer, and the resulting feature images are stacked along the depth dimension.
  • the output of a preceding convolutional layer can be used as input to the next convolutional layer.
  • CNN 600 of model generation unit 208 may further include one or more pooling layers 604 (e.g. pooling layers 604a and 604b in FIG. 6) .
  • Pooling layer 604 can be added between two successive convolutional layers 602 in CNN 600.
  • a pooling layer operates independently on every depth slice of the input (e.g., a feature image from a previous convolutional layer) , and reduces its spatial dimension by performing a form of non-linear down-sampling.
  • the function of the pooling layers is to progressively reduce the spatial dimension of the extracted feature image to reduce the amount of parameters and computation in the network, and hence to also control over-fitting.
  • the number and placement of the pooling layers may be determined based on various factors, such as the design of the convolutional network architecture, the size of the input, the size of convolutional layers 602, and/or application of CNN 600.
  • Max pooling may partition a feature image of the input into a set of overlapping or non-overlapping sub-regions with a predetermined stride. For each sub-region, max pooling outputs the maximum. This downsamples every feature image of the input along both its width and its height while the depth dimension remains unchanged.
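For illustration only, the 2 × 2 max pooling with stride 2 described above may be sketched as follows on a single 2-D feature image:

```python
def max_pool_2x2(feature_image):
    """2 x 2 max pooling with stride 2 on a 2-D feature image with even
    dimensions, halving its width and height; applied per depth slice,
    so the depth dimension is unchanged."""
    h, w = len(feature_image), len(feature_image[0])
    return [
        [max(feature_image[r][c], feature_image[r][c + 1],
             feature_image[r + 1][c], feature_image[r + 1][c + 1])
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

img = [
    [1, 3, 2, 0],
    [4, 2, 1, 5],
    [0, 1, 9, 2],
    [3, 2, 4, 8],
]
print(max_pool_2x2(img))
```

Each 2 × 2 sub-region contributes its maximum, so a 4 × 4 image becomes 2 × 2, reducing parameters and computation downstream as described.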
  • Other suitable functions may be used for implementing the pooling layers, such as average pooling or even L2-norm pooling.
  • CNN 600 may further include another set of convolutional layer 602b and pooling layer 604b. As shown in FIG. 6, the CNN may use max pooling of size 2 after each convolution. It is contemplated that more sets of convolutional layers and pooling layers may be provided. As shown in FIG. 6, after the max pooling layers, the CNN may generate a feature vector of size 1 × 2048.
  • some global features are introduced into the CNN training stage. These global features come from the user request, such as the signal strength of the base station in the request and the number of neighboring base stations. These features do not differ across grids and would be redundant as per-grid feature maps, so they are discretized and used as input to the first fully connected layer. As shown in FIG. 6, the feature vector (size 1 × 201) formed by the discretized global features is concatenated with the feature vector (size 1 × 2048) generated from the max pooling layers to construct a feature vector (size 1 × 2249) as the input of the fully connected layers.
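For illustration only, this concatenation step may be sketched as below; the zero vectors merely stand in for real feature values.

```python
# The 1 x 2048 vector from the pooling layers is joined with the 1 x 201
# vector of discretized global features from the request, giving the
# 1 x 2249 input of the fully connected layers.
conv_features = [0.0] * 2048   # stand-in for the pooled CNN features
global_features = [0.0] * 201  # stand-in for the discretized request features
fc_input = conv_features + global_features
print(len(fc_input))
```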
  • one or more fully-connected layers 606 may be added after the convolutional layers and/or the pooling layers.
  • the fully-connected layers have a full connection with all feature images of the previous layer.
  • a fully-connected layer may take the output of the last convolutional layer or the last pooling layer as the input in vector form.
  • the CNN may include three fully connected layers, with node counts of 1000 (606a), 64 (606b), and 2 (606c), respectively.
  • the output vector of fully-connected layer 606c is a vector of size 1 × 2, representing the longitude and latitude offsets of the predicted grid relative to the center grid.
  • the goal of the training process is that the longitude and latitude offsets of the predicted grid conform to the supervised signal (i.e., the true value of the position of the grid).
  • the supervised signals are used as constraints to improve the accuracy of CNN 600.
  • the output of the CNN is the offsets of the longitude and latitude relative to the center grid g center in the recall grid set G r .
  • the latitude and longitude of the center grid g_center plus the offsets Δlon, Δlat give the final positioning latitude and longitude.
  • a loss layer (not shown) may be included in CNN 600.
  • the loss layer may be the last layer in CNN 600.
  • the loss layer may determine how the network training penalizes the deviation between the predicted position and the benchmark position (i.e., the GPS position) .
  • the loss layer may be implemented by various suitable loss functions. For example, a softmax function may be used as the final loss layer.
  • a loss function that fits the specific positioning problem may be designed as:
  • Δlon_p, Δlat_p are the longitude and latitude offsets of the predicted grid relative to the center grid.
  • Δlon_l, Δlat_l are the longitude and latitude offsets of the ground truth grid relative to the center grid.
  • the loss function represents the distance between the ground truth point and the prediction point, and the minimization of the loss function is equivalent to minimizing the error distance, which is consistent with the positioning target.
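The loss formula is not reproduced in this extract. Given the offset definitions and the statement that the loss represents the distance between the ground truth point and the prediction point, a plausible form is the Euclidean distance between the offset pairs; this is an assumption, not necessarily the patent's exact formula.

```python
import math

def positioning_loss(dlon_p, dlat_p, dlon_l, dlat_l):
    """Euclidean distance between the predicted offsets (dlon_p, dlat_p)
    and the ground-truth offsets (dlon_l, dlat_l). Minimizing it minimizes
    the error distance, consistent with the positioning target."""
    return math.sqrt((dlon_p - dlon_l) ** 2 + (dlat_p - dlat_l) ** 2)

print(positioning_loss(0.003, 0.004, 0.0, 0.0))
```

A real implementation might instead convert degree offsets to meters (e.g., with a haversine-style distance) before computing the loss; the extract does not specify.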
  • model generation unit 208 may generate a neural network model for positioning a terminal device.
  • the generated neural network model may be stored to memory 214.
  • Memory 214 may be implemented as any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random access memory (SRAM) , an electrically erasable programmable read-only memory (EEPROM) , an erasable programmable read-only memory (EPROM) , a programmable read-only memory (PROM) , a read-only memory (ROM) , a magnetic memory, a flash memory, or a magnetic or optical disk.
  • FIG. 7 is a flowchart of an exemplary process 700 for base station positioning using a trained CNN, according to some embodiments of the disclosure.
  • Process 700 may include steps S702-S708, as described below.
  • the positioning server may acquire feature information that is received from one or more base stations.
  • the feature information may be received at different locations in the area of interest. As described in FIG. 4, each grid may contain a number of pieces of feature information.
  • the feature information includes feature information associated with the scanned APs, such as identifications (e.g., Cell_Id of the base station) , Received Signal Strength Indication (RSSI) , Round Trip Time (RTT) , or the like of APs 104.
  • the feature information may also include other types of information, such as the numbers of passengers and drivers located in a grid.
  • the positioning server may divide the area of interest into a number of grids to obtain a geographic grid set; obtain a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a number of recall grids; and, for each recall grid, collect feature information received from the one or more base stations.
  • the positioning server may determine a center grid of the geographic grid set, and recall a number of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled number of grids.
  • the positioning server may generate a feature input that includes a number of feature maps based on the feature information.
  • a feature map carries complete information about the base stations affecting the area of interest.
  • Each feature value of the feature information corresponds to a grid included in the feature map; therefore, in the case where there are C features, the positioning server may obtain a feature input, which is a 3D array with C (channel number) 2D feature maps.
  • the number of feature maps generated by the positioning server is determined by the number of types of feature information collected by the positioning server.
  • the positioning server may collect feature information received from the one or more base stations for each recall grid, determine a feature value of each piece of collected feature information for each recall grid, and generate a number of feature maps, wherein each feature map is represented by a matrix that is formed by the feature values corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information.
  • in addition to returning an accurate positioning result, the positioning server may also evaluate the confidence level of the positioning result, which may be used as a basis for other related services.
  • the number of captured features may affect the final positioning result (that is, the more features captured, the more accurate the result may be) . Therefore, in such embodiments, a positioning confidence level is determined based on the sparsity of the feature distribution in each grid. Specifically, in each channel, the percentage of non-void features is used as a feature to form a feature vector. A gradient boosted decision tree (GBDT) is trained using this feature vector to regress the error distance between the predicted position and the true position. Subsequently, the confidence level of the positioning result is determined by mapping the predicted error distance to a confidence value.
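The per-channel sparsity feature described above can be sketched as below. This is an illustrative sketch assuming a zero value marks a void grid cell; the GBDT regression and the final distance-to-confidence mapping are omitted, and the function name is an assumption.

```python
import numpy as np

def sparsity_feature_vector(feature_input):
    """For a C x M x M feature input, return the fraction of non-void
    (here: non-zero) grid cells per channel. The description feeds this
    vector to a GBDT regressor predicting the error distance (omitted)."""
    f = np.asarray(feature_input, dtype=float)
    return [float(np.count_nonzero(f[c]) / f[c].size)
            for c in range(f.shape[0])]

# Two channels over a 2x2 grid: one fully observed, one half void.
fi = np.array([[[1.0, 2.0], [3.0, 4.0]],
               [[0.0, 5.0], [0.0, 6.0]]])
print(sparsity_feature_vector(fi))  # → [1.0, 0.5]
```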
  • the positioning server may further acquire benchmark positions of the existing devices.
  • a benchmark position is a known position of the existing device.
  • the benchmark position may be previously verified as conforming to the true position of the existing device.
  • the benchmark position may be determined by GPS signals received by the existing device.
  • the benchmark position may also be determined by other positioning methods, as long as the accuracy of the positioning results meets the predetermined requirements.
  • a benchmark position may be a current address provided by the user of the existing device.
  • the positioning server may train the neural network model using the generated feature input.
  • the neural network model may be a CNN.
  • the output of the CNN is a bias value pair, which is the latitude and longitude offsets relative to the center grid. The latitude and longitude of the center grid plus the offsets give the final positioning latitude and longitude.
  • the positioning server may input the feature input into the CNN, and output a bias value pair from the CNN, wherein the bias value pair is a latitude offset and a longitude offset relative to the center grid.
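The supervised training of the offset regression in steps S702-S708 can be illustrated with a minimal sketch. A linear model stands in for the CNN of FIG. 6 here, so only the loop structure and the (lon, lat) offset targets match the description; all names and the toy data are illustrative.

```python
import numpy as np

def train_offset_regressor(features, targets, lr=0.1, epochs=200):
    """Flatten each feature input, regress the (lon, lat) offset pair
    with a linear model standing in for the CNN, and minimize the
    squared offset error by gradient descent."""
    x = np.asarray(features, dtype=float)   # (n, d) flattened feature inputs
    y = np.asarray(targets, dtype=float)    # (n, 2) lon/lat offset pairs
    w = np.zeros((x.shape[1], 2))
    for _ in range(epochs):
        pred = x @ w
        grad = x.T @ (pred - y) / len(x)    # gradient of mean squared error
        w -= lr * grad
    return w

# Toy data where the offsets are exactly a linear map of the features.
rng = np.random.default_rng(0)
xs = rng.normal(size=(64, 3))
true_w = np.array([[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]])
ys = xs @ true_w
w = train_offset_regressor(xs, ys)
print(np.allclose(w, true_w, atol=1e-2))  # → True
```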
  • the neural network model may be applied for positioning a terminal device.
  • FIG. 8 is a flowchart of an exemplary process 800 for positioning a terminal device using a neural network model, according to some embodiments of the disclosure.
  • Process 800 may be implemented by the same positioning server that implements process 700 or a different positioning server, and may include steps S802-S804.
  • the positioning server may acquire a set of feature information associated with the terminal device.
  • the feature information in the positioning stage may be similarly acquired as the feature information in the training stage.
  • the positioning server may determine a position of the terminal device using the neural network model.
  • the neural network model may output estimated coordinates of the terminal device.
  • the positioning server may further generate an image based on the estimated coordinates, and indicate the position of the terminal device on the image. For example, the position of the terminal device may be marked in the resulting image, such as by indicating its latitude and longitude.
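The final step of process 800, adding the predicted offsets to the center grid's coordinates, can be sketched as follows (the coordinate values are illustrative):

```python
def position_from_offsets(center_lon, center_lat, d_lon, d_lat):
    """Step S804: the model's predicted (Δlon, Δlat) offsets are added
    to the center grid's coordinates to obtain the final position."""
    return center_lon + d_lon, center_lat + d_lat

print(position_from_offsets(116.40, 39.90, 0.003, -0.002))
```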
  • the computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices.
  • the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed.
  • the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.


Abstract

Systems and methods for base station positioning based on a convolutional neural network are disclosed. The method may include acquiring, by a positioning server, feature information that is received from one or more base stations at different locations in the area of interest; generating a feature input that includes a plurality of feature maps based on the feature information; training a convolutional neural network (CNN) based on the feature input; and determining a position of a terminal device using the trained CNN.

Description

BASE STATION POSITIONING BASED ON CONVOLUTIONAL NEURAL NETWORKS TECHNICAL FIELD
This present disclosure generally relates to systems and methods for positioning services, and in particular, to systems and methods for base station positioning based on convolutional neural networks (CNNs) .
BACKGROUND
Positioning services are becoming more and more important with the popularity of location-based services, and the requirements for positioning accuracy are getting higher and higher. For example, for online taxi platforms, the driver would need to know the location of the passengers before picking them up. Currently, one of the most common methods to obtain the location of a mobile terminal is through the Global Positioning System (GPS) , of which the positioning accuracy is about five meters. For scenarios wherein GPS is unavailable or inaccurate, the Network Localization Service (NLP) is needed. The NLP includes WiFi positioning and base station (that is, cellular network) positioning, the latter of which may be used when WiFi positioning is unavailable or inaccurate.
Base station positioning is an indispensable component of positioning services. Early studies of base station positioning mostly focused on hardware-dependent technologies, which cannot be applied on a large scale. In recent years, the ever-increasing density of base stations offers the possibility of applying fingerprint-based positioning techniques. Fingerprint-based positioning technology is an empirical method that matches the fingerprint information collected on the device in real time against a fingerprint database collected offline to identify the device location. With the maturity of machine learning algorithms, any collected information can be fully utilized as features through the powerful learning ability of machine learning, and fingerprint-based positioning technology is further improved by using machine learning. Using the fingerprint database collected offline, the entire geospatial space is divided into a large number of very small geographic grids. The grid closest to the real location is obtained as the location of the user or terminal through the classic recall-sort-smooth machine-learning framework, and the positioning accuracy is improved. However, this so-called Geo-block Ranking method has limitations. For example, it cannot describe the local correlation of the grids in space. Further, the addition of the smoothing process leads to an inconsistency between the optimization goal and the positioning goal. Hence, it is desirable to improve the current base station positioning method to increase its efficiency and accuracy.
Embodiments of the disclosure address the above problems by providing a convolutional neural network (CNN) -based positioning method, which differs from the current method by modeling the positioning problem as object detection in geospatial space, and directly predicting the position information by using improved deep CNNs.
SUMMARY
Embodiments of the disclosure provide a computer-implemented method for base station positioning based on a convolutional neural network. An exemplary computer-implemented method includes acquiring, by a positioning server, feature information that is received from one or more base stations at different locations in the area of interest; generating, by the positioning server, a feature input that includes a plurality of feature maps based on the feature information; training, by the positioning server, a convolutional neural network (CNN) based on the feature input; and determining, by the positioning server, a position of a terminal device using the trained CNN.
In some embodiments, to acquire feature information that is received from one or more base stations at different locations in the area of interest, the computer-implemented method further includes dividing the area of interest into a number of grids to obtain a geographic grid set; generating a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a number of recall grids; and, for each recall grid, collecting feature information received from the one or more base stations.
In some embodiments, to generate a recall grid set through a predetermined recall strategy, the computer-implemented method further includes determining a center grid of the geographic grid set; and recalling a number of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled number of grids.
In some embodiments, to generate a number of feature maps, the computer-implemented method further includes, for each recall grid, collecting feature information received from the one or more base stations; for each recall grid, determining a feature value associated with each piece of collected feature information; and generating a feature input that includes a number of feature maps, wherein each feature map is represented by a matrix that is formed by the feature value corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information.
In some embodiments, the number of feature maps generated by the positioning server is determined by the number of types of feature information collected by the positioning server.
In some embodiments, to train a CNN based on the number of feature maps, the computer-implemented method further includes inputting the number of feature maps into the CNN; and outputting a bias value pair from the CNN, wherein the bias value pair is a latitude offset and a longitude offset relative to the center grid.
In some embodiments, the computer-implemented method further includes minimizing a loss function representing a distance between the ground truth point and a prediction point.
In some embodiments, the computer-implemented method further includes determining a level of confidence of a positioning result based on the sparseness of the feature map.
In some embodiments, to determine a position of a terminal device using the trained CNN, the computer-implemented method further includes acquiring feature information of a terminal device; and predicting a position of the terminal device based on the feature information using the trained CNN.
Embodiments of the disclosure provide systems and methods for base station positioning based on a convolutional neural network. An exemplary system includes a communication interface, a memory, and a processor. The processor is configured to acquire feature information that is received from one or more base stations at different locations in the area of interest, generate a feature input that includes a number of feature maps based on the feature information, train a convolutional neural network (CNN) based on the number of feature maps, and determine a position of a terminal device using the trained CNN.
Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform a computer-implemented method for base station positioning based on a convolutional neural network. An exemplary computer-implemented method includes acquiring feature information that is received from one or more base stations at different locations in the area of interest; generating a feature input that includes a number of feature maps based on the feature information; training a convolutional neural network (CNN) based on the number of feature maps; and determining a position of a terminal device using the trained CNN.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities, and combinations set forth in the detailed examples discussed below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a schematic diagram illustrating an exemplary system for base station positioning, according to some embodiments of the disclosure.
FIG. 2 is a block diagram of an exemplary system for base station positioning, according to some embodiments of the disclosure.
FIG. 3 illustrates an exemplary feature map that carries the feature information and is used to train the CNN, according to some embodiments of the disclosure.
FIG. 4 illustrates an exemplary process of building a feature map, according to some embodiments of the disclosure.
FIG. 5 illustrates an exemplary feature map for one test data displaying the feature value of a type of feature information, according to some embodiments of the disclosure.
FIG. 6 illustrates an exemplary CNN that is trained by the feature map according to some embodiments of the disclosure.
FIG. 7 is a flowchart of an exemplary process for base station positioning using a trained CNN, according to some embodiments of the disclosure.
FIG. 8 is a flowchart of an exemplary process for positioning a terminal device using a neural network model, according to some embodiments of the disclosure.
DETAILED DESCRIPTION
The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawing (s) , all of which form a part of this specification. It is to be expressly understood, however, that the drawing (s) are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale.
The flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments in the present disclosure. It is to be expressly understood, the operations of the flowchart may or may not be implemented in order. Conversely, the operations may be implemented in inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts. One or more operations may be removed from the flowcharts.
Moreover, while the system and method in the present disclosure are described primarily in regard to base station positioning, it should also be understood that this is only one exemplary embodiment. The system or method of the present disclosure may be applied to any other kind of deep learning task.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
FIG. 1 is a schematic diagram illustrating an exemplary system for base station positioning, according to some embodiments of the disclosure. Terminal devices 102 may include any electronic device that can scan access points (APs) 104 and communicate with one or more components included in system 100. For example, terminal devices 102 may include a smart phone, a laptop, a tablet, a wearable device, a drone, or the like.
As shown in FIG. 1, terminal devices 102 may scan nearby APs 104. APs 104 may include devices that transmit signals for communication with terminal devices. For example, APs 104 may include WiFi APs, base stations, Bluetooth APs, or the like. In some embodiments, multiple nearby base stations may be scanned by the terminal device 102. The terminal device 102 may be connected to one of the scanned base stations, which is referred to as the main station. Other scanned base station (s) may be referred to as neighboring base stations. The terminal device 102 may receive feature information from the main station and the neighboring base station (s) . For example, the number of scanned neighboring base stations may be 3, 5, 6, 7, or the like. The request query may indicate whether each of the scanned base station (s) is a main base station or a neighboring station.
By scanning nearby APs 104, each terminal device 102 may receive feature information from APs 104 and generate a fingerprint. The fingerprint stores feature information, such as identifications (e.g., Cell-Id of a base station) , Received Signal Strength Indication (RSSI) , Round Trip Time (RTT) , or the like of APs 104, from different APs 104 at different locations in the area of interest. This is usually constructed once in an offline phase. In some embodiments, construction of the fingerprint may be done in a process called war driving, wherein cars drive through the area of interest, continuously scanning for cell towers and recording the cell tower ID, RSSI, and GPS location.
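A war-driving fingerprint record of the kind described above can be sketched as a simple data structure; the field names are illustrative, not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class FingerprintRecord:
    """One war-driving observation: the scanned cell IDs with their
    RSSI values, tagged with the GPS position where the scan was made."""
    gps_lon: float
    gps_lat: float
    rssi_by_cell: Dict[str, int] = field(default_factory=dict)

# Illustrative record: two cells scanned at one GPS location.
rec = FingerprintRecord(116.40, 39.90, {"cell-4601": -75, "cell-4602": -88})
print(len(rec.rssi_by_cell))  # → 2
```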
The fingerprint may be transmitted to positioning server 106 and used to acquire predicted positions of its corresponding terminal device from the positioning server 106. Positioning server 106 may be an internal server of system 100 or an external server. Positioning server 106 may be associated with a database 108 that stores fingerprints that have been acquired at various reference positions. The database 108 is configured to store feature information collected at every preselected reference position, along with its location information. The location information of the reference position must be stored together with the corresponding feature information so as to be able to locate the position. The information stored in the database 108 is compared with the fingerprint of the terminal device 102 to search out the entry in the database that has the highest similarity. The searched-out entry includes location information, which is retrieved and provided as the position of the terminal device 102. For example, during the online phase, the feature information received at an unknown location is compared with the fingerprints stored in the database 108, and the closest location in the fingerprint database is returned as the estimated location.
Consistent with embodiments of the disclosure, system 100 may train a neural network model based on the feature information associated with existing devices in a training stage, and position a terminal device based on predicted positions associated with the terminal device using the neural network model in a positioning stage.
In some embodiments, the neural network model is a convolutional neural network (CNN) model. CNN is a type of machine learning algorithm that can be trained by supervised learning. The architecture of a CNN model includes a stack of distinct layers that transform the input into the output. Examples of the different layers may include one or more convolutional layers, pooling or subsampling layers, fully connected layers, and/or final loss layers. Each layer may connect with at least one upstream layer and at least one downstream layer. The input may be considered as an input layer, and the output may be considered as the final output layer.
To increase the performance and learning capabilities of CNN models, the number of different layers can be selectively increased. The number of intermediate distinct layers from the input layer to the output layer can become very large, thereby increasing the complexity of the architecture of the CNN model. CNN models with a large number of intermediate layers are referred to as deep CNN models. For example, some deep CNN models may include more than 20 to 30 layers, and other deep CNN models may even include more than a few hundred layers. Examples of deep CNN models include AlexNet, VGGNet, GoogLeNet, ResNet, etc.
Embodiments of the disclosure employ the powerful learning capabilities of CNN models, and particularly deep CNN models, for positioning a terminal device based on feature information of APs scanned by the terminal device.
As used herein, a CNN model used by embodiments of the disclosure may refer to any neural network model formulated, adapted, or modified based on a framework of convolutional neural network. For example, a CNN model according to embodiments of the disclosure may selectively include intermediate layers between the input and output layers, such as one or more deconvolution layers, and/or up-sampling or up-pooling layers.
As used herein, “training” a CNN model refers to determining one or more parameters of at least one layer in the CNN model. For example, a convolutional layer of a CNN model may include at least one filter or kernel. One or more parameters, such as kernel weights, size, shape, and structure, of the at least one filter may be determined by e.g., a backpropagation-based training process.
Consistent with the disclosed embodiments, to train a CNN model, the training process uses at least one set of training parameters. Each set of training parameters may include a set of feature signals and a supervised signal. As a non-limiting example, the feature signals may include feature information associated with APs 104 that are scanned by a terminal device, and the supervised signal may include a true position of the terminal device. A terminal device may then be positioned accurately by the trained CNN model based on feature information of the APs 104 scanned by the terminal device.
FIG. 2 is a block diagram of an exemplary system 200 for base station positioning, according to some embodiments of the disclosure. As shown in FIG. 2, system 200 may include a communication interface 202, a processor 204 that includes a feature information receiving unit 206, a feature map generation unit 208, a model generation unit 210, and a position determination unit 212, and a memory 214. System 200 may include the above-mentioned components to perform the training stage. In some embodiments, system 200 may include more or fewer components than shown in FIG. 2. For example, when a neural network model for positioning is pre-trained and provided, system 200 may not include the feature map generation unit 208 and the model generation unit 210. It is contemplated that the above components (and any corresponding sub-modules or sub-units) can be functional hardware units (e.g., portions of an integrated circuit) designed for use with other components or a part of a program (stored on a computer-readable medium) that performs a particular function.
Communication interface 202 is in communication with terminal device 102 and processor 204. The processor 204 may be configured to acquire feature information transmitted by each of a number of terminal devices. In some embodiments, for example, each terminal device 102 may scan APs 104 and transmit the feature information associated with the APs 104 to the feature information receiving unit 206 via communication interface 202.
In some embodiments where the neural network model is not pre-trained, after the feature information is received at the feature information receiving unit 206, the feature information may be sent to the feature map generation unit 208 to generate one or more feature maps based on the feature information. Subsequently, the generated feature maps may be sent from the feature map generation unit 208 to the model generation unit 210.
Furthermore, in the training stage, communication interface 202 may further receive a ground truth position of each terminal device 102 and transmit the ground truth position to processor 204. It is contemplated that terminal devices in the training stage may be referred to as existing devices for clarity. The ground truth position of the existing device may be determined by a GPS positioning unit (not shown) embedded within the existing device.
After the feature information is processed by the one or more components of processor 204, the position determination unit 212 may determine predicted positions of the terminal devices 102. The predicted positions of the terminal devices may be referred to as hypothetical positions in the training stage for clarity. Therefore, in the training stage, processor 204 may receive the one or more feature maps, ground truth positions, and corresponding hypothetical positions associated with existing devices, for training a neural network model at the model generation unit 210.
FIG. 3 illustrates an exemplary feature input 300 that carries the feature information and is used to train the CNN, according to some embodiments of the disclosure. A feature input consists of N-channel feature maps, which carry complete information about the APs, such as acquisition, RSSI, distance, and so on.
A feature map (e.g., 302a, 302b in FIG. 3, referring as 302 hereinafter) may be constructed based on feature information collected from a recall grid set (e.g., 304a, 304b in FIG. 3, referring as 304 hereinafter) , which is obtained from a geographic grid set through a recall strategy. The details of how to generate a recall grid set from the area of interest 310 will be described in more detail in FIG. 7. The feature input 300 may be represented by
Figure PCTCN2019122273-appb-000001
wherein f c, i is the c-th feature value of the feature information corresponding to each grid g i in the recall grid set G r. The F c matrix form may be expressed as follows:

F c = [ f c, 1 … f c, M ; f c, M+1 … f c, 2M ; … ; f c, (M−1)M+1 … f c, M×M ]
Each M×M matrix F c can be understood as a graph, so that the set of matrices {F 1, F 2, …, F C} may form a feature input that includes C features (that is, C types of different feature information). As illustrated in FIG. 3, each feature map 302 is a 2-dimensional graph with a size of M×M, and therefore feature input 300 (constructed from feature maps 302a, 302b, …, 302i (not shown)) is a 3-dimensional array with C (the channel number) 2D feature maps of size M×M.
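The C×M×M structure described above can be sketched as follows; the channel count, grid size, and example values are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

# Illustrative sketch: assemble a feature input of C channels, each an
# M x M feature map over the recall grid set. C, M, and the sample values
# below are assumptions for demonstration only.
C, M = 3, 12

def build_feature_input(feature_values):
    """feature_values maps channel c -> {grid index i: feature value f_{c,i}}."""
    x = np.zeros((C, M, M), dtype=np.float32)
    for c, grid_values in feature_values.items():
        for i, v in grid_values.items():
            x[c, i // M, i % M] = v  # grid i occupies row i // M, column i % M
    return x

feature_input = build_feature_input({0: {0: 1.0, 13: 2.0}, 2: {143: 5.0}})
print(feature_input.shape)  # (3, 12, 12)
```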
The feature map 302 includes a number of feature values corresponding to a number of features associated with each grid. In some embodiments, for example, a feature map may correspond to a feature f h, i, which represents the collection heat of the i-th grid. The collection heat is the total number of acquisitions on the grid in the past months, reflecting, to some extent, whether the grid is reachable and how frequently it was accessed in the previous months.
As another example, in some embodiments, a feature map may correspond to f p, i, which is the matching probability of the i-th grid. The RSSI matching probability measures how close the signal in the terminal is to the signal in the grid. The continuous RSSI value is discretized into 7 values s ∈ {0, 1, 2, 3, 4, 5, 6}, and the collection count of each discrete value s in the i-th grid is h i, s. The matching probability is calculated according to the RSSI discrete value t (t ∈ {0, 1, 2, 3, 4, 5, 6}) in the request query. The specific calculation formula is as follows:
f p, i = h i, t / ∑ s h i, s
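One consistent reading of the matching probability (the exact formula image is not reproduced in this text, so this is an assumption) treats f p, i as the share of collections in grid i whose discrete RSSI value equals the query value t:

```python
# Hypothetical sketch of the RSSI matching probability: h_i[s] is the
# collection count of discrete RSSI value s in the i-th grid, and t is the
# discretized RSSI value from the request query.
def matching_probability(h_i, t):
    total = sum(h_i)
    return h_i[t] / total if total else 0.0

h_i = [0, 2, 5, 10, 3, 0, 0]  # counts for s = 0..6 (illustrative)
print(matching_probability(h_i, 3))  # 0.5
```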
In some embodiments, the feature map may include some of or all of the following features:
(1) The distance from the grid to the center of the main base station;
(2) The sum of the heat collected by the neighboring base station;
(3) Whether the grid is within the radius of the main base station;
(4) The sum of RSSI matching probabilities of neighboring base stations;
(5) The distance from the grid to the center of the front base station;
(6) Whether the grid is within the radius of the front base station;
(7) The sum of the heat collected by the front base station;
(8) Number of passengers collected per grid;
(9) The distance from the grid to the recall center;
(10) GPS acquisition count on the grid;
(11) GPS acquisition user count on the grid.
In some embodiments, the feature map may include other features, and the present disclosure is not limited in this regard.
FIG. 4 illustrates an exemplary process 400 of generating a feature map, according to some embodiments of the disclosure. Process 400 may include steps S402-S408 as below.
In step S402, the feature map generating unit 208 may divide an entire geospatial area into a large number of small grids to obtain a geographic grid set G = {g 1, g 2, …, g i, …, g D}, wherein G represents all geographic grids, and each grid is N×N (in meters).
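The division of step S402 can be sketched as below; the grid size, origin, and the flat-earth meter conversion are illustrative assumptions, not parameters from the disclosure.

```python
import math

GRID_METERS = 50.0              # assumed N x N grid size in meters
METERS_PER_DEG_LAT = 111_320.0  # approximate meters per degree of latitude

def grid_index(lon, lat, origin=(116.0, 39.0)):
    """Map a (lon, lat) point to its (row, col) grid in the geographic grid set."""
    meters_per_deg_lon = METERS_PER_DEG_LAT * math.cos(math.radians(lat))
    col = int((lon - origin[0]) * meters_per_deg_lon // GRID_METERS)
    row = int((lat - origin[1]) * METERS_PER_DEG_LAT // GRID_METERS)
    return row, col

print(grid_index(116.0, 39.0))    # (0, 0)
print(grid_index(116.001, 39.0))  # one grid east of the origin
```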
In step S404, the feature map generating unit 208 may generate the recall grid set G r (G r ⊆ G) through a recall strategy. The recall grid set G r includes M×M small grids and constitutes a graph, where each grid in the recall grid set G r represents a pixel of the graph, and the value of each pixel corresponds to the feature value of the feature information collected in the grid.
In some embodiments, the recall grid set G r is obtained through a predetermined recall strategy. In the present disclosure, the recall goal is to include, in the recall grid set G r, the grid wherein the ground truth is located. Therefore, to determine a recall strategy, the feature map generating unit 208 may first determine a center grid g center ∈ G r, and then recall M×M grids near the center grid g center. The longitude and latitude of the center grid are calculated according to:

[equation images for lon center and lat center omitted]

wherein G K is the grid set of size K closest to the nearest base station cluster center.
Next, the feature map generating unit 208 may compare the coverage of the ground truth in the recall grid set G r through multiple experiments to select the best strategy with the highest coverage C, according to:

C = (1/N) ∑ v I (t v ∈ G r, v)   eq. (6)

wherein t v is the ground truth of the v-th test data, N is the count of all test data, G r, v is the recall grid set of the v-th test data, and I (·) is an indicator that equals 1 when its condition holds and 0 otherwise.
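The coverage comparison described above can be sketched as the fraction of test samples whose ground-truth grid falls inside the recalled set (the names and sample values here are illustrative):

```python
def coverage(ground_truths, recall_sets):
    """Fraction of test data whose ground-truth grid is in its recall grid set."""
    n = len(ground_truths)
    return sum(t in g for t, g in zip(ground_truths, recall_sets)) / n

truths = [3, 17, 42]                       # ground-truth grid ids (illustrative)
recalls = [{1, 2, 3}, {17, 18}, {40, 41}]  # recall grid sets per test sample
print(round(coverage(truths, recalls), 2))  # 0.67
```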
In step S406, the feature map generating unit 208 may collect feature information corresponding to each grid.
In step S408, the feature map generating unit 208 may build a number of feature maps using the collected feature information. The feature maps generated in this step may form a feature input represented by a C-channel matrix {F 1, F 2, …, F C}. Details about the values of these matrices have been described in FIG. 3 and will not be repeated herein.
FIG. 5 illustrates an exemplary feature map 500, for one piece of test data, displaying the feature value of a type of feature information, according to some embodiments of the disclosure. Specifically, FIG. 5 illustrates the feature values of collection heat when M=12.
As shown in FIG. 5, the feature map 500 includes 12×12 grids, and each grid includes a value representing the feature information of collection heat in that grid. For example, a first grid 502a has a feature value of "1," a second grid 502b has a feature value of "2," a third grid 502c has a feature value of "3," and a fourth grid 502d has a feature value of "0." A feature value of "0" indicates that no collection heat was received in that grid.
In some embodiments, the system is required to, besides returning an accurate position of a terminal device, evaluate the positioning result in the form of a confidence level. As shown in FIG. 5, some of the feature values in the feature map 500 are "0," and the sparseness of the feature map may lead to incompleteness of the features carried by the feature map. Therefore, the ratio of non-zero elements to zero elements in the feature maps may contribute to the error rate of the prediction, and may be used to determine the confidence level.
FIG. 6 illustrates an exemplary CNN 600 that is trained by the feature input, according to some embodiments of the disclosure. For illustration purposes only, the feature input to the CNN 600 is a 3D array with 42 2D feature maps (channels) of size 12×12.
In some embodiments, the model generation unit 210 may generate a CNN 600 that includes one or more convolutional layers 602 (e.g., convolutional layers 602a, 602b, and 602c in FIG. 6). Each convolutional layer 602 may have a number of parameters, such as the width ("W") and height ("H") determined by the upper input layer (e.g., the size of the input of the convolutional layer 602a), and the number of filters or kernels ("N") in the layer and their sizes. Due to the large diameter of the recall area, a CNN with several different sizes of convolution kernels may effectively extract features from different receptive fields. Therefore, the CNN 600 may use different sizes of convolution kernels to extract features in the first convolutional layer. For example, as shown in FIG. 6, the CNN 600 may include three different kernel sizes: the filters of convolutional layer 602a are 3×3, the filters of convolutional layer 602b are 5×5, and the filters of convolutional layer 602c are 7×7. The number of filters may be referred to as the depth of the convolutional layer. The input of each convolutional layer 602 is convolved with one filter across its width and height and produces a new feature image corresponding to that filter. The convolution is performed for all filters of each convolutional layer, and the resulting feature images are stacked along the depth dimension. The output of a preceding convolutional layer can be used as input to the next convolutional layer.
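A naive "valid" convolution illustrates how the three kernel sizes see different receptive fields; this is a minimal stand-in for a CNN framework's convolution operation, not the disclosed implementation (which may use padding to preserve the 12×12 size).

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive single-channel 2D convolution without padding ('valid' mode)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (img[r:r + kh, c:c + kw] * kernel).sum()
    return out

img = np.ones((12, 12))  # one 12 x 12 feature map channel
for k in (3, 5, 7):      # the three kernel sizes from FIG. 6
    print(conv2d_valid(img, np.ones((k, k))).shape)
# larger kernels cover larger receptive fields and yield smaller outputs
```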
In some embodiments, CNN 600 of the model generation unit 210 may further include one or more pooling layers 604 (e.g., pooling layers 604a and 604b in FIG. 6). A pooling layer 604 can be added between two successive convolutional layers 602 in CNN 600. A pooling layer operates independently on every depth slice of the input (e.g., a feature image from a previous convolutional layer) and reduces its spatial dimension by performing a form of non-linear down-sampling. As shown in FIG. 6, the function of the pooling layers is to progressively reduce the spatial dimension of the extracted feature images to reduce the amount of parameters and computation in the network, and hence to also control over-fitting. The number and placement of the pooling layers may be determined based on various factors, such as the design of the convolutional network architecture, the size of the input, the size of convolutional layers 602, and/or the application of CNN 600.
Various non-linear functions can be used to implement the pooling layers. For example, max pooling may be used. Max pooling may partition a feature image of the input into a set of overlapping or non-overlapping sub-regions with a predetermined stride. For each sub-region, max pooling outputs the maximum. This downsamples every feature image of the input along both its width and its height while the depth dimension remains unchanged. Other suitable functions may be used for implementing the pooling layers, such as average pooling or even L2-norm pooling.
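Non-overlapping 2×2 max pooling with stride 2, as described above, can be sketched with NumPy (a minimal stand-in for a framework pooling layer; the input values are illustrative):

```python
import numpy as np

def max_pool_2x2(x):
    """Max pooling with 2x2 windows and stride 2 on a (C, H, W) array."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

x = np.arange(16, dtype=float).reshape(1, 4, 4)
pooled = max_pool_2x2(x)
print(pooled.shape)  # (1, 2, 2): width and height halve, depth is unchanged
print(pooled[0])     # maximum of each 2x2 sub-region
```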
CNN may further include another set of convolutional layer 602b and pooling layer 604b. As shown in FIG. 6, the CNN may use max pooling of size 2 after each convolution. It is contemplated that more sets of convolutional layers and pooling layers may be provided. As shown in FIG. 6, after the max pooling layers, the CNN may generate a feature vector of size 1×2048.
In some embodiments, some global features are introduced into the CNN training stage. These global features come from the user request, such as the signal strength of the base station in the request and the number of neighboring base stations. These features do not differ across grids and would be redundant if replicated per grid, so they are discretized and used as input of the first fully connected layer. As shown in FIG. 6, the feature vector (of size 1×201) formed by the discretized global features is concatenated with the feature vector (of size 1×2048) generated from the max pooling layers to construct a feature vector (of size 1×2249) as the input of the fully connected layers.
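The concatenation step can be sketched as follows, using the vector sizes from FIG. 6 (the zero vectors merely stand in for real feature values):

```python
import numpy as np

conv_features = np.zeros((1, 2048))   # pooled CNN features (size 1 x 2048)
global_features = np.zeros((1, 201))  # discretized global features (size 1 x 201)

# Joining the two along the feature axis yields the fully-connected input.
fc_input = np.concatenate([conv_features, global_features], axis=1)
print(fc_input.shape)  # (1, 2249)
```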
As another non-limiting example, one or more fully-connected layers 606 (e.g., fully-connected layers 606a, 606b, and 606c in FIG. 6) may be added after the convolutional layers and/or the pooling layers. The fully-connected layers have a full connection with all feature images of the previous layer. For example, a fully-connected layer may take the output of the last convolutional layer or the last pooling layer as its input in vector form.
For example, as shown in FIG. 6, the CNN may include three fully connected layers, with node counts of 1000 (606a), 64 (606b), and 2 (606c), respectively.
The output vector of fully-connected layer 606c is a vector of size 1×2, representing the longitude and latitude offsets of the predicted grid relative to the center grid. The goal of the training process is that the longitude and latitude offsets of the predicted grid conform to the supervised signal (i.e., the true value of the position of the grid). The supervised signals are used as constraints to improve the accuracy of CNN 600.
As such, an improved CNN is obtained through a large number of experiments as the prediction model:

[equation image omitted]
The output of the CNN is the offsets of the longitude and latitude relative to the center grid g center in the recall grid set G r.
Δlon = ∑ i w × x i + b   eq. (7)
Δlat = ∑ i w × x i + b   eq. (8)
In the final output, the latitude and longitude of the center grid g center plus the offsets Δlat, Δlon give the final positioning latitude and longitude.
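The final output step can be sketched as follows; the coordinates and offsets below are illustrative values, not data from the disclosure.

```python
# Sketch of the final output step: the center grid's coordinates plus the
# CNN's predicted offsets give the positioning result.
def final_position(center_lon, center_lat, d_lon, d_lat):
    return center_lon + d_lon, center_lat + d_lat

lon, lat = final_position(116.40, 39.90, 0.0012, -0.0008)
print(round(lon, 4), round(lat, 4))  # 116.4012 39.8992
```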
As a further non-limiting example, a loss layer (not shown) may be included in CNN 600. The loss layer may be the last layer in CNN 600. During the training of CNN 600, the loss layer may determine how the network training penalizes the deviation between the predicted position and the benchmark position (i.e., the GPS position) . The loss layer may be implemented by various suitable loss functions. For example, a softmax function may be used as the final loss layer.
In some embodiments, a loss function that fits the specific positioning problem may be designed as:
loss = √ ( (Δlon p − Δlon l) ^2 + (Δlat p − Δlat l) ^2 )   eq. (9)
wherein Δlon p, Δlat p are the longitude and latitude offsets of the predicted grid relative to the center grid, and Δlon l, Δlat l are the longitude and latitude offsets of the ground truth grid relative to the center grid. The loss function represents the distance between the ground truth point and the prediction point, and minimizing the loss function is equivalent to minimizing the error distance, which is consistent with the positioning target.
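Taking the loss as the Euclidean distance between the predicted and ground-truth offset pairs (an assumption consistent with the description, since the formula image is not reproduced in this text), a minimal sketch is:

```python
import math

def offset_loss(dlon_p, dlat_p, dlon_l, dlat_l):
    """Distance between the predicted and ground-truth offset pairs."""
    return math.hypot(dlon_p - dlon_l, dlat_p - dlat_l)

print(round(offset_loss(0.003, 0.004, 0.0, 0.0), 6))  # 0.005
```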
With reference back to FIG. 2, based on at least one set of training parameters, the model generation unit 210 may generate a neural network model for positioning a terminal device. The generated neural network model may be stored in memory 214. Memory 214 may be implemented as any type of volatile or non-volatile memory device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, or a magnetic or optical disk.
FIG. 7 is a flowchart of an exemplary process 700 for base station positioning using a trained CNN, according to some embodiments of the disclosure. Process 700 may include steps S702-S708 as below.
In step S702, the positioning server may acquire feature information that is received from one or more base stations. The feature information may be received at different locations in the area of interest. As described in FIG. 4, each grid may contain a number of pieces of feature information. In some embodiments, the feature information includes feature information associated with the scanned APs, such as identifications (e.g., the Cell_Id of the base station), Received Signal Strength Indication (RSSI), Round Trip Time (RTT), or the like of APs 104. In some embodiments, the feature information may also include other types of information, such as the numbers of passengers and drivers located in a grid.
In some embodiments, to acquire the feature information, the positioning server may divide the area of interest into a number of grids to obtain a geographic grid set; obtain a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a number of recall grids; and, for each recall grid, collect feature information received from the one or more base stations.
In some embodiments, to obtain a recall grid set, the positioning server may determine a center grid of the geographic grid set, and recall a number of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled number of grids.
In step S704, the positioning server may generate a feature input that includes a number of feature maps based on the feature information. A feature map carries complete information about the base stations affecting the area of interest. Each feature value of the feature information corresponds to a grid included in the feature map, and therefore, in the case where there are C features, the positioning server may obtain a feature input, which is a 3D array with C (channel number) 2D feature maps. In some embodiments, the number of feature maps generated by the positioning server is determined by the number of types of feature information collected by the positioning server.
In some embodiments, to generate a number of feature maps, the positioning server may collect feature information received from the one or more base stations for each recall grid, determine a feature value of each piece of collected feature information for each recall grid, and generate a number of feature maps, wherein each feature map is represented by a matrix that is formed by the feature value corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information.
In some embodiments, in addition to returning an accurate positioning result, the positioning server also requires the system to evaluate the confidence level of the positioning result, which may be used as a basis for other related services. The number of captured features may affect the final positioning result (that is, the more features captured, the more accurate the result may be). Therefore, in such embodiments, a positioning confidence level is determined based on the sparsity of the feature distribution in each grid. Specifically, in each channel, the percentage of non-void features is used as a feature to form a feature vector. A GBDT tree is trained using this feature vector to regress an error distance between the predicted position and the true position. Subsequently, the positioning result confidence level is determined by mapping the predicted distance, according to the equation:

[equation image omitted]
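The per-channel non-void percentage described above can be sketched as below; feeding the resulting vector to a GBDT regressor (as the text describes) is omitted, and the shapes and values are illustrative assumptions.

```python
import numpy as np

def sparsity_features(feature_input):
    """Per-channel fraction of non-zero grid values in a (C, M, M) input."""
    return (feature_input != 0).mean(axis=(1, 2))

x = np.zeros((2, 4, 4))
x[0, 0, 0] = 1.0  # one non-zero cell out of 16 in channel 0
vec = sparsity_features(x)
print(vec.tolist())  # [0.0625, 0.0]
```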
In some embodiments, the positioning server may further acquire benchmark positions of the existing devices. A benchmark position is a known position of an existing device. The benchmark position may be previously verified to conform to the true position of the existing device. In some embodiments, the benchmark position may be determined by GPS signals received by the existing device. The benchmark position may also be determined by other positioning methods, as long as the accuracy of the positioning results meets the predetermined requirements. For example, a benchmark position may be a current address provided by the user of the existing device.
In step S706, the positioning server may train the neural network model using the generated feature input. In some embodiments, the neural network model may be a CNN. Consistent with embodiments of the disclosure, the output of the CNN is a bias value pair, which is the latitude and longitude offsets relative to the center grid. The latitude and longitude of the center grid plus the offsets is the final positioning latitude and longitude.
In some embodiments, to train a CNN based on the number of feature maps, the positioning server may input the feature input into the CNN, and output a bias value pair from the CNN, wherein the bias value pair is a latitude and a longitude offset relative to the center grid.
After the neural network model is trained by the positioning server, in step S708, the neural network model may be applied for positioning a terminal device.
FIG. 8 is a flowchart of an exemplary process 800 for positioning a terminal device using a neural network model, according to some embodiments of the disclosure. Process 800 may be implemented by the same positioning server that implements process 700 or a different positioning server, and may include steps S802-S804.
In step S802, the positioning server may acquire a set of feature information associated with the terminal device. The feature information in the positioning stage may be similarly acquired as the feature information in the training stage.
In step S804, the positioning server may determine a position of the terminal device using the neural network model. In some embodiments, the neural network model may output estimated coordinates of the terminal device. In some other embodiments, the positioning server may further generate an image based on the estimated coordinates and indicate the position of the terminal device on the image. For example, the position of the terminal device may be marked in the resulting image, such as by indicating its latitude and longitude.
Another aspect of the disclosure is directed to a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform the methods, as discussed above. The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices. For example, the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed. In some embodiments, the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed system and related methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed positioning system and related methods. Although the embodiments describe training a neural network model based on an image containing training parameters, it is contemplated that the image is merely an exemplary data structure of training parameters and any suitable data structure may be used as well.
It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims (20)

  1. A computer-implemented method for base station positioning based on a convolutional neural network, comprising:
    acquiring, by a positioning server, feature information that is received from one or more base stations at different locations in the area of interest;
    generating, by the positioning server, a feature input that includes a plurality of feature maps based on the feature information;
    training, by the positioning server, a convolutional neural network (CNN) based on the feature input; and
    determining, by the positioning server, a position of a terminal device using the trained CNN.
  2. The computer-implemented method of claim 1, wherein acquiring feature information that is from one or more base stations at different locations in the area of interest comprises:
    dividing the area of interest into a plurality of grids to obtain a geographic grid set;
    generating a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set comprises a plurality of recall grids; and
    for each recall grid, collecting feature information received from the one or more base stations.
  3. The computer-implemented method of claim 2, wherein generating a recall grid set through a predetermined recall strategy comprises:
    determining a center grid of the geographic grid set; and
    recalling a plurality of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled plurality of grids.
  4. The computer-implemented method of claim 2, wherein generating a feature input that includes a plurality of feature maps based on the feature information comprises:
    for each recall grid, collecting feature information received from the one or more base stations;
    for each recall grid, determining a feature value associated with each piece of collected feature information;
    generating a plurality of feature maps, wherein each feature map is represented by a matrix that is formed by the feature value corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information; and
    generating a feature input, wherein the feature input comprises the plurality of feature maps.
  5. The computer-implemented method of claim 4, wherein a number of feature maps generated by the positioning server is determined by the number of types of feature information collected by the positioning server.
  6. The computer-implemented method of claim 4, wherein training a CNN based on the plurality of feature maps comprises:
    inputting the feature input into the CNN; and
    outputting a bias value pair from the CNN, wherein the bias value pair is a latitude and a longitude offset relative to the center grid.
  7. The computer-implemented method of claim 6, further comprising:
    minimizing a loss function representing a distance between the ground truth point and a prediction point.
  8. The computer-implemented method of claim 6, further comprising:
    determining a level of confidence of a positioning result based on the sparseness of the feature map.
  9. The computer-implemented method of claim 6, wherein determining a position of a terminal device using the trained CNN comprises:
    acquiring feature information of a terminal device; and
    predicting a position of the terminal device based on the feature information using the trained CNN.
  10. A system for base station positioning based on a convolutional neural network, comprising:
    a memory;
    a communication interface in communication with a terminal device; and
    a processor configured to:
    acquire feature information that is received from one or more base stations at different locations in the area of interest;
    generate a feature input that includes a plurality of feature maps based on the feature information;
    train a convolutional neural network (CNN) based on the feature input; and
    determine a position of a terminal device using the trained CNN.
  11. The system of claim 10, wherein acquiring feature information that is from one or more base stations at different locations in the area of interest comprises:
    dividing the area of interest into a plurality of grids to obtain a geographic grid set;
    generating a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a plurality of recall grids; and
    for each recall grid, collecting feature information received from the one or more base stations.
  12. The system of claim 11, wherein generating a recall grid set through a predetermined recall strategy comprises:
    determining a center grid of the geographic grid set; and
    recalling a plurality of grids surrounding the center grid through a predetermined recall strategy, wherein a ground truth point is located inside the recalled plurality of grids.
  13. The system of claim 11, wherein generating a feature input that includes a plurality of feature maps based on the feature information comprises:
    for each recall grid, collecting feature information received from the one or more base stations;
    for each recall grid, determining a feature value associated with each piece of collected feature information;
    generating a plurality of feature maps, wherein each feature map is represented by a matrix that is formed by the feature value corresponding to each recall grid, and wherein each feature map corresponds to a type of feature information; and
    generating a feature input, wherein the feature input comprises the plurality of feature maps.
  14. The system of claim 13, wherein a number of feature maps generated by the processor is determined by the number of types of feature information collected by the processor.
  15. The system of claim 13, wherein training a CNN based on the feature input comprises:
    inputting the feature input into the CNN; and
    outputting a bias value pair from the CNN, wherein the bias value pair is a latitude and a longitude offset relative to the center grid.
  16. The system of claim 15, wherein the processor is further configured to minimize a loss function representing a distance between the ground truth point and a prediction point.
  17. The system of claim 15, wherein the processor is further configured to determine a level of confidence of a positioning result based on the sparseness of the feature map.
  18. The system of claim 15, wherein determining a position of a terminal device using the trained CNN comprises:
    acquiring feature information of a terminal device; and
    predicting a position of the terminal device based on the feature information using the trained CNN.
  19. A non-transitory computer-readable medium that stores a set of instructions that, when executed by at least one processor of a positioning system, cause the positioning system to perform a method for positioning a terminal device, the method comprising:
    acquiring feature information that is received from one or more base stations at different locations in the area of interest;
    generating a feature input that includes a plurality of feature maps based on the feature information;
    training a convolutional neural network (CNN) based on the feature input; and
    determining a position of a terminal device using the trained CNN.
  20. The non-transitory computer-readable medium of claim 19, wherein acquiring feature information that is from one or more base stations at different locations in the area of interest comprises:
    dividing the area of interest into a plurality of grids to obtain a geographic grid set;
    generating a recall grid set through a predetermined recall strategy based on the geographic grid set, wherein the recall grid set includes a plurality of recall grids; and
    for each recall grid, collecting feature information received from the one or more base stations.
PCT/CN2019/122273 2019-11-30 2019-11-30 Base station positioning based on convolutional neural networks WO2021103027A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108594170A (en) * 2018-04-04 2018-09-28 合肥工业大学 WiFi indoor positioning method based on convolutional neural network recognition technology
WO2019036860A1 (en) * 2017-08-21 2019-02-28 Beijing Didi Infinity Technology And Development Co., Ltd. Positioning a terminal device based on deep learning
CN109743683A (en) * 2018-12-03 2019-05-10 北京航空航天大学 Method for determining mobile phone user position using a deep learning fusion network model
CN110166991A (en) * 2019-01-08 2019-08-23 腾讯大地通途(北京)科技有限公司 Method, apparatus and storage medium for positioning electronic devices

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115175306A (en) * 2022-06-24 2022-10-11 国网河南省电力公司经济技术研究院 Electric power Internet of things indoor positioning method based on convolutional neural network
CN115175306B (en) * 2022-06-24 2024-05-07 国网河南省电力公司经济技术研究院 Indoor positioning method of electric power Internet of things based on convolutional neural network
CN117405127A (en) * 2023-11-02 2024-01-16 深圳市天丽汽车电子科技有限公司 Navigation method, system, equipment and medium based on vehicle-mounted 5G antenna
CN117405127B (en) * 2023-11-02 2024-06-11 深圳市天丽汽车电子科技有限公司 Navigation method, system, equipment and medium based on vehicle-mounted 5G antenna

Similar Documents

Publication Publication Date Title
CN110322453B (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN106851571B (en) Decision tree-based rapid KNN indoor WiFi positioning method
Wu et al. A fast and resource efficient method for indoor positioning using received signal strength
CN109165540B (en) Pedestrian searching method and device based on prior candidate box selection strategy
CN110892760B (en) Positioning terminal equipment based on deep learning
CN107038717A (en) Method for automatically analyzing 3D point cloud registration error based on a three-dimensional grid
US9430872B2 (en) Performance prediction for generation of point clouds from passive imagery
CN109614935A (en) Car damage identification method and device, storage medium and electronic equipment
US11676375B2 (en) System and process for integrative computational soil mapping
CN113589306B (en) Positioning method, positioning device, electronic equipment and storage medium
WO2021103027A1 (en) Base station positioning based on convolutional neural networks
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
CN111475746B (en) Point-of-interest mining method, device, computer equipment and storage medium
CN112393735B (en) Positioning method and device, storage medium and electronic device
US11754704B2 (en) Synthetic-aperture-radar image processing device and image processing method
Mukhtar et al. Machine learning-enabled localization in 5G using LiDAR and RSS data
CN112862730A (en) Point cloud feature enhancement method and device, computer equipment and storage medium
CN116310899A (en) YOLOv 5-based improved target detection method and device and training method
CN115457202B (en) Method, device and storage medium for updating three-dimensional model
Wang et al. Joint visual and wireless signal feature based approach for high-precision indoor localization
CN113194401B (en) Millimeter wave indoor positioning method and system based on generative countermeasure network
Nie et al. Joint access point fuzzy rough set reduction and multisource information fusion for indoor Wi-Fi positioning
CN114863201A (en) Training method and device of three-dimensional detection model, computer equipment and storage medium
CN113269678A (en) Fault point positioning method for contact network transmission line
Önen et al. Occupancy grid mapping for automotive driving exploiting clustered sparsity

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954608

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19954608

Country of ref document: EP

Kind code of ref document: A1