CN113029146A - Navigation action prediction model training method, navigation action generation method and device - Google Patents

Navigation action prediction model training method, navigation action generation method and device

Info

Publication number
CN113029146A
CN113029146A (application CN202110230215.2A)
Authority
CN
China
Prior art keywords
navigation action
road section
prediction model
training
navigation
Prior art date
Legal status
Pending
Application number
CN202110230215.2A
Other languages
Chinese (zh)
Inventor
于志杰
Current Assignee
Beijing Bailong Mayun Technology Co ltd
Original Assignee
Beijing Bailong Mayun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Bailong Mayun Technology Co ltd filed Critical Beijing Bailong Mayun Technology Co ltd
Priority to CN202110230215.2A priority Critical patent/CN113029146A/en
Publication of CN113029146A publication Critical patent/CN113029146A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

A navigation action prediction model training method, a navigation action generation method and a navigation action generation device are disclosed. A training sample set is constructed, wherein the training sample set comprises one or more training samples, each training sample is image data of an incoming road segment and an outgoing road segment at an intersection, and the label of each training sample characterizes the navigation action from the incoming road segment to the outgoing road segment; a navigation action prediction model is then trained based on the training sample set. In this way, a navigation action prediction model can be obtained whose prediction accuracy is ensured and whose prediction results can be evaluated for accuracy.

Description

Navigation action prediction model training method, navigation action generation method and device
Technical Field
The disclosure relates to the field of road navigation, and in particular to a navigation action prediction model training method, a navigation action generation method and a navigation action generation device.
Background
Navigation action generation is an important component of navigation services.
The navigation action is a steering action that should be performed at the intersection, such as turning left, going straight, and the like.
The traditional navigation action generation method mainly intercepts an incoming road segment and an outgoing road segment and determines the navigation action to be executed at the intersection by computation over these two segments. Since the road network is very complex, the generated navigation action needs to be verified to obtain a reliable navigation action.
This navigation action generation method has the following problems: 1. the key road segment that determines the navigation action may lie outside the intercepted segments, so the accuracy of the generated navigation action cannot be ensured; 2. whether the generated navigation action is accurate depends on the calculation rules; if the rules are set improperly, an erroneous navigation action may be generated, and calculation errors occur easily at the boundary conditions of the rules; 3. verifying the generated navigation action relies on human participation, and the labor cost is high.
Therefore, a new navigation action generation method is needed to solve at least one of the above problems of the conventional navigation action generation method.
Disclosure of Invention
One technical problem to be solved by the present disclosure is to provide a new navigation action generation method to solve at least one of the above problems of the conventional navigation action generation method.
According to a first aspect of the present disclosure, there is provided a method of training a navigation action prediction model, comprising: constructing a training sample set, wherein the training sample set comprises one or more training samples, each training sample is image data of an incoming road segment and an outgoing road segment at an intersection, and the label of each training sample characterizes the navigation action from the incoming road segment to the outgoing road segment; and training a navigation action prediction model based on the training sample set.
According to a second aspect of the present disclosure, there is provided a navigation action generation method, comprising: acquiring a prediction sample, wherein the prediction sample is image data of an incoming road segment and an outgoing road segment at an intersection; and inputting the prediction sample into a pre-trained navigation action prediction model to obtain a prediction result output by the model, the prediction result characterizing the navigation action from the incoming road segment to the outgoing road segment.
According to a third aspect of the present disclosure, there is provided a navigation action generation method, comprising: generating a first navigation action based on the included angle between an incoming road segment and an outgoing road segment at the intersection; inputting image data containing the incoming road segment and the outgoing road segment into a pre-trained navigation action prediction model to obtain a second navigation action; comparing whether the first navigation action is consistent with the second navigation action; and if the first navigation action is consistent or substantially consistent with the second navigation action, taking the first navigation action or the second navigation action as the navigation action from the incoming road segment to the outgoing road segment.
According to a fourth aspect of the present disclosure, there is provided an apparatus for training a navigation action prediction model, comprising: a construction module for constructing a training sample set, wherein the training sample set comprises one or more training samples, each training sample is image data of an incoming road segment and an outgoing road segment at an intersection, and the label of each training sample characterizes the navigation action from the incoming road segment to the outgoing road segment; and a training module for training the navigation action prediction model based on the training sample set.
According to a fifth aspect of the present disclosure, there is provided a navigation action generating apparatus comprising: an acquisition module for acquiring a prediction sample, wherein the prediction sample comprises image data of an incoming road segment and an outgoing road segment at an intersection; and a prediction module for inputting the prediction sample into a pre-trained navigation action prediction model to obtain a prediction result output by the model, the prediction result characterizing the navigation action from the incoming road segment to the outgoing road segment.
According to a sixth aspect of the present disclosure, there is provided a navigation action generating apparatus comprising: a first generation module for generating a first navigation action based on the included angle between an incoming road segment and an outgoing road segment at the intersection; a second generation module for inputting image data containing the incoming road segment and the outgoing road segment into a pre-trained navigation action prediction model to obtain a second navigation action; a comparison module for comparing whether the first navigation action is consistent with the second navigation action; and a determination module for taking the first navigation action or the second navigation action as the navigation action from the incoming road segment to the outgoing road segment if the first navigation action is consistent or substantially consistent with the second navigation action.
According to a seventh aspect of the present disclosure, there is provided a computing device comprising: a processor; and a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any of the first to third aspects as described above.
According to an eighth aspect of the present disclosure, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method of any of the first to third aspects described above.
In this way, a navigation action prediction model for predicting navigation actions is trained on the basis of image data of the incoming road segment and the outgoing road segment at the intersection, and the trained model predicts the navigation action by analyzing pictures. Compared with other types of data, a picture contains almost all of the features needed to determine the navigation action, so the trained navigation action prediction model achieves high prediction accuracy, and the accuracy of its prediction results can be evaluated.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a schematic flow diagram of a method of training a navigation action prediction model according to one embodiment of the present disclosure.
Fig. 2A shows a schematic view of an incoming route section and an outgoing route section.
Fig. 2B shows a schematic diagram of the training sample constructed for fig. 2A.
Fig. 3 shows a schematic diagram of the operational steps that the method of fig. 1 may also comprise.
Fig. 4 shows a schematic flow chart of a navigation action generation method according to an embodiment of the present disclosure.
Fig. 5 shows a schematic flow chart of a navigation action generation method according to another embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an apparatus for training a navigation action prediction model according to an embodiment of the present disclosure.
Fig. 7 shows a schematic structural diagram of a navigation action generation apparatus according to an embodiment of the present disclosure.
Fig. 8 shows a schematic structural diagram of a navigation action generation apparatus according to another embodiment of the present disclosure.
FIG. 9 shows a schematic structural diagram of a computing device, according to one embodiment of the present disclosure.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic flow diagram of a method of training a navigation action prediction model according to one embodiment of the present disclosure. The method shown in fig. 1 may be implemented in software by means of a computer program, and the method shown in fig. 1 may also be performed by means of a specifically configured computing device.
Referring to fig. 1, in step S110, a training sample set is constructed, where the training sample set includes one or more training samples, each training sample is image data of an intersection including an incoming road segment and an outgoing road segment, and the label of each training sample characterizes the navigation action from the incoming road segment to the outgoing road segment.
An intersection refers to a place where two or more roads meet. The incoming road segment is the stretch of road traversed before reaching the intersection, that is, the stretch lying behind the intersection in the direction of travel. The outgoing road segment is the stretch of road traversed after passing the intersection, that is, the stretch lying ahead of the intersection in the direction of travel.
The road segments connected to the intersection may typically include a plurality of other road segments in addition to the incoming road segments and outgoing road segments. The irregular intersection shown in fig. 2A is an example, and includes two other road segments connected to the intersection in addition to the incoming road segment and the outgoing road segment shown in the drawing.
To prevent information that is useless for model prediction from interfering with model training, the training samples may be image data that contains the incoming road segment and the outgoing road segment at the intersection but excludes the other road segments there. That is, apart from the incoming road segment and the outgoing road segment, the image data does not include the other road segments connected to the intersection.
As an example, the training sample may be image data including the incoming road segment, the outgoing road segment, and the road portion connecting them (i.e., the intersection). For example, the training sample constructed for the irregular intersection shown in fig. 2A may be a road image, as shown in fig. 2B, containing the incoming road segment, the outgoing road segment, and the intersection connecting them.
In constructing the training sample set, image data including an incoming road segment and an outgoing road segment may be constructed in one or more ways and added to the training sample set as training samples. For example, the training sample set may be constructed in one or more of, but is not limited to, the following exemplary manners.
As an example, a road image including the incoming road segment and the outgoing road segment may be extracted from map data and added to the training sample set as a training sample. Such a road image can be cut out of existing map data by screenshot-style cropping; where other road segments exist at the intersection connecting the incoming and outgoing segments, a road image containing only the incoming road segment, the outgoing road segment, and the intersection connecting them can be cropped out. The map data may be road image data provided by various map software, such as live-action road images or simulated road images.
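As an illustrative sketch of this cropping step (not code from the patent), the sub-image covering both segments could be cut out of a larger map image as follows, where `map_image` and `segment_coords` are hypothetical stand-ins for a rendered map tile and the pixel coordinates of the incoming and outgoing road segments:

```python
import numpy as np

def crop_road_image(map_image, segment_coords, margin=10):
    """Crop the sub-image covering the incoming and outgoing road
    segments, given as (row, col) pixel coordinates, plus a margin.
    Bounds are clamped so the crop never leaves the map image."""
    coords = np.asarray(segment_coords)
    r0 = max(int(coords[:, 0].min()) - margin, 0)
    r1 = min(int(coords[:, 0].max()) + margin, map_image.shape[0])
    c0 = max(int(coords[:, 1].min()) - margin, 0)
    c1 = min(int(coords[:, 1].max()) + margin, map_image.shape[1])
    return map_image[r0:r1, c0:c1]
```

In practice the crop would also need to mask out unrelated road segments at the intersection, as the paragraph above requires; that step is omitted here for brevity.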
As an example, image data obtained by photographing a road including an incoming link and an outgoing link may also be added to the training sample set as a training sample. For example, an image of a road captured by a camera provided at a road intersection may be acquired, and image data including an incoming link and an outgoing link may be obtained by cutting out the image of the road. In the case where there are other links at the intersection connected to the incoming link and the outgoing link, an image portion including only the incoming link, the outgoing link, and the intersection connected to the incoming link and the outgoing link may be cut out from the road image as a training sample.
As an example, a picture including the incoming road segment and the outgoing road segment may be generated based on the incoming road segment information and the outgoing road segment information, and the picture may be added as a training sample to the training sample set. The information of the incoming road section and the information of the outgoing road section may be any form of information capable of representing the incoming road section and the outgoing road section, such as but not limited to text description information of the incoming road section and the outgoing road section.
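To make the picture-generation idea concrete, a toy rasterizer (an illustrative assumption — the patent does not specify how pictures are produced from segment information) could draw the two segments, each given as a pair of (x, y) endpoints in normalized [0, 1] coordinates, onto a blank image:

```python
import numpy as np

def render_segments(in_segment, out_segment, size=64):
    """Rasterize the incoming and outgoing road segments onto a blank
    size x size grayscale image by sampling points along each segment.
    A minimal stand-in for generating training pictures from records."""
    img = np.zeros((size, size), dtype=np.uint8)
    for (x0, y0), (x1, y1) in (in_segment, out_segment):
        # Sample enough points along the segment to leave no gaps.
        for t in np.linspace(0.0, 1.0, num=2 * size):
            r = int(round((y0 + t * (y1 - y0)) * (size - 1)))
            c = int(round((x0 + t * (x1 - x0)) * (size - 1)))
            img[r, c] = 255
    return img
```

A real pipeline would render from actual road geometry with anti-aliasing and consistent scale; this sketch only shows the shape of the transformation from segment information to image data.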
The labels of the training samples characterize the true navigation action from the incoming road segment to the outgoing road segment. The labels may be obtained by, but are not limited to, human annotation. For example, multiple training samples (that is, image data including an incoming road segment and an outgoing road segment) may be packaged into an annotation task and delivered to an annotation platform, which distributes them to annotators; alternatively, the training samples may be handed directly to annotators for labeling.
In step S120, a navigation action prediction model is trained based on the training sample set.
The training samples (namely, image data including the incoming road segment and the outgoing road segment) can be used as model input and the labels of the training samples (namely, the navigation actions) as expected model output, so that the navigation action prediction model is trained in a supervised learning manner; the specific training process is a mature technique in the field and is not repeated here.
From the perspective of model structure, the navigation action prediction model may be any machine learning model suitable for processing image data, such as, but not limited to, a deep learning model based on a deep learning algorithm. Alternatively, the navigation action prediction model may be a deep learning model based on a residual network algorithm, which performs strongly in the image domain, i.e., a residual network model.
Deep learning is a complex class of machine learning algorithms that learns more useful features by building models with many hidden layers and training them on massive data, thereby ultimately improving the accuracy of classification or prediction.
The residual network is one of the mainstream models in the image field. It is a convolutional neural network built by stacking residual blocks, which effectively alleviates the vanishing-gradient problem and allows the depth of the network model to be greatly increased.
From the perspective of the model prediction mechanism, the navigation action prediction model can be considered as a multi-classification model. The input of the navigation action prediction model may be image data (i.e., prediction samples described below) including an incoming road segment and an outgoing road segment, and the navigation action prediction model may process the input to obtain probability values corresponding to different navigation actions and output the navigation action with the highest probability value as a prediction result of the prediction sample, i.e., the predicted navigation action from the incoming road segment to the outgoing road segment.
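A minimal sketch of this multi-class read-out, assuming a four-action label set (the patent does not enumerate the actual set of navigation actions, so `ACTIONS` is illustrative):

```python
import numpy as np

ACTIONS = ["turn_left", "go_straight", "turn_right", "u_turn"]  # assumed label set

def predict_action(logits):
    """Turn raw model scores into per-action probabilities via a
    numerically stable softmax, then pick the most probable action,
    mirroring the multi-class mechanism described above."""
    z = np.asarray(logits, dtype=float)
    probs = np.exp(z - z.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return ACTIONS[int(np.argmax(probs))], probs
```

Here `logits` stands in for the final-layer output that the trained model would produce for one prediction sample.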
As an example, the parameters of the navigation action prediction model may be initialized randomly at the start of training. In each training iteration, one or more training samples are input into the model, the obtained prediction results are compared with the labels of the training samples to obtain the value of a loss function, and the model is then adjusted with the goal of reducing the loss before the next iteration begins, until the loss function converges. The loss function evaluates how far the model's predicted values deviate from the true values; using a loss function to evaluate model performance is a mature technique in the field and is not repeated here.
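The iteration described above can be sketched with a linear softmax classifier standing in for the real model — a deliberate simplification, since the patent's model is a deep residual network, and the convergence threshold here is an assumed value:

```python
import numpy as np

def train_classifier(X, y, n_classes, lr=0.5, epochs=200):
    """Supervised-learning loop of the kind described above: forward
    pass, cross-entropy loss against the labels, gradient step, repeat
    until the loss stops improving."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))  # random init
    onehot = np.eye(n_classes)[y]
    prev_loss = np.inf
    for _ in range(epochs):
        z = X @ W
        z -= z.max(axis=1, keepdims=True)        # stable softmax
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        loss = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
        if prev_loss - loss < 1e-6:              # loss has converged
            break
        prev_loss = loss
        W -= lr * X.T @ (p - onehot) / len(y)    # cross-entropy gradient
    return W, loss
```

Replacing the linear map `X @ W` with a residual network's forward pass gives the training scheme the patent actually describes.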
In order to improve the generalization capability of the navigation action prediction model, the present disclosure may further construct a test sample set, where the test sample set includes one or more test samples, each test sample is image data including an incoming road segment and an outgoing road segment at an intersection, and the label of each test sample characterizes the true navigation action from the incoming road segment to the outgoing road segment (for ease of distinction, the first true navigation action). The test samples and the training samples are image data of different incoming and outgoing road segments, that is, they are sample data constructed for different incoming and outgoing road segments. The test sample set may be constructed in the same way as the training sample set described above, which is not repeated here.
In the process of training the navigation action prediction model based on the training sample set, the test sample may be input into the navigation action prediction model, and a prediction result of the test sample output by the navigation action prediction model, that is, a predicted navigation action (for convenience of distinction, may be referred to as a first predicted navigation action) is obtained. The navigation action prediction model may be adjusted based on a difference between the first predicted navigation action and the first true navigation action, and the adjusted navigation action prediction model may be trained based on a set of training samples.
The difference between the first predicted navigation action and the first real navigation action may reflect the performance of the navigation action prediction model trained on the training sample set on the test sample set, that is, the test sample set may be used to perform a preliminary evaluation on the capability of the navigation action prediction model trained on the training sample set, for example, whether the navigation action prediction model is over-fitted or a generalization error of the navigation action prediction model may be evaluated during or after training. And adjusting the navigation action prediction model according to the evaluation result, wherein the adjusting of the navigation action prediction model mainly refers to adjusting the hyper-parameters of the navigation action prediction model, and optionally, the model parameters of the navigation action prediction model can also be adjusted. The hyper-parameters refer to parameters set before the learning process is started, and are not parameter data obtained by training a model, such as learning rate, the number of hidden layers of a deep neural network and the like.
For example, when the difference between the first predicted navigation action and the first true navigation action indicates that the navigation action prediction model diverges on the test sample set, or that the mAP (Mean Average Precision) does not grow or grows very slowly, training may be terminated in time and the hyper-parameters or model parameters readjusted, without waiting for training to finish.
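This "terminate training in time" behavior can be sketched as a simple patience check; the `patience` and `min_delta` values are illustrative assumptions, not thresholds from the patent:

```python
def should_stop(metric_history, patience=3, min_delta=1e-3):
    """Return True when the test-set metric (e.g. mAP) has not improved
    by at least `min_delta` over the last `patience` evaluations,
    signalling that training should stop and hyper-parameters be
    readjusted."""
    if len(metric_history) <= patience:
        return False
    best_before = max(metric_history[:-patience])
    recent_best = max(metric_history[-patience:])
    return recent_best - best_before < min_delta
```

Calling `should_stop` after each evaluation on the test sample set implements the early termination the paragraph above describes.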
The present disclosure may further construct a verification sample set, where the verification sample set includes one or more verification samples, the verification samples are image data including an incoming road segment and an outgoing road segment, and marks of the verification samples are used to characterize a real navigation action (which may be referred to as a second real navigation action for convenience of distinction) from the incoming road segment to the outgoing road segment. The verification sample, the test sample and the training sample are all image data of different driving-in road sections and driving-out road sections, namely the verification sample, the test sample and the training sample are sample data constructed for the different driving-in road sections and the different driving-out road sections respectively. The verification sample set may be constructed by using the above-mentioned construction method of the training sample set, and the detailed description of the disclosure is omitted here.
As shown in fig. 3, after the navigation action prediction model is adjusted based on the test sample set and the adjusted navigation action prediction model is trained based on the training sample set, the verification sample may be input into the trained navigation action prediction model, so as to obtain the prediction result of the verification sample output by the navigation action prediction model, that is, the predicted navigation action (for the convenience of distinguishing, it may be referred to as a second predicted navigation action), and the navigation action prediction model is evaluated based on the difference between the second predicted navigation action and the second true navigation action. Wherein the navigation action prediction model, such as the accuracy of the navigation action prediction model, may be evaluated comprehensively based on the difference between the second predicted navigation action and the second true navigation action of the plurality of verification samples.
The evaluation result may be compared with a preset iteration termination condition, and if the evaluation result does not satisfy the iteration termination condition, the step of adjusting the navigation action prediction model based on the test sample set, the step of training the adjusted navigation action prediction model based on the training sample set, and the step of evaluating the navigation action prediction model based on the verification sample set are iteratively performed until the evaluation result satisfies the iteration termination condition. The evaluation result may be, but is not limited to, an index capable of characterizing the performance of the navigation action prediction model, such as accuracy, and the iteration termination condition may be that the accuracy reaches a threshold.
In one particular embodiment of the present disclosure, a training sample set, a test sample set, and a validation sample set may be constructed first. For example, a certain amount of original information about intersection incoming and outgoing road segments can be randomly extracted nationwide; road-condition pictures containing the incoming and outgoing road segments are generated based on this original information; these two steps are repeated three times, and the three groups of generated pictures are taken as the training sample set, the test sample set, and the validation sample set respectively. The number of samples in each of the three sets can be set as required. The sample labels can be obtained by annotation.
After the training sample set, the test sample set, and the validation sample set are obtained, a residual network image algorithm can be adopted: the parameters (hyper-parameters) are continuously adjusted based on the training sample set and the test sample set, and the model is trained with the adjusted parameters until the loss function converges; the model is then verified on the validation sample set. These steps are repeated until a residual network model (corresponding to the above-mentioned navigation action prediction model) whose verification result (corresponding to the above-mentioned evaluation result) meets the requirements is obtained.
The navigation action prediction model obtained by the above method predicts the navigation action by analyzing pictures. Compared with other types of data, a picture contains almost all of the features needed to determine the navigation action, so the trained model achieves high prediction accuracy, and the accuracy of its prediction results can be evaluated.
The navigation action prediction model obtained based on the method can be used for predicting the navigation action from the driving-in road section to the driving-out road section. For example, the navigation action prediction model may be applied to a navigation system (such as navigation software installed on a device side), and the navigation system may generate a navigation action for an incoming road segment and an outgoing road segment at an intersection when a user travels or is about to travel to the intersection in the process of generating path navigation information according to a start position and an end position and providing a navigation service for the user, so as to guide the user to travel from the incoming road segment to the outgoing road segment. For another example, the navigation system may also generate the navigation action in advance by using the navigation action prediction model of the present disclosure for various traveling routes at each intersection within a predetermined geographic range, so that when providing an online navigation service for a user, the navigation action generated in advance may be directly obtained according to the location of the user without online generation, thereby improving the navigation experience of the user.
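The pre-generation strategy in the last example can be sketched as follows: navigation actions for every (incoming, outgoing) pair at the intersections within a geographic range are computed offline and served by lookup at navigation time. `predict_action` stands in for the trained model; all names are illustrative, not from the disclosure.

```python
def build_action_cache(road_pairs, predict_action):
    """Precompute navigation actions for (incoming, outgoing) segment pairs."""
    return {(inc, out): predict_action(inc, out) for inc, out in road_pairs}

def lookup_action(cache, incoming, outgoing, predict_action):
    """Serve a cached action if present; fall back to online prediction."""
    key = (incoming, outgoing)
    if key in cache:
        return cache[key]
    return predict_action(incoming, outgoing)
```

Serving from the cache avoids running the model online for known intersections, which is the navigation-experience benefit the paragraph describes.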
The navigation action prediction model obtained based on the above method can also be combined with other navigation action generation methods: the navigation actions obtained by those other methods are verified by the model in place of manual checking, which saves labor cost while improving the reliability of the verification result.
Fig. 4 shows a schematic flow chart of a navigation action generation method according to an embodiment of the present disclosure. The method shown in fig. 4 may be performed by a navigation system providing a navigation service for a user, and the navigation system may be, but is not limited to, navigation software installed on a mobile phone, a vehicle, or the like.
Referring to fig. 4, in step S410, prediction samples are acquired, the prediction samples being image data containing an incoming road section and an outgoing road section at an intersection. The prediction samples can be constructed using the training-sample construction method described above, which is not repeated here.
In step S420, the prediction sample is input into a pre-trained navigation action prediction model, and a prediction result output by the navigation action prediction model is obtained, where the prediction result is used to represent a navigation action from the incoming road segment to the outgoing road segment.
From the perspective of model structure, the navigation action prediction model may be any machine learning model suitable for processing image data, such as, but not limited to, a deep learning model based on a deep learning algorithm. Alternatively, the navigation action prediction model may be a deep learning model based on a residual network algorithm, i.e., a residual network model, which performs well in the image domain.
From the perspective of the model prediction mechanism, the navigation action prediction model can be considered as a multi-classification model. The navigation action prediction model can obtain probability values corresponding to different navigation actions by processing input prediction samples, and outputs the navigation action with the maximum probability value as a prediction result of the prediction samples, namely the predicted navigation action from the driving-in road section to the driving-out road section.
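The multi-classification mechanism described above — probability values per navigation action, with the maximum taken as the prediction — can be sketched as follows. The action label set and the softmax-over-logits formulation are illustrative assumptions, not specified by the disclosure.

```python
import math

ACTIONS = ["go_straight", "turn_left", "turn_right",
           "u_turn_left", "u_turn_right"]  # illustrative label set

def softmax(logits):
    """Convert raw model scores into probability values that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_navigation_action(logits):
    """Return the action with the maximum probability value, as the
    multi-classification model described above does."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return ACTIONS[best], probs[best]
```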
The navigation action prediction model may be obtained by training using the training method described in the present disclosure above in conjunction with fig. 1, and the specific training process may be referred to as the above related description.
As an example, when the user travels or is about to travel to an intersection, the navigation action prediction model of the present disclosure may be used to generate navigation actions for an incoming road segment and an outgoing road segment at the intersection, and after the navigation action is obtained, prompt information corresponding to the navigation action may be generated and output to prompt the user about the navigation action that should be followed when passing through the intersection. The prompt message may be, but is not limited to, one or more forms of text, voice, and image.
Fig. 5 shows a schematic flow chart of a navigation action generation method according to another embodiment of the present disclosure. The method shown in fig. 5 may be performed by a navigation system that provides navigation services to a user, and the navigation system may be, but is not limited to, navigation software installed on a mobile phone, a vehicle, or the like.
The present disclosure does not limit the execution order of step S510 and step S520: step S510 may be executed before step S520, step S520 may be executed before step S510, or the two steps may be executed simultaneously.
Referring to fig. 5, in step S510, a first navigation action is generated based on an angle between an incoming road segment and an outgoing road segment at an intersection.
The included angle between the driving-in section and the driving-out section may be an included angle between a road traveling direction represented by the driving-in section and a road traveling direction represented by the driving-out section.
The navigation action from the incoming road section to the outgoing road section (i.e., the first navigation action) may be generated according to the magnitude of the included angle between the incoming road section and the outgoing road section and the relative orientation relationship between the incoming road section and the outgoing road section.
As an example, a plurality of angle sections may be set in advance, and the first navigation action may be generated according to the angle section in which the included angle between the incoming road section and the outgoing road section is located, and with reference to the relative azimuth relationship between the incoming road section and the outgoing road section.
For example, if the included angle is within the range of 0-45 degrees, a straight navigation action can be generated; if the included angle is within the range of 45-135 degrees and the driving-out road section is on the right side of the driving-in road section, a navigation action of turning right can be generated; if the included angle is in the range of 135-180 degrees and the driving-out road section is on the right side of the driving-in road section, a navigation action of turning around the right road can be generated; if the included angle is within the range of 45-135 degrees and the driving-out road section is on the left side of the driving-in road section, a left-turning navigation action can be generated; if the included angle is in the range of 135-180 degrees and the driving-out road section is on the left side of the driving-in road section, a navigation action of turning around the left road can be generated.
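The angle-interval rules above can be sketched directly. How the boundary values (45, 135 degrees) are assigned to intervals is not specified by the disclosure; the sketch below makes the upper bound of each interval inclusive as one possible choice.

```python
def first_navigation_action(angle_deg, exit_side):
    """Generate the first navigation action from the included angle (in
    degrees, 0-180) and the side ('left'/'right') of the outgoing road
    section relative to the incoming one, per the intervals above."""
    if angle_deg < 0 or angle_deg > 180:
        raise ValueError("included angle must be within 0-180 degrees")
    if angle_deg <= 45:
        return "go_straight"
    if angle_deg <= 135:
        return "turn_right" if exit_side == "right" else "turn_left"
    return "u_turn_right" if exit_side == "right" else "u_turn_left"
```

Note that a reading near a boundary (e.g., an included angle of 44.9 vs. 45.1 degrees) flips the output, which is exactly the critical-point weakness discussed next.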
The above-mentioned method for generating navigation actions based on the included angle may, on the one hand, generate wrong navigation actions near the critical points of the angle ranges; on the other hand, because road conditions in the real world are relatively complex, the accuracy of navigation actions generated by angle calculation alone cannot be guaranteed.
To this end, the present embodiment proposes that the navigation action generated based on the above-described manner may be verified using a navigation action prediction model to discover and/or correct erroneous navigation actions.
Specifically, in step S520, the image data including the incoming road segment and the outgoing road segment is input into the pre-trained navigation motion prediction model, so as to obtain the second navigation motion.
That is, the image data including the incoming road section and the outgoing road section may be used as a prediction sample and input into a previously trained navigation action prediction model, and the prediction result output by the navigation action prediction model (i.e., the second navigation action) is obtained.
In step S530, it is compared whether the first navigation action and the second navigation action are consistent.
If the first navigation action is consistent or substantially consistent with the second navigation action, both the first navigation action generated by the included-angle-based scheme and the second navigation action generated by the navigation action prediction model can be considered accurate. Therefore, in this case, the first navigation action or the second navigation action may be used as the navigation action from the incoming road section to the outgoing road section (step S540).
If the first navigation action and the second navigation action are inconsistent, at least one of them is inaccurate (or even both are inaccurate). In this case, the incoming road section and the outgoing road section can be marked as suspect samples, and the suspect samples can be handed over to relevant personnel to determine, by manual re-checking, the navigation action from the incoming road section represented by each suspect sample to the corresponding outgoing road section.
Therefore, the navigation action prediction model can be used to verify the navigation action obtained by the included-angle-based generation scheme (i.e., the first navigation action), so as to solve the problem that wrong navigation actions could previously only be discovered through road testing or user feedback.
Taking the navigation action generating method of the embodiment of the present disclosure executed by a navigation system providing navigation services as an example, when a navigation action from an entry road segment to an exit road segment at an intersection is generated, the navigation system may obtain a first navigation action and a second navigation action by using a navigation action generating manner based on an included angle and a navigation action generating manner based on a navigation action prediction model, and then compare the first navigation action and the second navigation action, if the first navigation action and the second navigation action are consistent, the navigation action may be considered to be accurate, and may be stored in a database in association with the entry road segment and the exit road segment. If the two are not consistent, the driving-in road section and the driving-out road section can be marked as suspect samples, and the navigation action from the driving-in road section to the driving-out road section represented by the suspect samples can be confirmed in a manual rechecking mode for the suspect samples. The navigation system can generate the navigation action in advance according to various driving paths at each intersection in the preset geographic range, so that the pre-generated navigation action can be directly acquired according to the position of the user without online generation when the online navigation service is provided for the user, and the navigation experience of the user can be improved.
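The comparison-and-marking flow of steps S530 and S540 can be sketched as follows; the function and parameter names are illustrative.

```python
def cross_check(incoming, outgoing, angle_action, model_action, suspects):
    """Compare the angle-based first navigation action with the model's
    second navigation action; keep the action if they agree, otherwise
    mark the road-section pair as a suspect sample for manual re-check."""
    if angle_action == model_action:
        return angle_action           # accurate; store with the road sections
    suspects.append((incoming, outgoing, angle_action, model_action))
    return None                       # defer to manual re-checking
```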
The method for training the navigation action prediction model can be realized as a device for training the navigation action prediction model. Fig. 6 is a schematic structural diagram of an apparatus for training a navigation action prediction model according to an embodiment of the present disclosure. Wherein the functional elements of the means for training a predictive model of navigational motion may be implemented by hardware, software, or a combination of hardware and software implementing the principles of the present disclosure. It will be appreciated by those skilled in the art that the functional units described in fig. 6 may be combined or divided into sub-units to implement the principles of the invention described above. Thus, the description herein may support any possible combination, or division, or further definition of the functional units described herein.
In the following, brief descriptions are given to functional units that can be provided by the apparatus for training a navigation motion prediction model and operations that can be performed by each functional unit, and details related thereto may be referred to the above description, and are not repeated here.
Referring to fig. 6, an apparatus 600 for training a predictive model of navigational action may include a construction module 610 and a training module 620.
The construction module 610 is configured to construct a training sample set, where the training sample set includes one or more training samples, the training samples are image data of an incoming road section and an outgoing road section at a road junction, and a mark of the training sample is used to represent a navigation action from the incoming road section to the outgoing road section. For the construction of the training sample set, see the above related description.
The training module 620 is used for training the navigation action prediction model based on the training sample set.
As an example, the construction module 610 may also construct a test sample set, with respect to which reference may be made to the above-mentioned related description. The apparatus 600 for training a predictive model of navigational action may further comprise an input module and an adjustment module. The input module is used for inputting the test sample into the navigation action prediction model to obtain a first predicted navigation action of the test sample output by the navigation action prediction model; the adjustment module is configured to adjust the navigation action prediction model based on a difference between the first predicted navigation action and the first true navigation action. The training module 620 is configured to train the adjusted navigation action prediction model based on the training sample set.
Optionally, the construction module 610 may also construct a validation sample set, which may be referred to above in relation to the relevant description. The apparatus 600 for training a navigation action prediction model may further comprise an evaluation module. The input module is used for inputting the verification sample into the trained navigation action prediction model after training the adjusted navigation action prediction model based on the training sample set, and obtaining a second predicted navigation action of the verification sample output by the navigation action prediction model; an evaluation module is configured to evaluate the navigation action prediction model based on a difference between the second predicted navigation action and the second true navigation action.
Optionally, the apparatus 600 for training a navigation action prediction model may further include an iteration module, configured to instruct the adjustment module, the training module 620, and the evaluation module to iteratively perform the step of adjusting the navigation action prediction model based on the test sample set, the step of training the adjusted navigation action prediction model based on the training sample set, and the step of evaluating the navigation action prediction model based on the verification sample set until the evaluation result satisfies the iteration termination condition.
The navigation action generation method of the present disclosure may be implemented as a navigation action generation apparatus.
Fig. 7 shows a schematic structural diagram of a navigation action generation apparatus according to an embodiment of the present disclosure. The functional units of the navigation action generating device may be implemented by hardware, software, or a combination of hardware and software implementing the principles of the present disclosure. It will be appreciated by those skilled in the art that the functional units described in fig. 7 may be combined or divided into sub-units to implement the principles of the invention described above. Thus, the description herein may support any possible combination, or division, or further definition of the functional units described herein.
In the following, functional units that the navigation action generating device can have and operations that each functional unit can perform are briefly described, and for the details related thereto, reference may be made to the above-mentioned related description, which is not repeated herein.
Referring to fig. 7, the navigation action generating apparatus 700 may include an obtaining module 710 and a predicting module 720. The obtaining module 710 is configured to obtain a prediction sample, where the prediction sample is image data of an incoming road segment and an outgoing road segment at an intersection. The prediction module 720 is configured to input the prediction sample into a pre-trained navigation action prediction model to obtain a prediction result output by the navigation action prediction model, where the prediction result is used to represent a navigation action from the driving-in road segment to the driving-out road segment. The navigation action prediction model may be obtained by using the training method mentioned above in the present disclosure.
The navigation action generating device 700 may further include a generating module and an output module. The generating module is used for generating prompt information corresponding to the navigation action; the output module is used for outputting the prompt message.
Fig. 8 shows a schematic structural diagram of a navigation action generation apparatus according to another embodiment of the present disclosure. The functional units of the navigation action generating device may be implemented by hardware, software, or a combination of hardware and software implementing the principles of the present disclosure. It will be appreciated by those skilled in the art that the functional units described in fig. 8 may be combined or divided into sub-units to implement the principles of the invention described above. Thus, the description herein may support any possible combination, or division, or further definition of the functional units described herein.
In the following, functional units that the navigation action generating device can have and operations that each functional unit can perform are briefly described, and for the details related thereto, reference may be made to the above-mentioned related description, which is not repeated herein.
Referring to fig. 8, the navigation action generating apparatus 800 may include a first generating module 810, a second generating module 820, a comparing module 830, and a determining module 840.
The first generating module 810 is configured to generate a first navigation action based on an included angle between an incoming road segment and an outgoing road segment at an intersection; the second generating module 820 is configured to input image data including an incoming road segment and an outgoing road segment into a pre-trained navigation action prediction model to obtain a second navigation action; the comparing module 830 is configured to compare whether the first navigation action and the second navigation action are consistent; the determining module 840 is configured to use the first navigation action or the second navigation action as a navigation action from the incoming road segment to the outgoing road segment if the first navigation action is consistent or substantially consistent with the second navigation action. The navigation action prediction model may be obtained by using the training method mentioned above in the present disclosure.
The navigation action generation apparatus 800 may further include a marking module for marking the inbound section and the outbound section as suspect samples if the first navigation action is inconsistent with the second navigation action.
Fig. 9 is a schematic structural diagram of a computing device that can be used to implement the navigation action prediction model training method or the navigation action generation method according to an embodiment of the present disclosure.
Referring to fig. 9, computing device 900 includes memory 910 and processor 920.
The processor 920 may be a multi-core processor or may include multiple processors. In some embodiments, processor 920 may include a general-purpose main processor and one or more special purpose coprocessors such as a Graphics Processor (GPU), Digital Signal Processor (DSP), or the like. In some embodiments, processor 920 may be implemented using custom circuits, such as Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs).
The memory 910 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions required by the processor 920 or other modules of the computer. The permanent storage may be a readable and writable storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the permanent storage. In other embodiments, the permanent storage may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a readable and writable memory device or a volatile readable and writable memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. In addition, the memory 910 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory 910 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density disc, a flash memory card (e.g., an SD card, a mini SD card, or a Micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 910 has executable code stored thereon, which when processed by the processor 920, causes the processor 920 to perform the navigation action prediction model training method or the navigation action generation method described above.
The navigation action prediction model training method, the navigation action generation method, the apparatus, and the computing device according to the present disclosure have been described in detail above with reference to the accompanying drawings.
Furthermore, the method according to the present disclosure may also be implemented as a computer program or computer program product comprising computer program code instructions for performing the above-mentioned steps defined in the above-mentioned method of the present disclosure.
Alternatively, the present disclosure may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the various steps of the above-described method according to the present disclosure.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. A method of training a navigation action prediction model, comprising:
constructing a training sample set, wherein the training sample set comprises one or more training samples, the training samples are image data of an incoming road section and an outgoing road section at a road junction, and marks of the training samples are used for representing navigation actions from the incoming road section to the outgoing road section;
and training a navigation action prediction model based on the training sample set.
2. The method of claim 1, wherein,
the navigation action prediction model is a deep learning model.
3. The method of claim 2, wherein,
the navigation action prediction model is a deep learning model based on a residual error network algorithm.
4. The method of claim 1, the step of constructing a training sample set comprising:
extracting road images of an incoming road section and an outgoing road section at a road junction from map data, and adding the road images as training samples to a training sample set; and/or
Adding image data obtained by shooting roads including an entrance road section and an exit road section at a road junction as training samples to a training sample set; and/or
Generating a picture containing the driving-in road section and the driving-out road section based on the driving-in road section information and the driving-out road section information at the intersection, and adding the picture as a training sample to a training sample set.
5. The method of claim 1, further comprising:
constructing a test sample set, wherein the test sample set comprises one or more test samples, the test samples are image data of an incoming road section and an outgoing road section at a road junction, and marks of the test samples are used for representing a first real navigation action from the incoming road section to the outgoing road section;
inputting the test sample into the navigation action prediction model to obtain a first predicted navigation action of the test sample output by the navigation action prediction model;
adjusting the navigation action prediction model based on a difference between the first predicted navigation action and the first true navigation action; and
training the adjusted navigation action prediction model based on the training sample set.
6. The method of claim 5, further comprising:
constructing a verification sample set, wherein the verification sample set comprises one or more verification samples, the verification samples are image data of an incoming road section and an outgoing road section at a road junction, and marks of the verification samples are used for representing a second real navigation action from the incoming road section to the outgoing road section;
after training the adjusted navigation action prediction model based on the training sample set, inputting the verification sample into the trained navigation action prediction model to obtain a second predicted navigation action of the verification sample output by the navigation action prediction model;
evaluating the navigation action prediction model based on a difference between the second predicted navigation action and the second true navigation action.
7. The method of claim 6, further comprising:
iteratively executing the step of adjusting the navigation action prediction model based on the test sample set, the step of training the adjusted navigation action prediction model based on the training sample set, and the step of evaluating the navigation action prediction model based on the verification sample set until the evaluation result meets the iteration termination condition.
8. A navigation action generation method, comprising:
acquiring a prediction sample, wherein the prediction sample is image data of an entrance road section and an exit road section at a road junction;
and inputting the prediction sample into a pre-trained navigation action prediction model to obtain a prediction result output by the navigation action prediction model, wherein the prediction result is used for representing the navigation action from the driving-in road section to the driving-out road section.
9. The method of claim 8, wherein,
the navigation action prediction model is obtained by using the method of any one of claims 1 to 7.
10. The method of claim 8, further comprising:
generating prompt information corresponding to the navigation action;
and outputting the prompt information.
11. A navigation action generation method, comprising:
generating a first navigation action based on an included angle between an entering road section and an exiting road section at the intersection;
inputting image data comprising the driving-in road section and the driving-out road section into a pre-trained navigation action prediction model to obtain a second navigation action;
comparing whether the first navigation action and the second navigation action are consistent;
and if the first navigation action is consistent or basically consistent with the second navigation action, taking the first navigation action or the second navigation action as a navigation action from the driving-in road section to the driving-out road section.
12. The method of claim 11, further comprising:
and if the first navigation action is inconsistent with the second navigation action, marking the driving-in road section and the driving-out road section as suspect samples.
13. The method of claim 11, wherein,
the navigation action prediction model is obtained by using the method of any one of claims 1 to 7.
14. An apparatus for training a navigation action prediction model, comprising:
a construction module, configured to construct a training sample set, wherein the training sample set comprises one or more training samples, each training sample is image data of an entry road section and an exit road section at an intersection, and the label of each training sample represents the navigation action from the entry road section to the exit road section;
and a training module, configured to train a navigation action prediction model based on the training sample set.
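The two modules of claim 14 can be sketched as below. Real image data is replaced by toy feature vectors, and the "model" trained here is a nearest-centroid classifier standing in for the navigation action prediction model of claims 1 to 7; all names and data are hypothetical.

```python
from collections import defaultdict

# Hypothetical sketch of claim 14's construction and training modules.

def construct_training_set(records):
    """Construction module: each training sample pairs (toy) image
    features with a label naming the navigation action."""
    return [{"features": f, "label": lab} for f, lab in records]

def train(training_set):
    """Training module: fit one centroid per navigation action."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for s in training_set:
        f, lab = s["features"], s["label"]
        sums[lab] = list(f) if sums[lab] is None else [
            a + b for a, b in zip(sums[lab], f)]
        counts[lab] += 1
    return {lab: [v / counts[lab] for v in sums[lab]] for lab in sums}

def predict(model, features):
    """Return the action whose centroid is nearest to the features."""
    def dist2(lab):
        return sum((a - b) ** 2 for a, b in zip(features, model[lab]))
    return min(model, key=dist2)

samples = construct_training_set([
    ([0.0, 1.0], "go straight"),
    ([0.1, 0.9], "go straight"),
    ([1.0, 0.0], "turn right"),
    ([0.9, 0.1], "turn right"),
])
model = train(samples)
print(predict(model, [0.05, 0.95]))  # go straight
```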
15. A navigation action generating apparatus comprising:
an acquisition module, configured to acquire a prediction sample, wherein the prediction sample is image data of an entry road section and an exit road section at an intersection;
and a prediction module, configured to input the prediction sample into a pre-trained navigation action prediction model to obtain a prediction result output by the model, wherein the prediction result represents the navigation action from the entry road section to the exit road section.
16. A navigation action generating apparatus comprising:
a first generation module, configured to generate a first navigation action based on the included angle between an entry road section and an exit road section at an intersection;
a second generation module, configured to input image data comprising the entry road section and the exit road section into a pre-trained navigation action prediction model to obtain a second navigation action;
a comparison module, configured to determine whether the first navigation action and the second navigation action are consistent;
and a determination module, configured to take the first navigation action or the second navigation action as the navigation action from the entry road section to the exit road section if the first navigation action is consistent or substantially consistent with the second navigation action.
17. A computing device, comprising:
a processor; and
a memory having executable code stored thereon, which, when executed by the processor, causes the processor to perform the method of any one of claims 1 to 12.
18. A non-transitory machine-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1 to 12.
CN202110230215.2A 2021-03-02 2021-03-02 Navigation action prediction model training method, navigation action generation method and device Pending CN113029146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110230215.2A CN113029146A (en) 2021-03-02 2021-03-02 Navigation action prediction model training method, navigation action generation method and device

Publications (1)

Publication Number Publication Date
CN113029146A true CN113029146A (en) 2021-06-25

Family

ID=76465471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110230215.2A Pending CN113029146A (en) 2021-03-02 2021-03-02 Navigation action prediction model training method, navigation action generation method and device

Country Status (1)

Country Link
CN (1) CN113029146A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009024153A1 (en) * 2009-06-05 2010-12-09 Daimler Ag Method for successive prediction of route sections by navigation system of motor vehicle, involves detecting, storing and predicting sequence-turning decision at sequence node points until reaching destinations
CN105277203A (en) * 2014-06-30 2016-01-27 Autonavi Information Technology Co., Ltd. Navigation action generation method, navigation method and device
CN106340207A (en) * 2015-07-06 2017-01-18 空中客车运营简化股份公司 Flight Management Assembly For An Aircraft And Method For Monitoring Such An Assembly
CN110203128A (en) * 2019-06-18 2019-09-06 东风小康汽车有限公司重庆分公司 Construction method, turn signal autocontrol method and the system of turn signal submodel
CN110705717A (en) * 2019-09-30 2020-01-17 支付宝(杭州)信息技术有限公司 Training method, device and equipment of machine learning model executed by computer
CN111340880A (en) * 2020-02-17 2020-06-26 北京百度网讯科技有限公司 Method and apparatus for generating a predictive model


Similar Documents

Publication Publication Date Title
US20190204088A1 (en) Utilizing artificial neural networks to evaluate routes based on generated route tiles
Schuessler et al. Map-matching of GPS traces on high-resolution navigation networks using the Multiple Hypothesis Technique (MHT)
CN106574975A (en) Trajectory matching using peripheral signal
CN108399752A (en) A kind of driving infractions pre-judging method, device, server and medium
US9714835B2 (en) Navigation system, navigation server, navigation client, and navigation method
KR20160074998A (en) Navigation system and path prediction method thereby, and computer readable medium for performing the same
CN109074361A (en) Information providing system, information provider unit and information providing method
Montewka et al. Toward a hybrid model of ship performance in ice suitable for route planning purpose
CN111401255B (en) Method and device for identifying bifurcation junctions
WO2018058888A1 (en) Street view image recognition method and apparatus, server and storage medium
CN111651538B (en) Position mapping method, device and equipment and readable storage medium
CN106383888A (en) Method for positioning and navigation by use of picture retrieval
CN112815948B (en) Method, device, computer equipment and storage medium for identifying yaw mode
WO2021138369A1 (en) Processing map data for human quality check
CN105387844B (en) Pavement behavior measurement system and pavement behavior assay method
CN109872360A (en) Localization method and device, storage medium, electric terminal
Bandil et al. Geodart: A system for discovering maps discrepancies
CN114396956A (en) Navigation method and apparatus, computing device, storage medium, and computer program product
Goeddel et al. DART: A particle-based method for generating easy-to-follow directions
CN109740598A (en) Object localization method and device under structuring scene
CN106338292A (en) Walking path processing method and device
CN113029146A (en) Navigation action prediction model training method, navigation action generation method and device
Berjisian et al. Evaluation of map‐matching algorithms for smartphone‐based active travel data
US9110921B2 (en) Map editing with little user input
CN113029196B (en) Navigation application test method and test platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination