CN111754816A - Moving object intention identification method, system, terminal and storage medium - Google Patents

Moving object intention identification method, system, terminal and storage medium

Info

Publication number
CN111754816A
Authority
CN
China
Prior art keywords
intention
moving object
layer
field moving
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010499374.8A
Other languages
Chinese (zh)
Other versions
CN111754816B (en)
Inventor
余恒
王凡
唐锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zongmu Technology Shanghai Co Ltd
Original Assignee
Zongmu Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zongmu Technology Shanghai Co Ltd
Priority to CN202010499374.8A
Publication of CN111754816A
Application granted
Publication of CN111754816B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/167 Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a system, a terminal and a storage medium for identifying the intentions of moving objects in a traffic-jam scene. The method identifies the driving intention of an approaching moving object at the next moment and judges the probability that a near-field moving object will cut into the host vehicle's planned path and cause a collision, so that a corresponding cut-in intention prompt can be given to the driver at a suitable time, a suitable avoidance response can be selected according to the cut-in judgment, or the host vehicle's driving path can be re-planned. Beyond the cut-in intention, the invention can also predict lane-change intentions in front of the vehicle, overtaking intentions on the left and right sides of the vehicle, and lane-change intentions behind the vehicle, providing a prerequisite for predicting the behavior of near-field vehicles in L4- and L5-level autonomous driving.

Description

Moving object intention identification method, system, terminal and storage medium
Technical Field
The present invention relates to the field of automotive electronics technologies, and in particular, to a method, a system, a terminal, and a storage medium for identifying an intention of a moving object.
Background
Safety is a major factor driving the growing demand for autonomous vehicles. In the Chinese traffic environment in particular, with its congestion and complex scenes, countless traffic accidents are caused every year by drivers misjudging the cut-in intentions of near-field vehicles.
Existing advanced driver assistance systems make their judgments from the real-time perception of the sensors, and such judgments carry a certain delay. Because of this delay, an existing advanced driver assistance system cannot give the driver a corresponding intention prompt at a suitable time, select a suitable avoidance response based on the intention judgment, or re-plan its own driving path.
Disclosure of Invention
In order to solve the above and other potential technical problems, the present invention provides a method, a system, a terminal and a storage medium for identifying the intentions of moving objects in a traffic-jam scene. The method identifies the driving intention of an approaching moving object at the next moment and judges the probability that a near-field moving object will cut into the host vehicle's planned path and cause a collision, so that a corresponding cut-in intention prompt can be given to the driver at a suitable time, a suitable avoidance response can be selected according to the cut-in judgment, or the host vehicle's driving path can be re-planned. Beyond the cut-in intention, the invention can also predict various intentions including the cut-in intention in front of the vehicle, the lane-change intention in front of the vehicle, the overtaking intentions on the left and right sides of the vehicle, and the lane-change intention behind the vehicle, thereby providing a prerequisite for predicting the behavior of near-field vehicles in L4- and L5-level autonomous driving.
A mobile object intention recognition network model comprising:
an intention prediction module and an intention judgment module;
the intention prediction module comprises a recurrent neural network consisting of an input layer, an output layer and at least one neuron cell layer, wherein the input elements of the input layer are the items of dimensional information representing the driving state of each near-field moving object, and the output elements of the output layer are the probabilities of the intention prediction result for each near-field moving object;
the intention judgment module identifies the specific intention of the near-field moving object through algorithmic processing, combining the result output by the intention prediction module with the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements.
Further, each input element representing the driving state of the near-field moving object is time-series data, that is, input-layer data collected at different time points; the time series reflects the state and/or degree of change of the near-field moving object's driving state over time.
Further, the input elements of the input layer include, but are not limited to, camera perception data for each direction of the vehicle, millimeter-wave radar perception data for each direction of the vehicle, ultrasonic perception data for each direction of the vehicle, vehicle lidar perception data, and data clusters of the vehicle's near-field moving objects represented by the fusion and correction of one or more of these data types.
Further, the output elements of the output layer are the probabilities that each near-field moving object intends to cut in at the next moment; the output layer numbers each near-field moving object and outputs the cut-in probability value of each near-field moving object at the next moment in matrix form.
Further, the recurrent neural network is a deep recurrent neural network.
Further, the deep recurrent neural network includes n neuron cell layers, labeled from the input layer to the output layer as the first neuron cell layer, the second neuron cell layer, ..., the nth neuron cell layer. The input of the first neuron cell layer comprises the data cluster of the vehicle's near-field moving objects at the current moment and the first layer's cell memory from the previous moment; the input of the second neuron cell layer is the output of the first neuron cell layer and the second layer's cell memory from the previous moment; the input of the nth neuron cell layer is the output of the (n-1)th neuron cell layer and the nth layer's cell memory from the previous moment; and the output of the nth neuron cell layer is the probability of the intention prediction result for each near-field moving object. The branch models of the first through nth neuron cell layers are trained in parallel; the parallel training results are aggregated, and the model parameters are updated synchronously and/or asynchronously and applied to each branch model.
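As a concrete illustration of the stacked structure just described, the following is a minimal sketch assuming PyTorch; the layer count, feature dimensions, class and variable names are illustrative assumptions, not taken from the patent:

    import torch
    import torch.nn as nn

    class IntentionPredictor(nn.Module):
        # Stacked recurrent ("neuron cell") layers: layer k feeds layer k+1,
        # and each layer carries its own cell memory (h, c) across time steps.
        def __init__(self, input_dim=16, hidden_dim=64, num_layers=3):
            super().__init__()
            self.rnn = nn.LSTM(input_dim, hidden_dim,
                               num_layers=num_layers, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)  # one cut-in logit per object

        def forward(self, x):
            # x: (num_objects, seq_len, input_dim), the time series of each
            # near-field object's driving-state features
            out, _ = self.rnn(x)               # (num_objects, seq_len, hidden)
            logits = self.head(out[:, -1, :])  # last time step only
            return torch.sigmoid(logits)       # per-object cut-in probabilities

    # usage sketch: 5 tracked objects, 20 time steps, 16 features each
    probs = IntentionPredictor()(torch.randn(5, 20, 16))  # shape (5, 1)

The output at the last time step plays the role of the nth neuron cell layer's output: one cut-in probability per numbered near-field object, which can be assembled into the probability matrix described above.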
Further, the working principle of the neuron cell layer is as follows. The neuron cell layer behaves like a memory cell: it comprises an input layer, a memory cell with a self-recurrent connection, a forget gate and an output layer. The input layer can allow an incoming signal to change the state of the cell memory or block it from doing so; likewise, it can allow the state of the cell memory to influence other neurons or block that influence. The state is divided into two vectors: h(t) and c(t) ("c" stands for "cell"). h(t) is regarded as the short-term state, which is passed as input to the next layer of neuron cells, while c(t) is regarded as the long-term state, which represents the memory of the neuron cell from the previous moment and can persist from one time step to the next. The recurrent neural network can learn what to store in the long-term state: through the forget gate and/or input gate of the neuron cell layer, the cell memory (i.e., the memory cell) selectively modulates the interaction between itself and the external environment. As the long-term state c(t-1) traverses the network from left to right, it first passes through the forget gate, which discards part of the memory from the previous moment, and memory newly selected by the input gate at the current moment is then added. Thus, along a continuous time axis, each time the input layer receives an input, some memory is discarded and some memory is added. After the addition, the long-term state is copied and passed through the tanh function, and the result is filtered by the output gate; this yields the short-term state h(t).
Further, the fully connected layers of the neuron cell layer play the following roles. The input vector x(t) of the current input layer and the previous short-term state h(t-1) are fed to four different fully connected layers, each with a different purpose. The second fully connected layer is the one that outputs g(t); its role is to analyze the current input x(t) and the previous (short-term) state h(t-1). In the cell layer of a conventional recurrent neural network, this output would go directly to y(t) and h(t); in a long short-term memory (LSTM) network, it is not output directly, and its most important part is instead stored in the long-term state. The first, third and fourth fully connected layers are gate controllers. Because they use the logistic activation function, their outputs range from 0 to 1, and these outputs are fed to element-wise multiplications: an output of 0 closes the gate and an output of 1 opens it. The forget gate, controlled by the first fully connected layer (i.e., by f(t)), controls which parts of the long-term state should be forgotten. The input gate, controlled by the third fully connected layer (i.e., by i(t)), controls which parts of g(t), produced by the second fully connected layer, should be added to the long-term state. Finally, the output gate, controlled by the fourth fully connected layer (i.e., by o(t)), controls which parts of the long-term state should be read and output at this time step, both to h(t) and to y(t). In summary, an LSTM unit can learn to recognize an important input through the action of the input gate, store it in the long-term state, forget the unnecessary parts and retain the necessary parts through the action of the forget gate, and learn to extract the stored content when needed. Such units can therefore be applied to capture the interesting parts of the input vector x(t) in time series, long texts, audio recordings and consecutive video frames.
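The gate behavior described above corresponds to the standard LSTM update equations. Written in the f(t), i(t), g(t), o(t) notation used here (W, U and b are the learned weights and biases of the four fully connected layers, σ is the logistic function, and ⊙ denotes element-wise multiplication; the exact parameterization is not spelled out in the patent):

    \begin{aligned}
    f(t) &= \sigma\big(W_f\,x(t) + U_f\,h(t-1) + b_f\big) \\
    i(t) &= \sigma\big(W_i\,x(t) + U_i\,h(t-1) + b_i\big) \\
    g(t) &= \tanh\big(W_g\,x(t) + U_g\,h(t-1) + b_g\big) \\
    o(t) &= \sigma\big(W_o\,x(t) + U_o\,h(t-1) + b_o\big) \\
    c(t) &= f(t)\odot c(t-1) + i(t)\odot g(t) \\
    h(t) &= o(t)\odot \tanh\big(c(t)\big)
    \end{aligned}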
Further, the specific intentions judged by the intention judgment module include, but are not limited to, a cut-in intention in front of the vehicle, a lane-change intention in front of the vehicle, overtaking intentions on the left and right sides of the vehicle, and a lane-change intention behind the vehicle.
Further, when the intention judgment module combines the vehicle driving state, that state comprises the relative position between the target vehicle and the host vehicle at the current moment, the speed of the target vehicle, the acceleration of the target vehicle and the heading angle of the target vehicle, as well as the relative position between the target vehicle and the host vehicle, the speed of the target vehicle, the acceleration of the target vehicle and the heading angle of the target vehicle over a continuous time period before the current moment.
Further, when the intention judgment module combines the driving-rule elements of the scene in which the vehicle is located, those elements include the speed-limit rules of the current scene, road traffic-rule data packets in the current scene map, simulated traffic-rule data packets virtually set in the current scene map, historical traffic records of the current scene map, and records of how those historical traffic records vary with weather and environmental changes.
Further, when the intention judgment module combines the safety-level setting element, that element is the record, in the history of the current scene map, of traffic accidents caused by vehicle route planning based on intention recognition: if such records exist, the confidence placed in the vehicle's intention recognition in this scene is reduced; if not, it is increased.
Further, when the intention judgment module identifies the specific intention of a near-field moving object through algorithmic processing, the probability predicted by the intention prediction module is first corrected by the driving-rule elements of the scene in which the vehicle is located and by the safety-level setting elements, and the corrected probabilities that exceed a set rated alarm probability are then selected for display.
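A minimal sketch of this correction-and-threshold step follows; the multiplicative form of the correction, the factor names and the threshold value are illustrative assumptions, since the patent does not specify them:

    # Sketch of the judgment step; correction form and 0.7 threshold assumed.
    RATED_ALARM_PROBABILITY = 0.7

    def judge_intentions(predicted, rule_factor, safety_factor):
        """predicted: {object_label: cut-in probability} from the intention
        prediction module; rule_factor / safety_factor: corrections derived
        from the scene's driving rules and the safety-level record."""
        corrected = {label: min(1.0, p * rule_factor * safety_factor)
                     for label, p in predicted.items()}
        # keep only objects whose corrected probability exceeds the alarm level
        return {label: p for label, p in corrected.items()
                if p > RATED_ALARM_PROBABILITY}

    # usage sketch: a clean accident history raises confidence slightly
    flagged = judge_intentions({"car_3": 0.81, "car_7": 0.42},
                               rule_factor=1.0, safety_factor=1.05)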
A moving object intention identifying method comprising the steps of:
S01: acquiring sensing data, identifying the near-field moving objects in the sensing data, and giving each near-field moving object a corresponding label; extracting the driving state of each near-field moving object in different dimensions;
S02: inputting each dimension representing the driving state of the near-field moving objects into the intention recognition network model, which outputs the probability of the intention prediction result for each near-field moving object;
S03: combining the probability of each near-field moving object's intention prediction result with the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements, and identifying through algorithmic processing the labels of those near-field moving objects that have a specific intention.
Further, the method also includes step S04: presenting the near-field moving objects with a specific intention to the driver by means including, but not limited to, image annotation and voice prompts.
Further, step S01 also includes step S011: collecting the multi-dimensional driving state of each near-field moving object under its label to form a data cluster for that object.
Further, step S02 also includes step S021: classifying the data in the data cluster by characterization dimension before inputting it into the intention recognition network model.
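The steps above can be summarized in a hypothetical end-to-end sketch; detect_objects, to_time_series and annotate_and_announce are assumed helper names introduced only for illustration, and judge_intentions is the judgment sketch given earlier:

    # Hypothetical pipeline for S01-S04; all helpers are assumptions.
    def recognize_intentions(sensor_frames, model, rule_factor, safety_factor):
        # S01 + S011: label near-field objects and gather each object's
        # multi-dimensional driving state into a per-label data cluster
        clusters = {}
        for frame in sensor_frames:
            for obj in detect_objects(frame):
                clusters.setdefault(obj.label, []).append(obj.state)
        # S02 + S021: classify cluster data by dimension, run the network
        probs = {label: float(model(to_time_series(states)))
                 for label, states in clusters.items()}
        # S03: combine with driving rules and the safety-level setting
        flagged = judge_intentions(probs, rule_factor, safety_factor)
        # S04: deliver the result via image annotation / voice prompt
        for label in flagged:
            annotate_and_announce(label)
        return flagged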
A moving object intention recognition system comprising the following parts:
the sensing equipment comprises one or more of video sensing equipment, laser radar sensing equipment, millimeter wave sensing equipment and ultrasonic sensing equipment;
the sensing data preprocessing module, which comprises an image processing module, a millimeter-wave processing module, an ultrasonic processing module and a lidar processing module; the image processing module comprises a unified neural network for extracting candidate boxes from the video perception data and a neural network for screening regions of interest; the millimeter-wave processing module, the ultrasonic processing module and the lidar processing module are used for extracting the motion-trajectory features of the near-field moving objects;
the intention prediction module, which comprises a recurrent neural network consisting of an input layer, an output layer and at least one neuron cell layer, wherein the input elements of the input layer are the output data of the perception data preprocessing module, i.e., the items of dimensional information representing the driving state of each near-field moving object, and the output elements of the output layer are the probabilities of the intention prediction result for each near-field moving object;
and the intention judgment module, which identifies the specific intention of the near-field moving object through algorithmic processing, combining the result output by the intention prediction module with elements including, but not limited to, the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements.
Further, the perception data preprocessing module also comprises a data fusion module, which is used to correct and fuse the data characterizing the operating state of each near-field moving object.
Further, the image processing module in the perception data preprocessing module performs feature extraction and preprocessing on the raw video data collected by the cameras through a unified convolutional neural network, combined with deep-learning and optical-flow-field algorithms.
Further, for the millimeter-wave, ultrasonic and lidar processing modules in the perception data preprocessing module, the data obtained from the millimeter-wave radar, ultrasonic sensors and lidar must first be decoded and clustered using the processing provided by the corresponding sensor manufacturers; the output is an object tracking and detection result, which is then decoded by a time-domain processing neural network to extract the motion-trajectory features of the other obstacles.
Further, the features extracted after preprocessing by the image processing module, the millimeter-wave processing module, the ultrasonic processing module and the lidar processing module are uniformly input into a time-domain recurrent neural network (LSTM NN) for comprehensive decision processing.
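A minimal sketch of this fusion step, reusing the IntentionPredictor sketch above (all feature dimensions are assumed):

    # Per-sensor features for each object and time step are concatenated
    # and fed to the shared time-domain LSTM for comprehensive decision.
    import torch

    cam_feat   = torch.randn(5, 20, 8)   # image-module CNN features
    radar_feat = torch.randn(5, 20, 4)   # millimeter-wave trajectory features
    lidar_feat = torch.randn(5, 20, 4)   # lidar trajectory features
    fused = torch.cat([cam_feat, radar_feat, lidar_feat], dim=-1)  # (5, 20, 16)
    probs = IntentionPredictor()(fused)  # per-object intention probabilities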
A terminal device, being a smartphone capable of executing the moving object intention identification method program, or a vehicle-mounted terminal control device capable of executing the moving object intention identification method program.
A server comprising a program for implementing the mobile object intention identification method and/or the mobile object intention identification system described above.
A computer storage medium for storing a software program corresponding to the moving object intention identifying method and/or a moving object intention identifying system.
As described above, the present invention has the following advantageous effects:
the method comprises the steps of detecting a moving object in a traffic jam scene, identifying the driving intention of an approaching moving object at the next moment, judging the probability of collision caused by the fact that a near-field moving object cuts into a main body planned path, giving a driver a corresponding jam intention prompt at a proper time point, or selecting a proper avoidance response according to the result of the jam intention judgment, or iterating a driving path of the driver again. The invention can not only judge the intention of jamming in front of the vehicle, but also can pre-judge various intentions including the intention of jamming in front of the vehicle, the intention of changing lane in front of the vehicle, the intention of overtaking on the left side and the right side of the vehicle, the intention of changing lane in back of the vehicle and the like.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of a layer of neuronal cells according to the invention.
FIG. 2 is a schematic diagram of the deep recurrent neural network of the present invention.
FIG. 3 is a schematic diagram of deep recurrent neural network training in accordance with the present invention.
FIG. 4 is a schematic diagram of the intent prediction module and the intent determination module.
FIG. 5 is a flow chart of the present invention.
FIG. 6 is a flow chart of another embodiment of the present invention.
Fig. 7 is a flowchart showing a process of the moving object intention recognition system.
Fig. 8 is a view of a cut-in intention recognition scene at the moment a vehicle begins to cut in.
Fig. 9 is a view of an intention recognition scene at the end of a vehicle's cut-in.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be understood that the structures, ratios, sizes and the like shown in the drawings and described in the specification are used only in conjunction with the disclosure of the specification, so that it can be understood and read by those skilled in the art; they are not intended to limit the conditions under which the invention can be implemented and therefore carry no essential technical significance. Any structural modification, change of ratio or adjustment of size that does not affect the efficacy or achievable purpose of the invention still falls within the scope of the technical content disclosed herein. In addition, the terms "upper", "lower", "left", "right", "middle" and "one" used in this specification are for clarity of description only and are not intended to limit the implementable scope of the invention; changes or adjustments of their relative relationships, without substantive change to the technical content, are also to be regarded as within the implementable scope of the invention.
With reference to figure 4 of the drawings,
a mobile object intention recognition network model comprising:
an intention prediction module and an intention judgment module;
the intention prediction module comprises a recurrent neural network consisting of an input layer, an output layer and at least one neuron cell layer, wherein the input elements of the input layer are the items of dimensional information representing the driving state of each near-field moving object, and the output elements of the output layer are the probabilities of the intention prediction result for each near-field moving object;
the intention judgment module identifies the specific intention of the near-field moving object through algorithmic processing, combining the result output by the intention prediction module with the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements.
As a preferred embodiment, each input element representing the driving state of the near-field moving object is time-series data, i.e., input-layer data collected at different time points; the time series reflects the state and/or degree of change of the near-field moving object's driving state over time.
As a preferred embodiment, the input elements of the input layer include, but are not limited to, camera perception data for each direction of the vehicle, millimeter-wave radar perception data for each direction of the vehicle, ultrasonic perception data for each direction of the vehicle, vehicle lidar perception data, and data clusters of the vehicle's near-field moving objects represented by the fusion and correction of one or more of these data types.
As a preferred embodiment, the output elements of the output layer are the probabilities that each near-field moving object intends to cut in at the next moment; the output layer numbers each near-field moving object and outputs the cut-in probability value of each near-field moving object at the next moment in matrix form.
As a preferred embodiment, the recurrent neural network is a deep recurrent neural network.
As a preferred embodiment, the deep recurrent neural network comprises n neuron cell layers, labeled from the input layer to the output layer as the first neuron cell layer, the second neuron cell layer, ..., the nth neuron cell layer. The input of the first neuron cell layer comprises the data cluster of the vehicle's near-field moving objects at the current moment and the first layer's cell memory from the previous moment; the input of the second neuron cell layer is the output of the first neuron cell layer and the second layer's cell memory from the previous moment; the input of the nth neuron cell layer is the output of the (n-1)th neuron cell layer and the nth layer's cell memory from the previous moment; the output of the nth neuron cell layer is the probability of the intention prediction result for each near-field moving object. Referring to fig. 3, the branch models of the first, second, ..., nth neuron cell layers are trained in parallel; the parallel training results are aggregated, and the model parameters are updated synchronously and/or asynchronously and applied in the respective branch models. The neural networks of the first, second and third neuron cell layers correspond to the parallel branches A, B and C: as shown in fig. 3, each branch is trained in parallel, and the results are merged into the network model through gradient aggregation and synchronous/asynchronous updates. A sketch of one such training step follows.
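The following is a minimal sketch of synchronous parallel-branch training with gradient aggregation, as one plausible reading of fig. 3; the framework and all details are assumptions, since the patent does not name them:

    # Each branch (A, B, C, ...) trains a replica on its own batch; the
    # branch gradients are averaged and one shared update is applied, so
    # the new parameters propagate to every branch for the next step.
    import copy
    import torch

    def parallel_step(model, optimizer, branch_batches, loss_fn):
        replicas = [copy.deepcopy(model) for _ in branch_batches]
        branch_grads = []
        for replica, (x, y) in zip(replicas, branch_batches):
            loss = loss_fn(replica(x), y)
            loss.backward()
            branch_grads.append([p.grad for p in replica.parameters()])
        # aggregate: average the gradients across branches, update once
        for p, *grads in zip(model.parameters(), *branch_grads):
            p.grad = torch.stack(grads).mean(dim=0)
        optimizer.step()
        optimizer.zero_grad()

An asynchronous variant would apply each branch's gradients as they arrive instead of averaging them in lockstep, matching the "synchronously and/or asynchronously" wording above.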
Referring to fig. 1, as a preferred embodiment, the working principle of the neuron cell layer is as follows. The neuron cell layer behaves like a memory cell: it comprises an input layer, a memory cell with a self-recurrent connection, a forget gate and an output layer. The input layer can allow an incoming signal to change the state of the cell memory or block it from doing so; likewise, it can allow the state of the cell memory to influence other neurons or block that influence. The state is divided into two vectors: h(t) and c(t) ("c" stands for "cell"). h(t) is regarded as the short-term state, which is passed as input to the next layer of neuron cells, while c(t) is regarded as the long-term state, which represents the memory of the neuron cell from the previous moment and can persist from one time step to the next. The recurrent neural network can learn what to store in the long-term state: through the forget gate and/or input gate of the neuron cell layer, the cell memory (i.e., the memory cell) selectively modulates the interaction between itself and the external environment. As the long-term state c(t-1) traverses the network from left to right, it first passes through the forget gate, which discards part of the memory from the previous moment, and memory newly selected by the input gate at the current moment is then added. Thus, along a continuous time axis, each time the input layer receives an input, some memory is discarded and some memory is added. After the addition, the long-term state is copied and passed through the tanh function, and the result is filtered by the output gate; this yields the short-term state h(t).
Referring to fig. 1 to 2, as a preferred embodiment, the fully connected layers of the neuron cell layer play the following roles. The input vector x(t) of the current input layer and the previous short-term state h(t-1) are fed to four different fully connected layers, each with a different purpose. The second fully connected layer is the one that outputs g(t); its role is to analyze the current input x(t) and the previous (short-term) state h(t-1). In the cell layer of a conventional recurrent neural network, this output would go directly to y(t) and h(t); in a long short-term memory (LSTM) network, it is not output directly, and its most important part is instead stored in the long-term state. The first, third and fourth fully connected layers are gate controllers. Because they use the logistic activation function, their outputs range from 0 to 1, and these outputs are fed to element-wise multiplications: an output of 0 closes the gate and an output of 1 opens it. The forget gate, controlled by the first fully connected layer (i.e., by f(t)), controls which parts of the long-term state should be forgotten. The input gate, controlled by the third fully connected layer (i.e., by i(t)), controls which parts of g(t), produced by the second fully connected layer, should be added to the long-term state. Finally, the output gate, controlled by the fourth fully connected layer (i.e., by o(t)), controls which parts of the long-term state should be read and output at this time step, both to h(t) and to y(t). In summary, an LSTM unit can learn to recognize an important input through the action of the input gate, store it in the long-term state, forget the unnecessary parts and retain the necessary parts through the action of the forget gate, and learn to extract the stored content when needed. Such units can therefore be applied to capture the interesting parts of the input vector x(t) in time series, long texts, audio recordings and consecutive video frames.
As a preferred embodiment, the specific intentions judged by the intention judgment module include, but are not limited to, a cut-in intention in front of the vehicle, a lane-change intention in front of the vehicle, overtaking intentions on the left and right sides of the vehicle, and a lane-change intention behind the vehicle.
As a preferred embodiment, when the intention judgment module combines the vehicle driving state, that state comprises the relative position between the target vehicle and the host vehicle at the current moment, the speed of the target vehicle, the acceleration of the target vehicle and the heading angle of the target vehicle, as well as the relative position between the target vehicle and the host vehicle, the speed of the target vehicle, the acceleration of the target vehicle and the heading angle of the target vehicle over a continuous time period before the current moment.
As a preferred embodiment, when the intention judgment module combines the driving-rule elements of the scene in which the vehicle is located, those elements include the speed-limit rules of the current scene, road traffic-rule data packets in the current scene map, simulated traffic-rule data packets virtually set in the current scene map, historical traffic records of the current scene map, and records of how those historical traffic records vary with weather and environmental changes.
As a preferred embodiment, when the intention judgment module combines the safety-level setting element, that element is the record, in the history of the current scene map, of traffic accidents caused by vehicle route planning based on intention recognition: if such records exist, the confidence placed in the vehicle's intention recognition in this scene is reduced; if not, it is increased.
As a preferred embodiment, when the intention judgment module identifies the specific intention of a near-field moving object through algorithmic processing, the probability predicted by the intention prediction module is first corrected by the driving-rule elements of the scene in which the vehicle is located and by the safety-level setting elements, and the corrected probabilities that exceed a set rated alarm probability are then selected for display.
Referring to fig. 5 to 7, a moving object intention identifying method includes the steps of:
S01: acquiring sensing data, identifying the near-field moving objects in the sensing data, and giving each near-field moving object a corresponding label; extracting the driving state of each near-field moving object in different dimensions;
S02: inputting each dimension representing the driving state of the near-field moving objects into the intention recognition network model, which outputs the probability of the intention prediction result for each near-field moving object;
S03: combining the probability of each near-field moving object's intention prediction result with the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements, and identifying through algorithmic processing the labels of those near-field moving objects that have a specific intention.
As a preferred embodiment, the method further includes step S04: presenting the near-field moving objects with a specific intention to the driver by means including, but not limited to, image annotation and voice prompts.
As a preferred embodiment, step S01 further includes step S011: collecting the multi-dimensional driving state of each near-field moving object under its label to form a data cluster for that object.
As a preferred embodiment, step S02 further includes step S021: classifying the data in the data cluster by characterization dimension before inputting it into the intention recognition network model.
A moving object intention recognition system comprising the following parts:
the sensing equipment comprises one or more of video sensing equipment, laser radar sensing equipment, millimeter wave sensing equipment and ultrasonic sensing equipment;
the sensing data preprocessing module, which comprises an image processing module, a millimeter-wave processing module and an ultrasonic processing module; the image processing module comprises a unified neural network for extracting candidate boxes from the video perception data and a neural network for screening regions of interest; the millimeter-wave processing module and the ultrasonic processing module are used for acquiring the speed and acceleration of the near-field moving objects;
the intention prediction module, which comprises a recurrent neural network consisting of an input layer, an output layer and at least one neuron cell layer, wherein the input elements of the input layer are the output data of the perception data preprocessing module, i.e., the items of dimensional information representing the driving state of each near-field moving object, and the output elements of the output layer are the probabilities of the intention prediction result for each near-field moving object;
and the intention judgment module, which identifies the specific intention of the near-field moving object through algorithmic processing, combining the result output by the intention prediction module with elements including, but not limited to, the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements.
As a preferred embodiment, the perception data preprocessing module further includes a data fusion module, which is configured to correct and fuse the data characterizing the operating state of each near-field moving object.
As a preferred embodiment, the image processing module in the perception data preprocessing module performs feature extraction and preprocessing on the raw video data collected by the cameras through a unified convolutional neural network, combined with deep-learning and optical-flow-field algorithms.
As a preferred embodiment, for the millimeter-wave, ultrasonic and lidar processing modules in the perception data preprocessing module, the data obtained from the millimeter-wave radar, ultrasonic sensors and lidar must first be decoded and clustered using the processing provided by the corresponding sensor manufacturers; the output is an object tracking and detection result, which is then decoded by a time-domain processing neural network to extract the motion-trajectory features of the other obstacles.
As a preferred embodiment, the features extracted after preprocessing by the image processing module, the millimeter-wave processing module, the ultrasonic processing module and the lidar processing module are uniformly input into a time-domain recurrent neural network (LSTM NN) for comprehensive decision processing.
Referring to fig. 8 to 9, the present invention detects moving objects in a traffic-jam scene, identifies the driving intention of an approaching moving object at the next moment, and judges the probability that a near-field moving object will cut into the host vehicle's planned path and cause a collision, so as to give the driver a corresponding cut-in intention prompt at a suitable time, select a suitable avoidance response according to the cut-in judgment, or re-plan the host vehicle's driving path. Beyond the cut-in intention, the invention can also predict various intentions including the cut-in intention in front of the vehicle, the lane-change intention in front of the vehicle, the overtaking intentions on the left and right sides of the vehicle, and the lane-change intention behind the vehicle, thereby providing a prerequisite for predicting near-field vehicle behavior in L4- and L5-level autonomous driving.
A terminal device, being a smartphone capable of executing the moving object intention identification method program, or a vehicle-mounted terminal control device capable of executing the moving object intention identification method program.
A server comprising a program for implementing the mobile object intention identification method and/or the mobile object intention identification system described above.
A computer storage medium for storing a software program corresponding to the moving object intention identifying method and/or a moving object intention identifying system.
As a preferred embodiment, this embodiment further provides a terminal device capable of executing programs, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server or a rack-mounted server (including an independent server or a server cluster composed of multiple servers). The terminal device of this embodiment at least includes, but is not limited to, a memory and a processor that are communicatively coupled to each other via a system bus. It should be noted that although a terminal device with a memory and a processor is described, not all of the illustrated components are required; the moving object intention recognition method may instead be implemented with more or fewer components.
As a preferred embodiment, the memory (i.e., readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device. Of course, the memory may also include both internal and external storage devices for the computer device. In the present embodiment, the memory is generally used to store an operating system and various types of application software installed in the computer device, such as the moving object intention identification method program code in the embodiment. In addition, the memory may also be used to temporarily store various types of data that have been output or are to be output.
A computer-readable storage medium having stored thereon a computer program, characterized in that: the program, when executed by a processor, implements the steps in the moving object intention identification method described above.
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer-readable storage medium of the present embodiment is for storing a moving object intention identification method program, which when executed by a processor, implements the moving object intention identification method in the moving object intention identification method embodiment.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (12)

1. A mobile object intention recognition network model, comprising:
an intention prediction module and an intention judgment module;
the intention prediction module comprises a recurrent neural network consisting of an input layer, an output layer and at least one neuron cell layer, wherein the input elements of the input layer are the items of dimensional information representing the driving state of each near-field moving object, and the output elements of the output layer are the probabilities of the intention prediction result for each near-field moving object;
the intention judgment module identifies the specific intention of the near-field moving object through algorithmic processing, combining the result output by the intention prediction module with the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements.
2. The moving object intention recognition network model of claim 1, wherein each input element of the input layer characterizing the driving state of the near-field moving object is time-series data, i.e., input-layer data collected at different time points, the input-layer time series reflecting the state and/or degree of change of the near-field moving object's driving state over time.
3. The moving object intention recognition network model of claim 1, wherein the output elements of the output layer are the probabilities that each near-field moving object intends to cut in at the next moment, and the output layer numbers each near-field moving object and outputs the cut-in probability value of each near-field moving object at the next moment in matrix form.
4. The moving object intention recognition network model according to claim 1, wherein the recurrent neural network is a deep recurrent neural network including n neuron cell layers, labeled from the input layer to the output layer as the first neuron cell layer, the second neuron cell layer, ..., the nth neuron cell layer; the input of the first neuron cell layer includes the data cluster of the vehicle's near-field moving objects at the current moment and the first layer's cell memory from the previous moment; the input of the second neuron cell layer is the output of the first neuron cell layer and the second layer's cell memory from the previous moment; the input of the nth neuron cell layer is the output of the (n-1)th neuron cell layer and the nth layer's cell memory from the previous moment; and the output of the nth neuron cell layer is the probability of the intention prediction result for each near-field moving object.
5. A moving object intention identifying method characterized by comprising the steps of:
S01: acquiring sensing data, identifying the near-field moving objects in the sensing data, and giving each near-field moving object a corresponding label; extracting the driving state of each near-field moving object in different dimensions;
S02: inputting each dimension representing the driving state of the near-field moving objects into the intention recognition network model, which outputs the probability of the intention prediction result for each near-field moving object;
S03: combining the probability of each near-field moving object's intention prediction result with the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements, and identifying through algorithmic processing the labels of those near-field moving objects that have a specific intention.
6. The moving object intention recognition method according to claim 5, further comprising step S04: presenting the near-field moving objects with a specific intention to the driver by means including, but not limited to, image annotation and voice prompts.
7. The moving object intention recognition method according to claim 6, wherein step S01 further includes step S011: collecting the multi-dimensional driving state of each near-field moving object under its label to form a data cluster for that object; and step S02 further includes step S021: classifying the data in the data cluster by characterization dimension before inputting it into the intention recognition network model.
8. A moving object intention recognition system characterized by comprising:
the sensing equipment comprises one or more of video sensing equipment, laser radar sensing equipment, millimeter wave sensing equipment and ultrasonic sensing equipment;
the sensing data preprocessing module, which comprises an image processing module, a millimeter-wave processing module and an ultrasonic processing module; the image processing module comprises a unified neural network for extracting candidate boxes from the video perception data and a neural network for screening regions of interest; the millimeter-wave processing module and the ultrasonic processing module are used for acquiring the speed and acceleration of the near-field moving objects;
the intention prediction module, which comprises a recurrent neural network consisting of an input layer, an output layer and at least one neuron cell layer, wherein the input elements of the input layer are the output data of the perception data preprocessing module, i.e., the items of dimensional information representing the driving state of each near-field moving object, and the output elements of the output layer are the probabilities of the intention prediction result for each near-field moving object;
and the intention judgment module, which identifies the specific intention of the near-field moving object through algorithmic processing, combining the result output by the intention prediction module with elements including, but not limited to, the host vehicle's driving state, the driving-rule elements of the scene in which the vehicle is located, and safety-level setting elements.
9. The moving object intention recognition system of claim 8, wherein the perception data preprocessing module further comprises a data fusion module for correcting and fusing the data characterizing the operating state of each near-field moving object.
10. The moving object intention recognition system according to claim 8, wherein the image processing module in the perception data preprocessing module performs feature extraction and preprocessing on the raw video data collected by the cameras through a unified convolutional neural network, combined with deep-learning and optical-flow-field algorithms; the data obtained from the millimeter-wave radar, ultrasonic sensors and lidar must first be decoded and clustered by the millimeter-wave, ultrasonic and lidar processing modules in the perception data preprocessing module using the processing provided by the corresponding sensor manufacturers, producing object tracking and detection results, which are then decoded by a time-domain processing neural network to extract the motion-trajectory features of the other obstacles; and the features extracted after preprocessing by the image processing module, the millimeter-wave processing module, the ultrasonic processing module and the lidar processing module are uniformly input into a time-domain recurrent neural network (LSTM NN) for comprehensive decision processing.
11. A terminal device, characterized in that: the terminal device is a smartphone that executes the moving object intention recognition method according to any one of claims 5 to 7, or a vehicle-mounted terminal control device that executes the moving object intention recognition method according to any one of claims 5 to 7.
12. A computer-readable storage medium having a computer program stored thereon, characterized in that: the program, when executed by a processor, implements the steps in the method according to any one of claims 5 to 7.
CN202010499374.8A 2020-06-04 2020-06-04 Device, method, system, terminal and medium for identifying intention of mobile object Active CN111754816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010499374.8A CN111754816B (en) 2020-06-04 2020-06-04 Device, method, system, terminal and medium for identifying intention of mobile object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010499374.8A CN111754816B (en) 2020-06-04 2020-06-04 Device, method, system, terminal and medium for identifying intention of mobile object

Publications (2)

Publication Number Publication Date
CN111754816A (en) 2020-10-09
CN111754816B (en) 2023-04-28

Family

ID=72674500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499374.8A Active CN111754816B (en) 2020-06-04 2020-06-04 Device, method, system, terminal and medium for identifying intention of mobile object

Country Status (1)

Country Link
CN (1) CN111754816B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1170652A1 (en) * 2000-07-06 2002-01-09 Renault Process of adaptive regulation of the distance between two mobile vehicles
US20050131589A1 (en) * 2003-12-16 2005-06-16 Nissan Motor Co., Ltd. Intention estimation method and system with confidence indication
CN101089917A (en) * 2007-06-01 2007-12-19 清华大学 Quick identification method for object vehicle lane changing
US20190077398A1 (en) * 2017-09-14 2019-03-14 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for vehicle lane change prediction using structural recurrent neural networks
CN107919027A (en) * 2017-10-24 2018-04-17 北京汽车集团有限公司 Predict the methods, devices and systems of vehicle lane change
US20190186940A1 (en) * 2017-12-18 2019-06-20 Hyundai Motor Company System and method for creating driving route of vehicle
US20190367019A1 (en) * 2018-05-31 2019-12-05 TuSimple System and method for proximate vehicle intention prediction for autonomous vehicles
CN110210058A * 2019-04-26 2019-09-06 纵目科技(上海)股份有限公司 Reference line generation method, system, terminal and medium conforming to vehicle dynamics
CN110097785A * 2019-05-30 2019-08-06 长安大学 Early-warning device and method for identifying a preceding vehicle's cut-in or urgent lane change
CN110555402A * 2019-08-27 2019-12-10 北京纵目安驰智能科技有限公司 Congestion car-following method, system, terminal and storage medium based on look-around
CN110796856A (en) * 2019-10-16 2020-02-14 腾讯科技(深圳)有限公司 Vehicle lane change intention prediction method and training method of lane change intention prediction network
CN110758382A (en) * 2019-10-21 2020-02-07 南京航空航天大学 Surrounding vehicle motion state prediction system and method based on driving intention
CN111079590A (en) * 2019-12-04 2020-04-28 东北大学 Peripheral vehicle behavior pre-judging method of unmanned vehicle
CN111104969A (en) * 2019-12-04 2020-05-05 东北大学 Method for pre-judging collision possibility between unmanned vehicle and surrounding vehicle
CN111114556A (en) * 2019-12-24 2020-05-08 北京工业大学 Lane change intention identification method based on LSTM under multi-source exponential weighting loss

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SAJAN PATEL et al.: "Predicting Future Lane Changes of Other Highway Vehicles using RNN-based Deep Models", arXiv:1801.04340v4, 16 May 2019, pages 1-8, XP081222477 *
伍淑莉 et al.: "基于LSTM的智能车变道预测研究" [Research on lane-change prediction for intelligent vehicles based on LSTM], 《信息通信》 [Information & Communication], 31 December 2019, pages 7-11 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114153207A (en) * 2021-11-29 2022-03-08 北京三快在线科技有限公司 Control method and control device of unmanned equipment
CN114153207B (en) * 2021-11-29 2024-02-27 北京三快在线科技有限公司 Control method and control device of unmanned equipment
CN114973166A (en) * 2022-07-26 2022-08-30 中诚华隆计算机技术有限公司 Traffic information tracking method, system and computer equipment
CN114973166B (en) * 2022-07-26 2022-10-25 中诚华隆计算机技术有限公司 Traffic information tracking method, system and computer equipment

Also Published As

Publication number Publication date
CN111754816B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
US20200257300A1 (en) Method and system for risk modeling in autonomous vehicles
CN110843794B (en) Driving scene understanding method and device and trajectory planning method and device
CN112133089B (en) Vehicle track prediction method, system and device based on surrounding environment and behavior intention
Chang et al. Online boosting for vehicle detection
US20220048536A1 (en) Method and device for testing a driver assistance system
CN108388834A (en) The object detection mapped using Recognition with Recurrent Neural Network and cascade nature
CN113052321B (en) Generating trajectory markers from short-term intent and long-term results
Jeong et al. Bidirectional long short-term memory-based interactive motion prediction of cut-in vehicles in urban environments
CN114379581B (en) Algorithm iteration system and method based on automatic driving
CN111754816B (en) Device, method, system, terminal and medium for identifying intention of mobile object
Remmen et al. Cut-in scenario prediction for automated vehicles
CN112307978A (en) Target detection method and device, electronic equipment and readable storage medium
CN113548054A (en) Vehicle lane change intention prediction method and system based on time sequence
Kim et al. Driving style-based conditional variational autoencoder for prediction of ego vehicle trajectory
CN113688760A (en) Automatic driving data identification method and device, computer equipment and storage medium
US11420623B2 (en) Systems for determining object importance in on-road driving scenarios and methods thereof
Sayeed et al. Bangladeshi traffic sign recognition and classification using cnn with different kinds of transfer learning through a new (btsrb) dataset
CN111753371B (en) Training method, system, terminal and storage medium for vehicle body control network model
CN112435466A (en) Method and system for predicting take-over time of CACC vehicle changing into traditional vehicle under mixed traffic flow environment
CN116461507A (en) Vehicle driving decision method, device, equipment and storage medium
CN114104005B (en) Decision-making method, device and equipment of automatic driving equipment and readable storage medium
CN113920166B (en) Method, device, vehicle and storage medium for selecting object motion model
CN111661034B (en) Vehicle body control method, system, terminal and storage medium based on deep recurrent neural network
Meftah et al. Deep residual network for autonomous vehicles obstacle avoidance
CN112180913A (en) Special vehicle identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant