CN116909317B - Unmanned aerial vehicle control system and method based on terminal Internet of Vehicles


Info

Publication number
CN116909317B
CN116909317B (granted publication of application CN202311184066.6A)
Authority
CN
China
Prior art keywords
road condition
feature
training
matrix
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311184066.6A
Other languages
Chinese (zh)
Other versions
CN116909317A (en)
Inventor
沈玲玲
王虹澎
陈宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Feike Workshop Technology Beijing Co ltd
Original Assignee
Feike Workshop Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feike Workshop Technology Beijing Co ltd filed Critical Feike Workshop Technology Beijing Co ltd
Priority to CN202311184066.6A
Publication of CN116909317A
Application granted
Publication of CN116909317B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an unmanned aerial vehicle control system and method based on the terminal Internet of Vehicles. A vehicle terminal receives a road condition overlook monitoring graph collected by an unmanned aerial vehicle and transmitted through the Internet of Vehicles; features are extracted from the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix; and an unmanned aerial vehicle position adjustment instruction is generated based on the global road condition semantic feature matrix. In this way, the position of the drone can be intelligently controlled so that the vehicle remains within the drone's field of view.

Description

Unmanned aerial vehicle control system and method based on terminal Internet of vehicles
Technical Field
The application relates to the technical field of intelligent control, in particular to an unmanned aerial vehicle control system and method based on terminal internet of vehicles.
Background
Conventional vehicle safety and navigation systems are generally limited by the perception range and field of view of the vehicle itself and cannot provide comprehensive road condition information or accurate navigation guidance.
With the rapid development of Internet of Vehicles and unmanned aerial vehicle technology, drones can be introduced as auxiliary equipment to remedy these shortcomings, providing wider and finer-grained road condition monitoring through a high-altitude viewpoint and flexible maneuverability. In actual operation, however, how to control the position of the unmanned aerial vehicle so that the vehicle stays within its field of view remains an important technical problem.
Thus, a solution is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. An embodiment of the application provides an unmanned aerial vehicle control system and method based on the terminal Internet of Vehicles, in which a vehicle terminal receives a road condition overlook monitoring graph collected by an unmanned aerial vehicle and transmitted through the Internet of Vehicles; features are extracted from the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix; and an unmanned aerial vehicle position adjustment instruction is generated based on the global road condition semantic feature matrix. In this way, the position of the drone can be intelligently controlled so that the vehicle remains within the drone's field of view.
In a first aspect, a method for controlling an unmanned aerial vehicle based on terminal internet of vehicles is provided, which includes:
the vehicle terminal receives the road condition overlook monitoring graph collected by the unmanned aerial vehicle and transmitted through the Internet of Vehicles.
And extracting features of the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix.
And generating an unmanned aerial vehicle position adjustment instruction based on the global road condition semantic feature matrix.
In a second aspect, there is provided a terminal internet of vehicles-based unmanned aerial vehicle control system, comprising:
The monitoring graph acquisition module is used for receiving the road condition overlooking monitoring graph which is transmitted through the Internet of vehicles and is acquired by the unmanned aerial vehicle by the vehicle terminal.
And the feature extraction module is used for carrying out feature extraction on the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix.
And the adjustment instruction generation module is used for generating an unmanned aerial vehicle position adjustment instruction based on the global road condition semantic feature matrix.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for controlling an unmanned aerial vehicle based on the terminal Internet of Vehicles according to an embodiment of the present application.
Fig. 2 is a schematic architecture diagram of a method for controlling an unmanned aerial vehicle based on terminal internet of vehicles according to an embodiment of the present application.
Fig. 3 is a flowchart of the sub-steps of step 120 in a terminal internet of vehicles-based unmanned aerial vehicle control method according to an embodiment of the present application.
Fig. 4 is a block diagram of a terminal internet of vehicles-based unmanned aerial vehicle control system according to an embodiment of the present application.
Fig. 5 is a schematic view of a scenario of a method for controlling an unmanned aerial vehicle based on terminal internet of vehicles according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions according to the embodiments of the present application will be given with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In describing embodiments of the present application, unless otherwise indicated and limited thereto, the term "connected" should be construed broadly, for example, it may be an electrical connection, or may be a communication between two elements, or may be a direct connection, or may be an indirect connection via an intermediate medium, and it will be understood by those skilled in the art that the specific meaning of the term may be interpreted according to circumstances.
It should be noted that the term "first/second/third" in the embodiments of the present application is merely used to distinguish similar objects and does not imply a specific order. It is to be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein may be practiced in sequences other than those illustrated or described herein.
Conventional vehicle safety and navigation systems typically rely on sensors of the vehicle itself, such as cameras, radars, etc., which have limited sensing range. This means that the system can only obtain road condition information in a limited range and may miss important situations such as traffic jams, road conditions or obstacles.
Traffic information in conventional systems is typically based on analysis and prediction of vehicle awareness and sensor data. Such processing may be delayed to some extent, resulting in inaccuracy in navigation guidance and warning, particularly in rapidly changing traffic environments, where delays may affect driving decisions and safety.
Conventional systems are limited to the viewpoint of the vehicle itself and cannot provide traffic information from a global perspective. In the case of a complex road network or traffic congestion, such a system cannot fully understand the situation of the whole road section, leading to inaccurate and inefficient navigation guidance.
Certain complex traffic scenarios, such as intersections, complex road signs and signal light systems, etc., present challenges to conventional vehicle safety and navigation systems. These systems may not accurately understand and interpret these complex traffic conditions, resulting in inaccuracies in navigation directions and safety alerts.
Conventional vehicle safety and navigation systems have drawbacks in terms of perceived range, information latency, global viewing angle, and complex scenarios. The unmanned aerial vehicle is introduced as auxiliary equipment, and an intelligent control system is combined, so that the defects can be overcome, and more comprehensive and accurate road condition information and navigation guidance can be provided.
It should be appreciated that the unmanned aerial vehicle may fly above the vehicle, providing a wider field of view, and the unmanned aerial vehicle may cover a larger area to obtain more road condition information than the perceived range of the vehicle itself. This means that traffic jams, road conditions, obstacles and the like can be found in time, providing more comprehensive road condition monitoring.
The unmanned aerial vehicle has high maneuverability and flexibility, can fly freely in the air and change the flight path, so that the unmanned aerial vehicle can respond to road condition change more quickly and adjust the flight position in time so as to provide real-time road condition information and navigation guidance. The flexibility of the unmanned aerial vehicle also enables the unmanned aerial vehicle to enter into areas where some vehicles are difficult to reach, and more accurate road condition data is provided.
The unmanned aerial vehicle can be connected with the vehicle through wireless communication, and road condition data and control instructions are transmitted in real time, so that the vehicle can acquire latest road condition information in time, and navigation adjustment is performed according to data provided by the unmanned aerial vehicle. The real-time data transmission can also accelerate the response speed of the system and improve the accuracy and efficiency of navigation.
The unmanned aerial vehicle is introduced as auxiliary equipment, so that the safety of a vehicle can be enhanced, potential dangerous situations such as traffic accidents, road obstacles and the like can be timely found through monitoring and early warning of the unmanned aerial vehicle, and warning is sent to a vehicle driver. This helps to reduce the occurrence of accidents and provides a safer driving environment.
The unmanned aerial vehicle provides more comprehensive and accurate road condition information, which helps the vehicle select an optimal route and avoid traffic congestion, improving navigation efficiency and reducing driving time and fuel consumption. Meanwhile, the unmanned aerial vehicle can provide real-time navigation guidance to help the driver make smarter driving decisions.
The unmanned aerial vehicle is introduced as auxiliary equipment, so that the unmanned aerial vehicle has the advantages of wider visual field range, high maneuverability and flexibility, real-time data transmission, safety enhancement, navigation efficiency improvement and the like. The development of the technology can bring about remarkable improvement on vehicle safety and navigation systems and improve driving experience.
Fig. 1 is a flowchart of a method for controlling an unmanned aerial vehicle based on the terminal Internet of Vehicles according to an embodiment of the present application. Fig. 2 is a schematic architecture diagram of the method according to an embodiment of the present application. As shown in fig. 1 and fig. 2, the unmanned aerial vehicle control method based on the terminal Internet of Vehicles includes: 110, the vehicle terminal receives a road condition overlook monitoring graph collected by an unmanned aerial vehicle and transmitted through the Internet of Vehicles; 120, features are extracted from the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix; and 130, an unmanned aerial vehicle position adjustment instruction is generated based on the global road condition semantic feature matrix.
In the step 110, by receiving the road condition overlook monitoring graph collected by the unmanned aerial vehicle, the vehicle terminal obtains real-time road condition information, including traffic congestion, road conditions, obstacles, and the like, so that the driver can learn the current road situation in time and make corresponding driving decisions. Because the unmanned aerial vehicle flies in the air, it provides a wider field of view than the vehicle's own perception capability and can cover a larger area. This allows the vehicle terminal to acquire more comprehensive road condition information, not limited to the area around the vehicle.
In the step 120, a global road condition semantic feature matrix may be obtained by performing feature extraction on the road condition overlook monitoring chart. The characteristics can help the vehicle terminal to understand and analyze the road condition of the whole road section, including road condition, traffic flow, traffic sign, signal lamp and the like, and help the vehicle terminal to make more accurate navigation decisions. The original road condition overlook monitoring graph can be converted into a more compact global road condition semantic feature matrix through feature extraction. This can reduce the amount of data transmission and improve the efficiency and speed of data transmission.
In the step 130, the vehicle terminal may generate a position adjustment command of the unmanned aerial vehicle based on the global road condition semantic feature matrix. The instructions can enable the unmanned aerial vehicle to conduct position adjustment according to the current road condition so as to monitor and acquire road condition information better, and the unmanned aerial vehicle is beneficial to providing real-time and accurate road condition data and navigation guidance. By generating the position adjustment instruction of the unmanned aerial vehicle, the unmanned aerial vehicle can be ensured to monitor in a key area, such as a traffic jam area, a road construction area and the like. The method can improve the monitoring effect, help the vehicle terminal to acquire more accurate road condition information, and provide better navigation guidance for the driver.
According to the unmanned aerial vehicle control method based on the terminal internet of vehicles, the global road condition semantic feature matrix can be obtained by receiving the road condition overlooking monitoring image collected by the unmanned aerial vehicle and extracting the features, and the unmanned aerial vehicle position adjustment instruction is generated based on the matrix. The beneficial effects of the steps include obtaining real-time road condition information, more comprehensive visual field scope, overall road condition understanding, reducing data transmission quantity, real-time position adjustment and improving monitoring effect.
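The three steps above can be sketched end to end as follows. Every function body here is an illustrative placeholder under stated assumptions, not the patent's actual models: block-averaging stands in for the CNN-plus-graph-network feature pipeline, the synthetic image with a bright patch stands in for a real top-view monitoring graph, and the "instruction" is simply the vehicle's offset from the image centre.

```python
import numpy as np

def receive_monitoring_image(h=64, w=64):
    """Step 110: stand-in for the top-view image received over the
    Internet of Vehicles; a bright patch marks the vehicle."""
    img = np.zeros((h, w), dtype=np.float32)
    img[40:44, 10:14] = 1.0
    return img

def extract_global_features(img, block=8):
    """Step 120: reduce the image to a compact feature matrix.
    Block-averaging stands in for the CNN + graph-network pipeline."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def generate_adjustment(features):
    """Step 130: command the drone toward the strongest response so
    the vehicle stays centred in its field of view."""
    r, c = np.unravel_index(np.argmax(features), features.shape)
    cr = (features.shape[0] - 1) / 2
    cc = (features.shape[1] - 1) / 2
    return {"forward": float(r - cr), "right": float(c - cc)}

img = receive_monitoring_image()
feats = extract_global_features(img)
cmd = generate_adjustment(feats)
print(cmd)  # offsets from centre, in feature-grid units
```

The design point the sketch preserves is that the instruction is derived from the compact feature matrix, not from the raw image, which is what keeps the transmitted data volume small.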
It will be appreciated that the field of view of the drone is wider than the perception range of the vehicle itself and can cover a larger area. By controlling the position of the unmanned aerial vehicle so that it flies near the vehicle, more road condition information can be provided, including traffic congestion, road conditions, obstacles, and the like. However, while the vehicle is driving, it can easily leave the drone's field of view, which may cause important road condition information to be missed and increase accident risk. In this regard, the technical concept of the present application is to intelligently control the position of the unmanned aerial vehicle, in combination with deep-learning-based artificial intelligence technology, so that the vehicle stays within the drone's field of view.
Specifically, in the step 110, the vehicle terminal receives the road condition overhead monitoring map acquired by the unmanned aerial vehicle and transmitted through the internet of vehicles. In the technical scheme of the application, firstly, a vehicle terminal receives a road condition overlooking monitoring graph which is transmitted through the internet of vehicles and is collected by an unmanned aerial vehicle. The vehicle terminal receives the road condition overlooking monitoring graph transmitted through the Internet of vehicles and collected by the unmanned aerial vehicle, and plays a key role in finally generating the unmanned aerial vehicle position adjustment instruction.
By receiving the road condition overlooking monitoring image acquired by the unmanned aerial vehicle, the vehicle terminal can acquire real-time road condition information. Such information includes traffic congestion conditions, road conditions, obstructions, etc., and such real-time traffic information is critical to generating accurate unmanned aerial vehicle position adjustment instructions.
Through the road condition overlooking monitoring graph, the vehicle terminal can acquire a wider visual field range, and the visual field range is not limited to the range around the vehicle, so that the vehicle terminal can sense the road condition of the whole road section, including the condition of the road in front, the traffic flow, the traffic sign, the signal lamp and the like. The global road condition information is very important for generating global unmanned aerial vehicle position adjustment instructions.
And the vehicle terminal performs data processing and feature extraction on the received road condition overlook monitoring graph. The method can convert the original image data into a more compact global road condition semantic feature matrix, and the vehicle terminal can better understand and analyze the road condition information through extracting and processing the features, so as to provide a basis for generating the unmanned aerial vehicle position adjustment instruction.
Based on the received global road condition semantic feature matrix, the vehicle terminal can carry out navigation decision and generate an unmanned aerial vehicle position adjustment instruction. According to the road condition information, the vehicle terminal can determine the position, the flight height, the flight speed and other parameters which need to be adjusted of the unmanned aerial vehicle so as to better monitor and acquire the road condition information and provide real-time navigation guidance.
The vehicle terminal receives the road condition overlooking monitoring image which is transmitted through the Internet of vehicles and is collected by the unmanned aerial vehicle, plays a key role in generating the position adjustment instruction of the unmanned aerial vehicle finally, provides real-time road condition information, global road condition perception, data processing and feature extraction, and provides a basis for navigation decision and instruction generation. The functions enable the unmanned aerial vehicle to carry out position adjustment according to real-time road conditions and provide accurate road condition data and navigation guidance.
Specifically, in the step 120, feature extraction is performed on the road condition top view monitoring graph to obtain a global road condition semantic feature matrix. Fig. 3 is a flowchart of the sub-steps of step 120 in a terminal internet of vehicles-based unmanned aerial vehicle control method according to an embodiment of the present application. As shown in fig. 3, performing feature extraction on the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix, which includes: 121, extracting road condition characteristics of the road condition overlook monitoring graph to obtain a road condition monitoring characteristic graph; 122, extracting node features and topological features from the road condition monitoring feature map to obtain a sequence of road condition local feature vectors and a local feature-to-local feature correlation topological feature matrix; and 123, passing the sequence of the road condition local feature vector and the local feature inter-feature association topological feature matrix through a graph neural network model to obtain the global road condition semantic feature matrix.
In the application, firstly, the road condition characteristics of the road condition overlook monitoring graph are extracted to obtain a road condition monitoring characteristic graph. The road condition features in the image are extracted by extracting features from the road condition overlooking monitoring image, and the features can comprise vehicles, pedestrians, traffic signs, road marks and the like. By extracting the road condition characteristics, the road condition information in the image can be converted into an available data representation form. By extracting road condition characteristics, the original image data can be converted into a more compact road condition monitoring characteristic diagram with more information content. This reduces the cost of data storage and transmission and increases the efficiency of subsequent processing.
And then extracting node features and topological features from the road condition monitoring feature map to obtain a sequence of road condition local feature vectors and a local feature-to-local feature correlation topological feature matrix. Node features are extracted from the road condition monitoring feature map, that is, feature vectors of each node (such as traffic sign, vehicle, pedestrian, etc.) are extracted from the image, and the node features may include information of position, size, color, shape, etc. By extracting these features, the attributes and states of each node can be better represented and understood. In addition to the node characteristics, this step also extracts topological relation characteristics between nodes in the image, such as connection relations, relative positions, and the like between the nodes. These topological features can help establish associations between nodes and provide more comprehensive traffic information. The road condition local features can be represented as the feature vector and the associated topological feature matrix in the form of sequences by extracting the node features and the topological features, and the structured data representation mode can better capture the relation among the nodes and the whole road condition.
Finally, the sequence of road condition local feature vectors and the local inter-feature association topological feature matrix are passed through a graph neural network model to obtain the global road condition semantic feature matrix. Inputting the sequence of road condition local feature vectors and the local inter-feature association topological feature matrix into the graph neural network model enables global semantic understanding of the whole road condition: the model can learn the associations between nodes and global road condition features, thereby generating a global road condition semantic feature matrix. The graph neural network model can extract higher-level semantic features to better represent road conditions, and through its learning capability it can mine deeper features hidden in the road condition data, providing more accurate and richer global road condition information. By integrating and fusing the local features and the topological features, the graph neural network model obtains a global road condition semantic feature matrix that better captures comprehensive information of the whole road condition and provides more accurate road condition description and semantic understanding.
The step of extracting the characteristics of the road condition overlook monitoring graph to obtain a global road condition semantic characteristic matrix comprises the steps of extracting road condition characteristics, extracting node characteristics and topology characteristics, and generating the global road condition semantic characteristic matrix through a graph neural network model. The method has the beneficial effects of road condition information extraction, simplified data representation, road condition local feature extraction, associated topology feature extraction, data representation and structuring, global road condition semantic understanding, advanced feature extraction and comprehensive information fusion.
In one embodiment of the present application, for the step 121, extracting the road condition features of the road condition overlook monitoring chart to obtain a road condition monitoring feature chart includes: and the road condition overlooking monitoring graph is passed through a road condition feature extractor based on a convolutional neural network model to obtain the road condition monitoring feature graph.
Next, road condition features are extracted from the road condition overlook monitoring graph to obtain a road condition monitoring feature map. In a specific example of the present application, this encoding process includes: passing the road condition overlook monitoring graph through a road condition feature extractor based on a convolutional neural network model to obtain the road condition monitoring feature map. That is, a convolutional neural network model is used to construct the road condition feature extractor and capture the road condition features contained in the road condition overlook monitoring graph.
It should be appreciated that Convolutional Neural Networks (CNNs) are an efficient image processing model that can automatically learn the representation of features in an image. By using the CNN-based road condition feature extractor, rich and advanced image features can be extracted from the road condition overlook monitoring graph. These features may include edges, textures, colors, shapes, etc., which better capture key information in the road condition image.
Features extracted by the road condition feature extractor may have a lower dimension and a more compact representation. Compared with the original road condition overlook monitoring image, the road condition monitoring feature image can more effectively represent information in the image, and the information compression is beneficial to reducing the storage and transmission cost of data and improving the efficiency of subsequent processing.
The road condition feature extractor based on the convolutional neural network can gradually extract more and more abstract feature representations through multi-layer convolution and pooling operation. The abstract representations can capture higher-level and more semantic features in the image, such as objects, structures, scenes and the like, and the content in the road condition image can be better understood and analyzed through the road condition monitoring feature map.
Convolutional neural networks have the characteristics of translational invariance and partial scale invariance, meaning that they can maintain stable feature representations for transformations such as translation, rotation, and scaling of images to some extent. The road condition monitoring feature map with invariance and generalization can be obtained through the CNN-based road condition feature extractor, so that the feature map can provide reliable feature representation under different scenes and conditions.
The road condition overlook monitoring graph is processed through a road condition feature extractor based on a convolutional neural network model, so that the road condition monitoring feature graph has the advantages of feature extraction, information compression, abstract representation, invariance generalization and the like. These effects help to improve understanding and analysis of road condition images and provide a more accurate and useful representation of characteristics for subsequent road condition processing and decision making.
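As a hedged illustration of the convolutional feature extraction described above, the following NumPy sketch implements a single convolution-plus-ReLU stage. A real extractor stacks many learned layers; the fixed Sobel-x kernel and the toy edge image here are illustrative assumptions only.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 2-D 'valid' convolution (cross-correlation form)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def road_feature_extractor(img):
    """One convolution + ReLU stage; a real extractor stacks many
    learned layers, whereas this Sobel-x kernel is fixed."""
    edge_kernel = np.array([[-1.0, 0.0, 1.0],
                            [-2.0, 0.0, 2.0],
                            [-1.0, 0.0, 1.0]], dtype=np.float32)
    return np.maximum(conv2d_valid(img, edge_kernel), 0.0)

# A toy "monitoring image" with a vertical edge, e.g. a road boundary.
img = np.zeros((8, 8), dtype=np.float32)
img[:, 4:] = 1.0
fmap = road_feature_extractor(img)
```

The edge response in `fmap` shows, in miniature, how such an extractor turns raw pixels into the edge, texture, and shape features mentioned above.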
For the step 122, extracting node features and topology features from the road condition monitoring feature map to obtain a sequence of road condition local feature vectors and a local feature-to-feature association topology feature matrix, including: performing feature flattening processing on each feature matrix of the road condition monitoring feature map along the channel dimension to obtain a sequence of the road condition local feature vector; calculating cosine similarity between any two road condition local feature vectors in the sequence of the road condition local feature vectors to obtain a local inter-feature association topology matrix; and the local inter-feature association topology matrix is passed through a topology feature extractor based on a convolutional neural network model to obtain the local inter-feature association topology feature matrix.
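The operations enumerated for step 122 (flattening each channel of the feature map into a local feature vector, then building the inter-feature association topology from pairwise cosine similarities) can be sketched as follows; the channel values are illustrative assumptions, and the subsequent CNN-based topology feature extractor is omitted.

```python
# Sketch of step 122: channel-wise flattening plus a cosine-similarity
# association topology matrix. Feature values are made up for illustration.
import math

def flatten_channels(feature_map):
    """feature_map: list of C feature matrices -> list of C flat vectors."""
    return [[x for row in channel for x in row] for channel in feature_map]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def topology_matrix(vectors):
    """Pairwise cosine similarity between all local feature vectors."""
    n = len(vectors)
    return [[cosine(vectors[i], vectors[j]) for j in range(n)] for i in range(n)]

channels = [[[1, 0], [0, 1]],   # three 2x2 feature matrices (C = 3)
            [[0, 1], [1, 0]],
            [[1, 0], [0, 1]]]
vectors = flatten_channels(channels)     # sequence of local feature vectors
A = topology_matrix(vectors)             # 3x3, symmetric, unit diagonal
```

The resulting matrix A plays the role of the local inter-feature association topology matrix before it is refined by the topology feature extractor.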
It should be considered that the information in the road condition overlook monitoring graph often has complex spatial relationships and topological structures. Conventional convolutional neural networks are generally suitable for processing grid data in Euclidean space, but have limited modeling capability for graph-structured data. A graph neural network can effectively process graph-structured data in non-Euclidean space, and can better capture the relations and dependencies among different positions in the road condition overlook monitoring graph. That is, the graph neural network can obtain global context information by means of information transfer and aggregation over the graph structure.
Because the road condition monitoring feature map is a feature representation obtained by extracting local neighborhood features with the road condition feature extractor based on the convolutional neural network model, in the technical scheme of the application, each feature matrix of the road condition monitoring feature map along the channel dimension is regarded as node information in graph data. Graph data are thereby constructed, so that richer feature representations can be extracted with the graph neural network model, enhancing the capability of recognizing and reasoning about complex road condition patterns.
Specifically, node features and topological features are extracted from the road condition monitoring feature map to obtain the sequence of road condition local feature vectors and the local inter-feature association topology feature matrix. In a specific example of the present application, the encoding process for extracting node features and topological features from the road condition monitoring feature map includes: firstly, performing feature flattening processing on each feature matrix of the road condition monitoring feature map along the channel dimension to obtain the sequence of road condition local feature vectors; then, calculating the cosine similarity between any two road condition local feature vectors in the sequence of road condition local feature vectors to obtain the local inter-feature association topology matrix; and then passing the local inter-feature association topology matrix through the topology feature extractor based on the convolutional neural network model to obtain the local inter-feature association topology feature matrix.
Further, the sequence of the road condition local feature vectors and the local feature inter-feature association topological feature matrix are processed through a graph neural network model to obtain a global road condition semantic feature matrix.
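A single message-passing step of the kind a graph neural network performs over the node features and the association topology can be sketched as follows. This assumes a GCN-style update H' = ReLU(Â·H·W) with a row-normalized adjacency Â; the toy graph, identity weight matrix W, and node features are illustrative assumptions, not parameters of the claimed model.

```python
# Illustrative single graph-convolution step: each node's feature vector is
# replaced by a weighted average over its neighborhood in the topology matrix.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def row_normalize(a):
    """Scale each adjacency row to sum to 1 (zero rows stay zero)."""
    return [[x / sum(row) if sum(row) else 0.0 for x in row] for row in a]

def gcn_layer(A, H, W):
    """H' = ReLU(row_normalize(A) @ H @ W)."""
    out = matmul(matmul(row_normalize(A), H), W)
    return [[max(0.0, x) for x in row] for row in out]

A = [[1.0, 1.0], [1.0, 1.0]]   # two fully connected nodes, self-loops included
H = [[1.0, 0.0], [0.0, 1.0]]   # stacked node (local) feature vectors
W = [[1.0, 0.0], [0.0, 1.0]]   # identity weights, chosen for clarity
H_out = gcn_layer(A, H, W)     # each node averages its neighborhood
```

Stacking such layers is how the model fuses the local feature vectors with the association topology into the global road condition semantic feature matrix.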
Specifically, in the step 130, based on the global road condition semantic feature matrix, an unmanned aerial vehicle position adjustment instruction is generated, including: the global road condition semantic feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether a vehicle object is contained in a road condition overlook monitoring graph or not; and generating the unmanned aerial vehicle position adjustment instruction based on the classification result, wherein the unmanned aerial vehicle position adjustment instruction is transmitted to the unmanned aerial vehicle through the Internet of vehicles and is used for controlling the position of the unmanned aerial vehicle so that the vehicle is in the visual field range of the unmanned aerial vehicle.
According to the application, by inputting the global road condition semantic feature matrix into the classifier, whether the road condition overlook monitoring graph contains the vehicle object can be judged, the real-time monitoring of the vehicle condition on the road is facilitated, and accurate information of the existence of the vehicle is provided. Based on the classification result, an unmanned aerial vehicle position adjustment instruction can be generated, and the instruction is transmitted to the unmanned aerial vehicle through the Internet of vehicles for controlling the position of the unmanned aerial vehicle. The unmanned aerial vehicle can adjust the position of the unmanned aerial vehicle in real time according to the position and the running direction of the vehicle, so that the vehicle is always in the visual field of the unmanned aerial vehicle, real-time navigation guidance can be provided, the vehicle is helped to avoid traffic jam, the optimal route is selected, and the like, and the navigation efficiency and the safety of the vehicle are improved.
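The classification and instruction-generation flow of step 130 can be sketched as follows, assuming a linear-plus-softmax two-class classifier over the unfolded feature vector; the weights, the feature values, and the instruction dictionary format are hypothetical illustrations, not the claimed classifier's actual parameters or protocol.

```python
# Sketch of step 130: classify "vehicle present?" from the unfolded global
# road condition semantic feature vector, then emit a (hypothetical)
# position adjustment instruction.
import math

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def classify(features, weights, bias):
    """Linear layer followed by softmax; returns class probabilities."""
    logits = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

def adjustment_instruction(probs):
    # Class 1 = "a vehicle object is present in the top-view monitoring image".
    if probs[1] > probs[0]:
        return {"action": "track", "keep_in_view": True}
    return {"action": "search", "keep_in_view": False}

features = [0.2, 0.9]                  # toy unfolded feature vector
weights = [[1.0, -1.0], [-1.0, 1.0]]   # illustrative 2-class linear weights
probs = classify(features, weights, [0.0, 0.0])
cmd = adjustment_instruction(probs)    # transmitted to the drone via V2X
```

In the claimed system the instruction is then transmitted over the internet of vehicles so that the vehicle stays within the drone's field of view.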
Through the collaborative work with the unmanned aerial vehicle, the vehicle can acquire the overall road condition information overlooked by the unmanned aerial vehicle, and the unmanned aerial vehicle can provide a wider field of vision and more comprehensive road condition perception, including traffic conditions, road obstacles, accidents, and the like. Therefore, the vehicle can more accurately understand the road conditions, make corresponding driving decisions, and improve driving safety. Through the unmanned aerial vehicle position adjustment instruction transmitted over the internet of vehicles, the vehicle can receive and respond to the instruction of the unmanned aerial vehicle in real time. This real-time update and response mechanism makes the collaborative work between the vehicle and the unmanned aerial vehicle more efficient and flexible, so that the vehicle can adjust the driving strategy in time and cope with different road condition changes.
Then, the global road condition semantic feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether a vehicle object is contained in a road condition overlook monitoring graph or not; and generating an unmanned aerial vehicle position adjustment instruction based on the classification result, wherein the unmanned aerial vehicle position adjustment instruction is transmitted to the unmanned aerial vehicle through the Internet of vehicles and is used for controlling the position of the unmanned aerial vehicle so that the vehicle is in the visual field range of the unmanned aerial vehicle.
In an embodiment of the present application, the unmanned aerial vehicle control method based on terminal internet of vehicles further includes a training step: training the road condition feature extractor based on the convolutional neural network model, the topological feature extractor based on the convolutional neural network model, the graph neural network model and the classifier; wherein the training step comprises: acquiring training data, wherein the training data comprises a training road condition overlook monitoring graph collected by an unmanned aerial vehicle and a true value of whether the training road condition overlook monitoring graph contains a vehicle object; passing the training road condition overlook monitoring graph through the road condition feature extractor based on the convolutional neural network model to obtain a training road condition monitoring feature map; performing feature flattening processing on each feature matrix of the training road condition monitoring feature map along the channel dimension to obtain a sequence of training road condition local feature vectors; calculating the cosine similarity between any two training road condition local feature vectors in the sequence of training road condition local feature vectors to obtain a training local inter-feature association topology matrix; passing the training local inter-feature association topology matrix through the topology feature extractor based on the convolutional neural network model to obtain a training local inter-feature association topology feature matrix; passing the sequence of training road condition local feature vectors and the training local inter-feature association topology feature matrix through the graph neural network model to obtain a training global road condition semantic feature matrix; passing the training global road condition semantic feature matrix through the classifier to obtain a classification loss function value; and training the road condition feature extractor based on the convolutional neural network model, the topological feature extractor based on the convolutional neural network model, the graph neural network model and the classifier by using the classification loss function value, wherein in each round of iteration of the training, a fine granularity density prediction search optimization iteration of the weight space is performed on the global road condition semantic feature vector obtained after the global road condition semantic feature matrix is unfolded.
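The classification-loss computation in the training step can be sketched as follows, assuming a cross-entropy loss over the classifier's softmax output and a plain gradient-descent update of an assumed scalar weight; all numeric values are illustrative, and the patented fine granularity density prediction search optimization of the weight space is not reproduced here.

```python
# Hedged sketch of the classification-loss part of the training step.
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[label])

def sgd_step(weight, grad, lr=0.1):
    """One plain gradient-descent update (illustrative optimizer)."""
    return weight - lr * grad

probs = [0.2, 0.8]   # classifier softmax output for one training sample
label = 1            # true value: the training image contains a vehicle object
loss = cross_entropy(probs, label)

# For a softmax classifier, the gradient of the loss w.r.t. the logits is
# (probs - one_hot(label)); here one assumed scalar weight is updated with
# the wrong-class logit's gradient component.
grad = probs[0] - 0.0
weight = sgd_step(1.0, grad)
```

The loss value drives updates of the feature extractors, the graph neural network model, and the classifier jointly, as described above.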
According to the technical scheme of the application, the road condition overlook monitoring graph is processed by the road condition feature extractor based on the convolutional neural network model to obtain the road condition monitoring feature map, and feature flattening processing is performed on each feature matrix of the road condition monitoring feature map along the channel dimension to obtain the sequence of road condition local feature vectors. Each feature value of the road condition local feature vectors expresses the locally associated image semantic features of the road condition overlook monitoring graph extracted by the convolutional neural network model, so that the road condition overlook monitoring graph has an image semantic feature super-resolution expression distributed over the spatial dimensions of the feature matrices. After the sequence of road condition local feature vectors and the local inter-feature association topological feature matrix are passed through the graph neural network model to obtain the global road condition semantic feature matrix, the global road condition semantic feature matrix expresses the topological association of the image semantic features of the road condition overlook monitoring graph under a channel-distribution semantic cosine similarity topology. The global road condition semantic feature matrix therefore has, besides the feature-matrix spatial semantic expression dimension at feature-value granularity, a channel semantic expression dimension at vector granularity; that is, it has super-resolution expression characteristics in a multi-dimensional context, which affects the training efficiency when classification is carried out through the classifier.
Therefore, when the global road condition semantic feature matrix is trained by the classifier, in each iteration, the global road condition semantic feature vector obtained by unfolding the global road condition semantic feature matrix is denoted as V, and fine granularity density prediction search optimization of the weight space is performed on V. That is, in each iteration of the training, performing the fine granularity density prediction search optimization iteration of the weight space on the global road condition semantic feature vector obtained after the global road condition semantic feature matrix is unfolded comprises: performing the fine granularity density prediction search optimization iteration of the weight space on the global road condition semantic feature vector by using the following optimization formula; wherein, the optimization formula is:
wherein M_{k-1} and M_k are the weight matrices of the last and the current iteration, respectively, V is the global road condition semantic feature vector, μ_{k-1} and μ_k respectively represent the global means of the feature vectors M_{k-1} ⊗ V and M_k ⊗ V, b is a bias vector, V′ is the global road condition semantic feature vector after iteration, ⊗ represents matrix multiplication, ⊕ represents matrix addition, and ⊙ represents position-wise multiplication.
Here, for the super-resolution expression characteristics of the global road condition semantic feature vector in the multi-dimensional context, the fine granularity density prediction search optimization of the weight space provides, through the feedforward serialized mapping of the projected vector space of the global road condition semantic feature vector, a corresponding fine-grained weight search strategy for the dense prediction task in the weight search space, while reducing the overall sequential complexity of the representation of the global road condition semantic feature vector in the weight search space, thereby improving the training efficiency.
In summary, the unmanned aerial vehicle control method 100 based on the terminal internet of vehicles according to the embodiment of the present application is illustrated, and the unmanned aerial vehicle is intelligently controlled in position by combining the artificial intelligence technology based on deep learning, so that the vehicle is within the field of view of the unmanned aerial vehicle.
In one embodiment of the present application, fig. 4 is a block diagram of a terminal internet of vehicles-based unmanned aerial vehicle control system according to an embodiment of the present application. As shown in fig. 4, a terminal internet of vehicles-based unmanned aerial vehicle control system 200 according to an embodiment of the present application includes: the monitoring map acquisition module 210 is configured to receive, by using a vehicle terminal, a road condition overlooking monitoring map acquired by an unmanned aerial vehicle and transmitted through a vehicle network; the feature extraction module 220 is configured to perform feature extraction on the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix; and an adjustment instruction generating module 230, configured to generate an unmanned aerial vehicle position adjustment instruction based on the global road condition semantic feature matrix.
Specifically, in the unmanned aerial vehicle control system based on terminal internet of vehicles, the feature extraction module includes: the road condition feature extraction unit is used for extracting the road condition features of the road condition overlook monitoring graph to obtain a road condition monitoring feature graph; the node characteristic and topology characteristic extraction unit is used for extracting node characteristics and topology characteristics from the road condition monitoring characteristic diagram to obtain a sequence of road condition local characteristic vectors and a local characteristic inter-correlation topology characteristic matrix; and the map neural network unit is used for obtaining the global road condition semantic feature matrix through a map neural network model by the sequence of the road condition local feature vector and the local feature inter-feature association topological feature matrix.
Specifically, in the unmanned aerial vehicle control system based on the terminal internet of vehicles, the road condition feature extraction unit is configured to: pass the road condition overlook monitoring graph through a road condition feature extractor based on a convolutional neural network model to obtain the road condition monitoring feature graph.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described unmanned aerial vehicle control system based on the terminal internet of vehicles have been described in detail in the above description of the unmanned aerial vehicle control method based on the terminal internet of vehicles with reference to fig. 1 to 3, and thus, repetitive descriptions thereof will be omitted.
As described above, the unmanned aerial vehicle control system 200 based on the terminal internet of vehicles according to the embodiment of the present application may be implemented in various terminal devices, for example, a server for unmanned aerial vehicle control based on the terminal internet of vehicles. In one example, the unmanned aerial vehicle control system 200 based on the terminal internet of vehicles may be integrated into the terminal device as a software module and/or a hardware module. For example, it may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, it can also be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the unmanned aerial vehicle control system 200 based on the terminal internet of vehicles and the terminal device may be separate devices, and the unmanned aerial vehicle control system 200 may be connected to the terminal device via a wired and/or wireless network, and transmit the interaction information in an agreed data format.
Fig. 5 is a schematic view of a scenario of a method for controlling an unmanned aerial vehicle based on terminal internet of vehicles according to an embodiment of the present application. As shown in fig. 5, in this application scenario, first, a vehicle terminal receives a road condition overhead monitoring map (e.g., C as illustrated in fig. 5) acquired by an unmanned aerial vehicle and transmitted through the internet of vehicles; then, the obtained road condition overlook monitoring graph is input into a server (e.g., S as illustrated in fig. 5) deployed with an unmanned aerial vehicle control algorithm based on the terminal internet of vehicles, wherein the server can process the road condition overlook monitoring graph based on the unmanned aerial vehicle control algorithm to generate an unmanned aerial vehicle position adjustment instruction.
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (3)

1. The unmanned aerial vehicle control method based on the terminal internet of vehicles is characterized by comprising the following steps of:
the vehicle terminal receives a road condition overlooking monitoring chart transmitted through the Internet of vehicles and collected by the unmanned aerial vehicle;
extracting features of the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix;
generating an unmanned aerial vehicle position adjustment instruction based on the global road condition semantic feature matrix;
Feature extraction is carried out on the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix, and the method comprises the following steps:
extracting road condition characteristics of the road condition overlooking monitoring graph to obtain a road condition monitoring characteristic graph;
extracting node features and topological features from the road condition monitoring feature map to obtain a sequence of road condition local feature vectors and a local feature-to-local feature correlation topological feature matrix;
the sequence of the road condition local feature vector and the local feature inter-correlation topological feature matrix are processed through a graph neural network model to obtain the global road condition semantic feature matrix;
extracting the road condition characteristics of the road condition overlook monitoring graph to obtain a road condition monitoring characteristic graph, comprising:
the road condition overlooking monitoring graph is passed through a road condition feature extractor based on a convolutional neural network model to obtain the road condition monitoring feature graph;
extracting node features and topological features from the road condition monitoring feature map to obtain a sequence of road condition local feature vectors and a local feature-to-local feature correlation topological feature matrix, wherein the method comprises the following steps:
performing feature flattening processing on each feature matrix of the road condition monitoring feature map along the channel dimension to obtain a sequence of the road condition local feature vector;
Calculating cosine similarity between any two road condition local feature vectors in the sequence of the road condition local feature vectors to obtain a local inter-feature association topology matrix;
the local inter-feature association topology matrix passes through a topology feature extractor based on a convolutional neural network model to obtain the local inter-feature association topology feature matrix;
based on the global road condition semantic feature matrix, generating an unmanned aerial vehicle position adjustment instruction comprises:
the global road condition semantic feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether a vehicle object is contained in a road condition overlook monitoring graph or not;
and generating the unmanned aerial vehicle position adjustment instruction based on the classification result, wherein the unmanned aerial vehicle position adjustment instruction is transmitted to the unmanned aerial vehicle through the internet of vehicles and is used for controlling the position of the unmanned aerial vehicle so that the vehicle is in the visual field range of the unmanned aerial vehicle;
the method further comprises the training step of: training the road condition feature extractor based on the convolutional neural network model, the topological feature extractor based on the convolutional neural network model, the graph neural network model and the classifier;
Wherein the training step comprises:
acquiring training data, wherein the training data comprises a training road condition overlook monitoring graph collected by an unmanned aerial vehicle and a true value of whether the training road condition overlook monitoring graph contains a vehicle object;
the training road condition overlook monitoring graph passes through the road condition feature extractor based on the convolutional neural network model to obtain a training road condition monitoring feature graph;
performing feature flattening processing on each feature matrix of the training road condition monitoring feature map along the channel dimension to obtain a sequence of training road condition local feature vectors;
calculating cosine similarity between any two training road condition local feature vectors in the sequence of the training road condition local feature vectors to obtain a training local feature-to-feature association topology matrix;
the correlation topology matrix among the training local features passes through the topology feature extractor based on the convolutional neural network model to obtain the correlation topology feature matrix among the training local features;
the sequence of the training road condition local feature vector and the training local feature inter-correlation topological feature matrix are processed through a graph neural network model to obtain a training global road condition semantic feature matrix;
The training global road condition semantic feature matrix is passed through a classifier to obtain a classification loss function value;
training the road condition feature extractor based on the convolutional neural network model, the topological feature extractor based on the convolutional neural network model, the graph neural network model and the classifier by using the classification loss function value, wherein in each round of iteration of the training, fine granularity density prediction search optimization iteration of a weight space is performed on the global road condition semantic feature vector obtained after the global road condition semantic feature matrix is unfolded.
2. The unmanned aerial vehicle control method based on the terminal internet of vehicles according to claim 1, wherein in each iteration of the training, performing fine granularity density prediction search optimization iteration of a weight space on the global road condition semantic feature vector obtained after the global road condition semantic feature matrix is unfolded, comprises: carrying out fine granularity density prediction search optimization iteration of a weight space on the global road condition semantic feature vector obtained after the global road condition semantic feature matrix is unfolded by using the following optimization formula;
wherein, the optimization formula is:
wherein M_{k-1} and M_k are the weight matrices of the last and the current iteration, respectively, V is the global road condition semantic feature vector, μ_{k-1} and μ_k respectively represent the global means of the feature vectors M_{k-1} ⊗ V and M_k ⊗ V, b is a bias vector, V′ is the global road condition semantic feature vector after iteration, ⊗ represents matrix multiplication, ⊕ represents matrix addition, and ⊙ represents position-wise multiplication.
3. An unmanned aerial vehicle control system based on the terminal internet of vehicles, characterized by comprising:
the monitoring graph acquisition module is used for receiving the road condition overlooking monitoring graph which is transmitted through the internet of vehicles and is acquired by the unmanned aerial vehicle by the vehicle terminal;
the feature extraction module is used for extracting features of the road condition overlook monitoring graph to obtain a global road condition semantic feature matrix;
the adjustment instruction generation module is used for generating an unmanned aerial vehicle position adjustment instruction based on the global road condition semantic feature matrix;
the feature extraction module comprises:
the road condition feature extraction unit is used for extracting the road condition features of the road condition overlook monitoring graph to obtain a road condition monitoring feature graph;
the node characteristic and topology characteristic extraction unit is used for extracting node characteristics and topology characteristics from the road condition monitoring characteristic diagram to obtain a sequence of road condition local characteristic vectors and a local characteristic inter-correlation topology characteristic matrix;
The map neural network unit is used for enabling the sequence of the road condition local feature vectors and the local feature inter-feature association topological feature matrix to pass through a map neural network model to obtain the global road condition semantic feature matrix;
the road condition feature extraction unit is specifically configured to:
the road condition overlooking monitoring graph is passed through a road condition feature extractor based on a convolutional neural network model to obtain the road condition monitoring feature graph;
the node characteristic and topology characteristic extraction unit is specifically configured to:
performing feature flattening processing on each feature matrix of the road condition monitoring feature map along the channel dimension to obtain a sequence of the road condition local feature vector;
calculating cosine similarity between any two road condition local feature vectors in the sequence of the road condition local feature vectors to obtain a local inter-feature association topology matrix;
the local inter-feature association topology matrix passes through a topology feature extractor based on a convolutional neural network model to obtain the local inter-feature association topology feature matrix;
the adjustment instruction generation module is specifically configured to:
the global road condition semantic feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether a vehicle object is contained in a road condition overlook monitoring graph or not;
And generating the unmanned aerial vehicle position adjustment instruction based on the classification result, wherein the unmanned aerial vehicle position adjustment instruction is transmitted to the unmanned aerial vehicle through the internet of vehicles and is used for controlling the position of the unmanned aerial vehicle so that the vehicle is in the visual field range of the unmanned aerial vehicle;
the system further includes a training module for training the road condition feature extractor based on the convolutional neural network model, the topological feature extractor based on the convolutional neural network model, the graph neural network model and the classifier;
the training module is specifically configured to:
acquiring training data, wherein the training data comprises a training road condition overlook monitoring graph collected by an unmanned aerial vehicle and a true value of whether the training road condition overlook monitoring graph contains a vehicle object;
the training road condition overlook monitoring graph passes through the road condition feature extractor based on the convolutional neural network model to obtain a training road condition monitoring feature graph;
performing feature flattening processing on each feature matrix of the training road condition monitoring feature map along the channel dimension to obtain a sequence of training road condition local feature vectors;
calculating cosine similarity between any two training road condition local feature vectors in the sequence of the training road condition local feature vectors to obtain a training local feature-to-feature association topology matrix;
passing the training inter-local-feature association topology matrix through the topology feature extractor based on the convolutional neural network model to obtain a training inter-local-feature association topology feature matrix;
passing the sequence of training road condition local feature vectors and the training inter-local-feature association topology feature matrix through a graph neural network model to obtain a training global road condition semantic feature matrix;
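One message-passing step of a graph neural network of this kind can be sketched in numpy. This is a generic GCN-style propagation under stated assumptions, not the patent's specific model; the weight matrix W, the activation choice, and the random inputs are illustrative stand-ins.

```python
import numpy as np

# Minimal sketch of one graph message-passing step: node features F (the
# local feature vectors) are mixed according to the inter-feature topology
# matrix A, then linearly projected:
#     F_out = ReLU(D^-1 A F W)
rng = np.random.default_rng(1)
C, D, D_out = 4, 6, 3
F = rng.normal(size=(C, D))                # sequence of local feature vectors
A = np.abs(rng.normal(size=(C, C)))        # stand-in topology (similarity) matrix
A = (A + A.T) / 2.0                        # symmetrize
A_norm = A / A.sum(axis=1, keepdims=True)  # row-normalize (D^-1 A)
W = rng.normal(size=(D, D_out))            # hypothetical learnable weights
F_out = np.maximum(A_norm @ F @ W, 0.0)    # global semantic feature matrix
```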
passing the training global road condition semantic feature matrix through the classifier to obtain a classification loss function value;
training the road condition feature extractor based on the convolutional neural network model, the topology feature extractor based on the convolutional neural network model, the graph neural network model, and the classifier using the classification loss function value, wherein, in each training iteration, a fine-grained density-prediction search optimization of the weight space is performed on the global road condition semantic feature vector obtained by unrolling the training global road condition semantic feature matrix.
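The classifier and its classification loss can be sketched as a plain softmax classifier with cross-entropy. This is a generic sketch, not the patented classifier, and it does not model the patent's fine-grained density-prediction search optimization; the function names and weight shapes are hypothetical.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_and_loss(feature_vec, W, b, label):
    """Linear classifier over the unrolled semantic feature vector, plus the
    cross-entropy loss against the ground-truth label
    (0 = no vehicle, 1 = vehicle present)."""
    probs = softmax(feature_vec @ W + b)
    loss = -np.log(probs[label] + 1e-12)
    return int(np.argmax(probs)), float(loss)
```

During training, the loss value would be backpropagated through the graph neural network and both convolutional feature extractors.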
CN202311184066.6A 2023-09-14 2023-09-14 Unmanned aerial vehicle control system and method based on terminal Internet of vehicles Active CN116909317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311184066.6A CN116909317B (en) 2023-09-14 2023-09-14 Unmanned aerial vehicle control system and method based on terminal Internet of vehicles

Publications (2)

Publication Number Publication Date
CN116909317A CN116909317A (en) 2023-10-20
CN116909317B 2023-11-21

Family

ID=88351575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311184066.6A Active CN116909317B (en) 2023-09-14 2023-09-14 Unmanned aerial vehicle control system and method based on terminal Internet of vehicles

Country Status (1)

Country Link
CN (1) CN116909317B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218858A (en) * 2023-10-25 2023-12-12 河北高速公路集团有限公司承德分公司 Traffic safety early warning system and method for expressway
CN117608283A (en) * 2023-11-08 2024-02-27 浙江孚宝智能科技有限公司 Autonomous navigation method and system for robot

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105825713A (en) * 2016-04-08 2016-08-03 重庆大学 Vehicular-mounted unmanned aerial vehicle auxiliary driving system and operation mode
EP3273318A1 (en) * 2016-07-22 2018-01-24 Parrot Drones Autonomous system for collecting moving images by a drone with target tracking and improved target positioning
CN108648489A (en) * 2018-05-15 2018-10-12 湖北文理学院 A kind of traffic information Real-Time Sharing system and method based on car networking
CN116466737A (en) * 2023-02-02 2023-07-21 岚图汽车科技有限公司 Control method and related equipment of vehicle-mounted unmanned aerial vehicle

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
FR3054334A1 (en) * 2016-07-22 2018-01-26 Parrot Drones AUTONOMOUS ANIMATED VIEWING SYSTEM COMPRISING A DRONE AND A GROUND STATION, AND ASSOCIATED METHOD.

Similar Documents

Publication Publication Date Title
CN110837778B (en) Traffic police command gesture recognition method based on skeleton joint point sequence
CN116909317B (en) Unmanned aerial vehicle control system and method based on terminal Internet of vehicles
US10860896B2 (en) FPGA device for image classification
US10817731B2 (en) Image-based pedestrian detection
CN109582993B (en) Urban traffic scene image understanding and multi-view crowd-sourcing optimization method
US11455813B2 (en) Parametric top-view representation of complex road scenes
Boudjit et al. Human detection based on deep learning YOLO-v2 for real-time UAV applications
EP3690744B1 (en) Method for integrating driving images acquired from vehicles performing cooperative driving and driving image integrating device using same
US12023812B2 (en) Systems and methods for sensor data packet processing and spatial memory updating for robotic platforms
CN110705412A (en) Video target detection method based on motion history image
CN113536920B (en) Semi-supervised three-dimensional point cloud target detection method
CN116923442B (en) Control strategy generation method and system for intelligent network-connected automobile
CN114266889A (en) Image recognition method and device, readable medium and electronic equipment
Vlahogianni et al. Model free identification of traffic conditions using unmanned aerial vehicles and deep learning
US20230154198A1 (en) Computer-implemented method for multimodal egocentric future prediction
WO2023192397A1 (en) Capturing and simulating radar data for autonomous driving systems
Wang et al. Aprus: An airborne altitude-adaptive purpose-related uav system for object detection
CN116880462A (en) Automatic driving model, training method, automatic driving method and vehicle
Lin et al. Application of the efficientdet algorithm in traffic flow statistics
CN117591847B (en) Model pointing evaluating method and device based on vehicle condition data
Zhang et al. Accurate Detection and Tracking of Small‐Scale Vehicles in High‐Altitude Unmanned Aerial Vehicle Bird‐View Imagery
Mahendran et al. Multi-Modal Visual Features Perception Technology for Internet of Vehicles (IoV)
Wang et al. Deep Reinforcement Learning based Planning for Urban Self-driving with Demonstration and Depth Completion
Zhang Learning-based monocular vision obstacle detection and avoidance for UAV navigation in urban airspace
Bhatia et al. Road Image Segmentation for Autonomous Car

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant