CN115100623A - Unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation

Unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation

Info

Publication number
CN115100623A
Authority
CN
China
Prior art keywords
task
unmanned aerial vehicle
node
blind area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210528443.2A
Other languages
Chinese (zh)
Inventor
张棋森
刘凯
蒋璐遥
钟成亮
晏国志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202210528443.2A
Publication of CN115100623A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an unmanned aerial vehicle (UAV)-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation. The system comprises at least a UAV terminal, mobile edge nodes, static edge nodes, and a cloud node, each with computing and transmission capabilities. The method comprises the following steps: 1. The UAV is deployed in the air, monitors vehicle blind areas in real time, collects real-time video streams, starts the blind-area pedestrian detection service in a timely manner, and notifies the static edge node on the roadside. 2. According to the task demand information, the static edge node determines which node the task is offloaded to, using a scheduling algorithm that combines the available computing resources and communication bandwidth of the heterogeneous nodes. 3. The static edge node informs the UAV of the scheduling result, and the UAV sends its monitored video stream to the corresponding node via V2X communication. 4. The corresponding node executes the pedestrian detection task with the deployed model and sends a warning message to the vehicle when a potential collision risk is detected. The invention provides a scheme in which UAVs assist the Internet of Vehicles, realizing the prevention of dangerous collisions in vehicle blind areas in a traffic system.

Description

Unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation
Technical Field
The invention relates to unmanned aerial vehicles (UAVs), the Internet of Vehicles, edge computing, task offloading, V2X communication, and intelligent transportation systems, and in particular to a UAV-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation and its processing method.
Background
In recent years, advances in wireless communication and intelligent networking technologies have driven the development of vehicular ad hoc networks (VANETs), which form the basis of emerging intelligent transportation systems (ITS). At the same time, with the popularization of UAV-based applications such as environmental monitoring, intelligent surveillance, wireless communication, and aerial photography, a vision has formed of combining VANETs with UAVs to realize a brand-new paradigm of innovative and powerful information-technology services.
There has been a great deal of research on integrating UAVs with VANETs, covering data forwarding, traffic monitoring, computation offloading, and trajectory optimization. In data-forwarding scenarios, the UAV serves as a relay node, and data caching and trajectory planning are studied to maximize network throughput. In traffic-monitoring scenarios, the UAV exploits its maneuverability to monitor road-network conditions in real time and report anomalies to a control center. In computation-offloading scenarios, the research focus is on the communication, storage, and computing resource-allocation costs between the UAV and the VANET, reducing processing delay and saving the UAV's energy. In trajectory-optimization scenarios, the UAV's trajectory is planned to maximize system throughput and reduce service delay when the UAV is integrated with the VANET.
In current traffic systems, the blind zone in front of a vehicle is in many cases the main cause of traffic accidents, and blind zones on the road can be both fixed and randomly generated. For example, when a large bus or an obstacle appears in front of a vehicle at some time and place, a visual blind zone is randomly created ahead of the vehicle, and under this condition the vehicle easily causes a traffic accident. Existing ITS deployments mainly rely on roadside surveillance cameras for real-time road-condition broadcasting in order to eliminate such hazards, but because blind zones arise randomly, this method cannot help the vehicle detect and warn about a blind zone in real time in such scenarios, and its accident-prevention effect is poor. In addition, the traditional cloud computing architecture cannot provide real-time, efficient services to road vehicles. Handling the randomness of vehicle visual blind zones and overcoming the slow task response of the cloud computing architecture therefore remain significant challenges.
Disclosure of Invention
In order to solve these problems, the invention provides a UAV-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation. The UAV is equipped with computing capability and a communication interface; while hovering in the air it monitors blind areas that may occur around vehicles on the road, and it can detect pedestrians using a given target detection model. Meanwhile, the UAV can communicate with vehicles, roadside infrastructure, and cloud servers through V2V (vehicle-to-vehicle), V2I (vehicle-to-infrastructure), and V2C (vehicle-to-cloud) communication, respectively. The UAV can therefore offload the target detection task to a moving vehicle, a static edge node deployed on the roadside, or the cloud server by transmitting the monitored video to the corresponding node. Owing to the UAV's high mobility, the system addresses the randomness of vehicle visual blind zones on the one hand, and resolves the slow response of cloud computing through end-edge-cloud cooperation on the other.
In order to solve the above technical problems, the UAV-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation provided by the invention mainly comprises the following steps:
step 1: and (5) off-line model training for monitoring pedestrians in the blind area in real time. The method comprises the steps of collecting a pedestrian data set shot by an unmanned aerial vehicle, training a model, and deploying the trained model on a terminal with heterogeneous computing and communication capabilities, a mobile edge, a static edge and a cloud node. The method considers two classic target detection models, namely SSD and YOLO, and YOLO is a network with larger scale compared with SSD, needs higher calculation expense and has better detection effect; the SSD is a single-phase network that achieves object detection and classification simultaneously. The lightweight and fast reasoning nature of SSDs makes them more suitable for mobile devices.
The system's training dataset comes from UAV footage of a university campus square. 2000 frames were randomly selected from the video, of which 90% were used for training and the remaining 10% for model validation; the training process set the batch size to 32, the IOU threshold to 0.98, and the number of training iterations to 5000. Both models were tested in traffic scenarios using collected UAV video frames that simulate potential vehicle blind areas; the experimental results show that the detection accuracies of YOLO and SSD remain at 95% and 90%, respectively.
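For concreteness, the frame sampling and split described above can be realized as in the minimal sketch below; the video file name and output layout are assumptions, since the patent does not specify the tooling used:

```python
import os
import random
import cv2  # OpenCV, used here for frame extraction

VIDEO_PATH = "uav_campus_square.mp4"  # hypothetical path to the UAV footage
NUM_FRAMES = 2000                     # frames randomly sampled, per the description
TRAIN_RATIO = 0.9                     # 90% training / 10% validation split

cap = cv2.VideoCapture(VIDEO_PATH)
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Randomly select 2000 distinct frame indices (assumes total >= NUM_FRAMES).
indices = random.sample(range(total), NUM_FRAMES)
frames = []
for idx in sorted(indices):
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
    ok, frame = cap.read()
    if ok:
        frames.append((idx, frame))
cap.release()

# Shuffle, then split 90/10 into training and validation sets.
random.shuffle(frames)
cut = int(TRAIN_RATIO * len(frames))
splits = {"train": frames[:cut], "val": frames[cut:]}

for name, subset in splits.items():
    os.makedirs(f"dataset/{name}", exist_ok=True)
    for idx, frame in subset:
        cv2.imwrite(f"dataset/{name}/frame_{idx:06d}.jpg", frame)
```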
Step 2: the method comprises the steps of deploying unmanned aerial vehicles in the air, monitoring the vision blind areas of vehicles in traffic roads, starting blind area pedestrian detection service, informing static edge nodes on road sides, and starting to update task unloading strategies on line after the static edge nodes receive the information. Specifically, the unmanned aerial vehicle adopts FFmpeg to realize video decoding operation, and converts an obtained decoding result into an image in a YUVIVIImage format, so as to realize image transmission and storage. The resultant YUV video stream is sent to various nodes in the system via V2X communication.
Step 3: A method is provided for determining which node the task is offloaded to for execution, combining the computing resources available at the heterogeneous nodes with the communication bandwidth of the network. The method comprises:
Step 3.1: Under constraints on node computing, storage, and communication resources, an objective function is defined with minimizing the average task delay as the optimization objective, as follows.

The node parameters include: let $S = \{s_1, s_2, \ldots, s_{|S|}\}$, $M = \{m_1, m_2, \ldots, m_{|M|}\}$, and $U = \{u_1, u_2, \ldots, u_{|U|}\}$ denote the sets of static edge nodes, mobile edge nodes, and terminal (UAV) nodes, respectively, and let $c$ denote the cloud server;

the set of offloading nodes is denoted $N = \{s_1, \ldots, s_{|S|}, m_1, \ldots, m_{|M|}, u_1, \ldots, u_{|U|}, c\}$;

the set of tasks sensed by the UAVs is denoted $W = \{w_{ji}\}$, where $w_{ji}$ is the $i$-th task generated by UAV $u_j$;

each task is represented as a 5-tuple $w_{ji} = \langle \theta_{ji}, c_{ji}, \alpha_{ji}, g_{ji}, d_{ji} \rangle$, whose components denote the input data size, the number of required CPU cycles, the accuracy requirement, the generation time, and the deadline, respectively;

each offloading node $n \in N$ is associated with a 5-tuple $\langle C_n, B_n, \eta_n, S_n, R_n \rangle$ denoting, respectively, its communication coverage radius, total wireless bandwidth, number of channels, maximum storage capacity, and computing power (e.g., the number of CPU cycles per unit time);

let $con_{u,n,t} = 1$ denote that node $u \in N$ can exchange messages with node $n \in N$ over single-hop communication at time $t$;

let $H = \{h_1, h_2, \ldots, h_{|H|}\}$ denote the set of trained neural network models;

assuming task $w_{ji}$ is offloaded to node $n$ for inference with neural network model $h$, define the binary variable $x_{j,i,n,h}$, which equals 1 if and only if this assignment is made, and 0 otherwise.

Communication delay between nodes $u$ and $n$ for transferring $\theta$ bits of data: $t^{comm}_{u,n}(\theta) = \theta / r_{u,n}$, where $r_{u,n}$ is the achievable data rate of the link.

The total transmission delay comprises two parts, the UAV sending the video data to the offloading node and the offloading node sending the computation result to the vehicle $v$; the total communication delay of offloading task $w_{ji}$ to node $n$ is therefore $t^{trans}_{j,i,n} = t^{comm}_{u_j,n}(\theta_{ji}) + t^{comm}_{n,v}(\theta^{res}_{ji})$, where $\theta^{res}_{ji}$ is the size of the returned result.

Computing delay of the task: $t^{comp}_{j,i,n} = c_{ji} / R_n$.

Waiting delay of the task: $t^{wait}_{j,i,n}$, the time $w_{ji}$ spends queued at node $n$ before its execution begins.

Finally, the task delay of offloading $w_{ji}$ to node $n$, $t_{j,i,n}$, is expressed as the sum of the total transmission delay, the computing delay, and the waiting delay: $t_{j,i,n} = t^{trans}_{j,i,n} + t^{comp}_{j,i,n} + t^{wait}_{j,i,n}$.

The objective of the system is to minimize the average task delay by determining the offloading policy; the objective function is

$\min_{x} \ \frac{1}{|W|} \sum_{w_{ji} \in W} \sum_{n \in N} \sum_{h \in H} x_{j,i,n,h}\, t_{j,i,n}$
the constraints of the objective function include:
constraint C1 and constraint C2 indicate that each task must be offloaded and computed to a node:
C1:x j,i,n,h ∈{0,1}
C2:∑ n∈N x j,i,n,h =1
constraint C3 represents a storage constraint:
Figure BDA0003645549840000035
constraint C4 indicates that the number of simultaneous tasks cannot exceed the number of channels of the offload node:
Figure BDA0003645549840000036
constraint C5 indicates that the selected neural network model meets the accuracy requirement:
Figure BDA0003645549840000037
constraint C6 indicates that the task must be completed by the expiration date:
Figure BDA0003645549840000038
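To make the delay model concrete, the sketch below evaluates the per-assignment task delay $t_{j,i,n}$ from the quantities defined above. The equal-share rate model ($B_n/\eta_n$ per link) and the result-size parameter are assumptions, since the patent does not reproduce its link-rate equation:

```python
from dataclasses import dataclass

@dataclass
class Node:
    bandwidth: float   # B_n, total wireless bandwidth (bits/s)
    channels: int      # eta_n, number of channels
    storage: float     # S_n, maximum storage capacity (bits)
    cpu_rate: float    # R_n, CPU cycles per unit time

@dataclass
class Task:
    data_size: float     # theta_ji, input data size (bits)
    cpu_cycles: float    # c_ji, required CPU cycles
    accuracy_req: float  # alpha_ji, accuracy requirement
    gen_time: float      # g_ji, generation time
    deadline: float      # d_ji, deadline

def task_delay(task: Task, node: Node, wait: float, result_size: float) -> float:
    """t = t_trans + t_comp + t_wait, following the model above.

    Assumption: each link receives an equal share B_n / eta_n of the
    node's bandwidth; the patent's exact rate equation is not given.
    """
    rate = node.bandwidth / node.channels
    t_up = task.data_size / rate   # UAV -> offloading node (video)
    t_down = result_size / rate    # offloading node -> vehicle (result)
    t_comp = task.cpu_cycles / node.cpu_rate
    return t_up + t_down + t_comp + wait

def meets_deadline(task: Task, delay: float) -> bool:
    """Constraint C6: the task finishes before its deadline."""
    return task.gen_time + delay <= task.deadline
```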
Step 3.2: Based on the optimization objective of the objective function, a task offloading algorithm is designed; a code sketch of the delay-driven search is given after the two strategies below.

The task delay is estimated under the currently applied offloading strategy. If a predefined threshold is exceeded, a greedy method is employed to search for a new offloading strategy. The method consists of two strategies: a delay-driven strategy, whose goal is to minimize the overall task delay of the system, and a resource-driven strategy, whose goal is to maximize resource utilization.

Delay-driven strategy: given a certain offloading strategy, the observed task delay is denoted $t_{cur}$. When a new task with data size $\theta_{ji}$ and computation requirement $c_{ji}$ arrives, the offloading delay $t_{aver}$ is estimated from the derived delay model. When the difference between $t_{cur}$ and $t_{aver}$ exceeds a predefined threshold $t_{diff}$, the algorithm begins searching for a new offloading strategy: it computes the total task delay $t_{j,i,n}$ for every computing node and every neural network model, and then selects the strategy with the minimum $t_{j,i,n}$.
Resource-driven strategy: this strategy targets resource-utilization maximization, taking computing-resource availability as an example. Specifically, given an offloading strategy, the computing delay is denoted $t^{comp}_{j,i,n}$; when the computing power $R_n$ of some node in the system increases, the algorithm begins searching for a new offloading strategy. The specific update procedure is similar to that of the delay-driven strategy.
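As an illustration of the delay-driven search, the routine below evaluates every candidate (node, model) pair using the delay-model sketch above and keeps the feasible assignment with minimum delay; the bookkeeping structures (per-node queueing delays, per-model accuracies) are assumptions about state the patent does not spell out:

```python
def greedy_offload(task, nodes, models, waits, result_size):
    """Delay-driven greedy search over all (node, model) assignments.

    nodes:  dict mapping a node name to its Node record.
    models: dict mapping a model name (e.g. "yolo", "ssd") to its
            detection accuracy (e.g. 0.95 and 0.90 as reported above).
    waits:  dict mapping a node name to its current queueing delay.
    Returns the best (node_name, model_name) pair and its delay.
    """
    best, best_delay = None, float("inf")
    for node_name, node in nodes.items():
        if task.data_size > node.storage:          # constraint C3 (simplified)
            continue
        for model_name, accuracy in models.items():
            if accuracy < task.accuracy_req:       # constraint C5
                continue
            d = task_delay(task, node, waits[node_name], result_size)
            if not meets_deadline(task, d):        # constraint C6
                continue
            if d < best_delay:                     # keep the minimum-delay choice
                best, best_delay = (node_name, model_name), d
    return best, best_delay
```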
Step 4: The UAV communicates with the other nodes via V2X. Specifically, the UAV, the static edge nodes, and the mobile edge nodes are located in the same Internet of Vehicles and communicate with one another via WiFi or DSRC, while the UAV communicates with the cloud node via 4G, 5G, or other cellular networks. The UAV receives the task-offloading node selection result from the static edge node and sends the collected blind-area video stream to the corresponding node for task computation.
Step 5: The node receiving the computation task uses the pedestrian detection model trained offline in step 1 and deployed in advance, selecting the corresponding deep learning model for inference according to the received message; the selection result is produced by the two task offloading strategies presented in step 3.2. After obtaining the detection results, and considering the limited communication bandwidth and the timeliness of the results, the node does not send all results to the vehicles on the road; it sends a warning signal to a vehicle only when a collision risk is detected, as in the sketch below.
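A minimal sketch of this warn-only-on-risk behavior follows; the detector interface, the danger-zone geometry, and the V2X send routine are hypothetical stand-ins for components the patent leaves abstract:

```python
def overlaps(box, zone):
    """Axis-aligned rectangle intersection; boxes are (x1, y1, x2, y2)."""
    return not (box[2] < zone[0] or zone[2] < box[0] or
                box[3] < zone[1] or zone[3] < box[1])

def detect_and_warn(frame, detector, danger_zone, send_warning):
    """Run the deployed pedestrian detector on one frame and notify the
    vehicle only when a potential collision is found (step 5).

    detector:     callable returning pedestrian bounding boxes (e.g. a
                  YOLO or SSD model wrapper).
    danger_zone:  image region corresponding to the vehicle's path.
    send_warning: V2X send routine for the compact warning message.
    """
    boxes = detector(frame)
    at_risk = [b for b in boxes if overlaps(b, danger_zone)]
    if at_risk:
        # Bandwidth is limited, so only a compact warning is sent to the
        # vehicle; full detection results are never transmitted.
        send_warning({"event": "pedestrian_collision_risk",
                      "count": len(at_risk)})
```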
Drawings
The drawings of the invention are illustrated as follows:
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a flow chart of the present invention.
Detailed Description
FIG. 1 is a schematic diagram of the system of the present invention. The figure shows the end-edge-cloud cooperative architecture, which consists of four basic elements: a UAV terminal, vehicles serving as mobile edge nodes, roadside units serving as static edge nodes, and a cloud server. Each element has certain computing and communication capabilities, so the UAV terminal can communicate with vehicles, roadside units, and the cloud through V2X communication. Under this architecture, a typical application scenario is as follows: the UAV is deployed in the air and monitors the blind areas of driving vehicles, such as the pedestrian circled in the figure. The monitored video stream is processed by a specific object detection model (e.g., YOLO), which can be deployed on the UAV, the roadside units, and the cloud for pedestrian detection. The roadside unit determines the offloading node of the current task according to the offloading algorithm and informs the UAV, and the UAV transmits the video stream to the corresponding node via V2X communication for task computation. Finally, if a potential pedestrian collision is detected, the task computation node transmits a warning message to the driving vehicle.
FIG. 2 is a flow chart of an embodiment of the present invention. The execution steps of the system are described in detail below:
in step 101, a pedestrian detection model is trained offline, two classic models, namely an SSD (solid state disk) model and a YOLO (YOLO), are used for training, wherein the SSD is more suitable for mobile equipment due to the light weight characteristic, the YOLO is higher in detection precision, a large number of pedestrian data sets shot by an unmanned aerial vehicle in the air in a traffic road are collected, and the trained models are all deployed at an unmanned aerial vehicle terminal, a roadside static edge node, a vehicle-mounted mobile edge node and a far-end cloud node.
In step 102, UAVs are deployed in the air above the traffic road; since most UAVs lack onboard computing capability, a Raspberry Pi can be mounted on the UAV as its processor, used to monitor whether vehicle visual blind areas exist on the traffic road.
In step 103, the UAV monitors the blind-area conditions in real time and captures information such as pedestrian positions; it then uses FFmpeg for video decoding and converts the decoded result into images in YUV format, realizing transmission and storage of the video stream. At the same time as it starts monitoring the blind area, it sends a notification to the roadside unit.
In step 104, the UAV communicates with the roadside unit through V2I. Traffic roads contain many sources of channel interference, and packet loss during transmission is severe; using the reliable TCP transport protocol would therefore greatly increase transmission delay and fail to meet the task-response requirements of a real traffic road.
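The patent does not name the transport actually used; a connectionless UDP datagram stream is the natural alternative implied by this reasoning. A minimal sketch, with the endpoint address and chunk size as assumptions:

```python
import socket

RSU_ADDR = ("192.168.10.1", 9000)  # hypothetical roadside-unit endpoint
CHUNK = 1400  # stay under a typical WiFi/Ethernet MTU to avoid fragmentation

# UDP socket: lost datagrams are simply never retransmitted, so a late
# video frame cannot stall the ones behind it (unlike TCP).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(frame_bytes: bytes, frame_id: int) -> None:
    """Send one encoded frame as a sequence of small UDP datagrams,
    each tagged with a frame id and sequence number so the receiver
    can reassemble or discard incomplete frames."""
    for seq, off in enumerate(range(0, len(frame_bytes), CHUNK)):
        header = frame_id.to_bytes(4, "big") + seq.to_bytes(2, "big")
        sock.sendto(header + frame_bytes[off:off + CHUNK], RSU_ADDR)
```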
In step 105, the roadside unit receives the notification sent by the UAV and begins updating the task offloading strategy in real time. Because the roadside unit's computing capability is stronger than that of the vehicle-mounted and UAV ends, and it is closer to the actual traffic road than the cloud node, the task offloading algorithm is deployed on the roadside unit in advance.
In step 106, the roadside unit (RSU) integrates the computing, storage, and bandwidth resources of all heterogeneous nodes in the system and determines an objective function defined with minimizing the average task delay as the optimization objective; the definitions and constraints are as follows:
the relevant parameters include: definition S ═ S 1 ,s 2 ,…,s |S| },M={m 1 ,m 2 ,…,m |M| },U={u 1 ,u 2 ,…,u |U| The (c) are respectively a set of static edge nodes, mobile edge nodes, terminal nodes and cloud servers;
by N ═ s 1 ,s 2 ,…,s |S| ,m 1 ,m 2 ,…,m |M| ,u 1 ,u 2 ,…,u |U| And c } the offload node;
task set perceived by unmanned aerial vehicle
Figure BDA0003645549840000059
Represents;
each task can be represented as a 5-tuple
Figure BDA0003645549840000051
Respectively representing the size of input data, the number of required CPU cycles, the precision requirement, the generation time and the expiration date;
each offload node N ∈ N and 5 tuples<C n ,B nn ,S n ,R n >Radius, total wireless bandwidth, number of channels, maximum storage function, and computational power (e.g., number of CPU cycles per unit time), respectively, representing communication coverage;
memory con u,n,t 1 indicates that u epsilon N can exchange messages with N epsilon N in single-hop communication at the time t;
with H ═ H 1 ,h 2 ,…,h |H| Expressing the weight value of the neural network model;
hypothesis utilization of neural network model h k Task w ji Offloading to node n for reasoning, defining a binary variable
Figure BDA0003645549840000052
Figure BDA0003645549840000053
Communication delay between nodes:
Figure BDA0003645549840000054
the total transmission time delay sum comprises that the unmanned aerial vehicle sends video data to the unloading node, the unloading node sends a calculation result to the two parts of the vehicle, and the definition of the total communication time delay of task unloading includes:
Figure BDA0003645549840000055
computing time delay of task:
Figure BDA0003645549840000056
waiting time delay of task:
Figure BDA0003645549840000057
finally, w is ji Task delay of offloading to node n
Figure BDA0003645549840000058
It is expressed by the calculation of the total transmission delay, the calculated delay and the waiting delay:
Figure BDA0003645549840000061
the objective of the system is to minimize the average task delay by determining the offloading policy, the objective function being:
Figure BDA0003645549840000062
the constraints of the objective function include:
constraint C1 and constraint C2 indicate that each task must be offloaded and computed to a node:
C1:x j,i,n,h ∈{0,1}
C2:∑ n∈N x j,i,n,h =1
constraint C3 represents a storage constraint:
Figure BDA0003645549840000063
constraint C4 indicates that the number of simultaneous tasks cannot exceed the number of channels of the offload node:
Figure BDA0003645549840000064
constraint C5 indicates that the selected neural network model meets the accuracy requirement:
Figure BDA0003645549840000065
constraint C6 indicates that the meaning of the task must be completed by the expiration date:
Figure BDA0003645549840000066
in step 107, the system estimates the task latency based on the currently applied offload policy. If the predefined threshold is exceeded, a greedy approach is employed to search for new offloading strategies. The method consists of two strategies, wherein one strategy is a delay driving strategy, and the aim is to minimize the overall task delay of the system; the other is a resource-driven strategy, with the goal of maximizing resource utilization.
Given a certain offloading strategy, the system initially offloads tasks according to it, and the resulting task delay is denoted $t_{cur}$. When a new task with data size $\theta_{ji}$ and computation requirement $c_{ji}$ arrives, the offloading delay $t_{aver}$ is estimated from the derived delay model. The resource-driven policy likewise considers resource-utilization maximization, taking computing-resource availability as an example: given an offloading strategy, the computing delay is denoted $t^{comp}_{j,i,n}$; when the computing power $R_n$ of some node in the system increases, the current offloading delay $t_{aver}$ is recomputed. The specific update procedure is similar to that of the delay-driven strategy.
In step 108, when the difference between $t_{cur}$ and $t_{aver}$ exceeds the predefined threshold $t_{diff}$, the algorithm begins searching for a new offloading policy.
In step 109, the roadside unit begins searching for the optimal task offloading policy in the current environment. Specifically, it traverses each computing node and each neural network model, calculates the total task delay $t_{j,i,n}$ of each candidate, greedily selects and records the strategy with the minimum delay, and sends the resulting offloading strategy to the UAV end.
In step 110, if the difference between $t_{cur}$ and $t_{aver}$ does not exceed the threshold, the algorithm continues to follow the previous offloading strategy; the overall update logic of steps 107 to 110 is sketched below.
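Steps 107 to 110 thus amount to a small monitoring loop around the greedy search sketched earlier; the following sketch shows the trigger logic, with estimate_delay as a hypothetical hook onto the derived delay model:

```python
def update_policy(policy, t_cur, task, nodes, models, waits,
                  result_size, t_diff, estimate_delay):
    """Re-run the greedy search only when the observed delay t_cur
    drifts from the model estimate t_aver by more than t_diff;
    otherwise keep the previous offloading policy (steps 107-110)."""
    t_aver = estimate_delay(task, policy)        # step 107: model estimate
    if abs(t_cur - t_aver) > t_diff:             # step 108: threshold check
        policy, _ = greedy_offload(task, nodes, models,
                                   waits, result_size)  # step 109: new policy
    return policy                                # step 110: unchanged otherwise
```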
In step 111, the UAV receives the offloading-strategy notification from the roadside unit and then sends the collected blind-area video stream to the corresponding node for task computation. During this process the UAV communicates with the other nodes via V2X; specifically, the UAV, the static edge nodes, and the mobile edge nodes are located in the same Internet of Vehicles and communicate via WiFi or DSRC, while the UAV communicates with the cloud node via 4G, 5G, or other cellular networks.
In step 112, the node receives the video stream from the UAV; the node assigned the computation task invokes the offline-trained pedestrian detection model deployed in advance and selects the corresponding deep learning model for inference according to the received message. After the node obtains the results, considering the limited communication bandwidth and the timeliness of the results, it does not send all detection results to vehicles on the road; it sends a warning signal only when a pedestrian-collision risk is detected.
In step 113, when the computing node detects a pedestrian-collision risk, the early-warning information is sent to the vehicle end; the driver performs the corresponding maneuver according to the warning and chooses whether to end the blind-area pedestrian detection service. If the service is not ended, the system continues from step 103.
In step 114, if the computing node detects no pedestrian-collision risk, no warning is sent, and the system continues to execute the blind-area pedestrian detection service.

Claims (8)

1. An unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation, characterized by comprising the following steps:

Step 1, an offline preparation phase: acquiring a dataset of dangerous road sections containing pedestrians on traffic roads and using it to train pedestrian detection models such as YOLO and SSD, wherein the trained models can be deployed on terminal, mobile edge, static edge, and cloud nodes with heterogeneous computing and communication capabilities.
Step 2, deploying the unmanned aerial vehicle in the air to monitor the visual blind areas of vehicles on the traffic road, while starting the blind-area pedestrian detection service and notifying the roadside static edge node.
Step 3, the roadside static edge node, combining the computing resources available at all heterogeneous nodes in the current system and the communication bandwidth of the network, determining according to the proposed task offloading algorithm which node the task is offloaded to for executing the corresponding inference task.
Step 4, the roadside static edge node transmitting the task offloading result to the unmanned aerial vehicle, and the unmanned aerial vehicle sending the monitored video stream to the corresponding node via V2X communication.
Step 5, the node receiving the computation task performing pedestrian detection according to the pedestrian detection model obtained in step 1; if a potential collision risk exists in the blind area, warning information is sent to the vehicles on the road, and a vehicle receiving the warning signal performs the corresponding danger-avoidance maneuver according to the actual driving conditions.
2. The unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation of claim 1, characterized in that: in step 1, to improve blind-area pedestrian detection accuracy under actual traffic conditions, the system considers two classic target detection models, SSD and YOLO; compared with SSD, YOLO is a larger-scale network that incurs higher computation cost but achieves better detection, while SSD's lightweight, fast-inference nature makes it more suitable for mobile devices.
The system's training dataset comes from a UAV video stream of a university campus square. 2000 frames were randomly selected from the video, of which 90% were used for training and the remaining 10% for model validation; the training process set the batch size to 32, the IOU threshold to 0.98, and the number of training iterations to 5000. Experimental tests show that the detection accuracies of YOLO and SSD remain at 95% and 90%, respectively.
3. The unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation of claim 1, wherein the unmanned aerial vehicle in step 2 monitors the blind area in real time, characterized in that: the unmanned aerial vehicle acquires video stream data by using FFmpeg for video decoding and converting the decoded result into images in YUV format, realizing image transmission and storage; the resulting YUV video stream is sent to the various nodes in the system via V2X communication.
4. The unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation of claim 1, wherein the computing and bandwidth resources available to the heterogeneous nodes are integrated in step 3, characterized in that:

the node parameters include: let $S = \{s_1, s_2, \ldots, s_{|S|}\}$, $M = \{m_1, m_2, \ldots, m_{|M|}\}$, and $U = \{u_1, u_2, \ldots, u_{|U|}\}$ denote the sets of static edge nodes, mobile edge nodes, and terminal (UAV) nodes, respectively, and let $c$ denote the cloud server;

the set of offloading nodes is denoted $N = \{s_1, \ldots, s_{|S|}, m_1, \ldots, m_{|M|}, u_1, \ldots, u_{|U|}, c\}$;

the set of tasks sensed by the unmanned aerial vehicles is denoted $W = \{w_{ji}\}$, where $w_{ji}$ is the $i$-th task generated by UAV $u_j$;

each task is represented as a 5-tuple $w_{ji} = \langle \theta_{ji}, c_{ji}, \alpha_{ji}, g_{ji}, d_{ji} \rangle$, whose components denote the input data size, the number of required CPU cycles, the accuracy requirement, the generation time, and the deadline, respectively;

each offloading node $n \in N$ is associated with a 5-tuple $\langle C_n, B_n, \eta_n, S_n, R_n \rangle$ denoting, respectively, its communication coverage radius, total wireless bandwidth, number of channels, maximum storage capacity, and computing power (e.g., the number of CPU cycles per unit time);

let $con_{u,n,t} = 1$ denote that node $u \in N$ can exchange messages with node $n \in N$ over single-hop communication at time $t$;

let $H = \{h_1, h_2, \ldots, h_{|H|}\}$ denote the set of trained neural network models;

assuming task $w_{ji}$ is offloaded to node $n$ for inference with neural network model $h$, define the binary variable $x_{j,i,n,h}$, which equals 1 if and only if this assignment is made, and 0 otherwise;

communication delay between nodes $u$ and $n$ for transferring $\theta$ bits of data: $t^{comm}_{u,n}(\theta) = \theta / r_{u,n}$, where $r_{u,n}$ is the achievable data rate of the link;

total communication delay of offloading task $w_{ji}$ to node $n$ (the UAV sending video data to the offloading node plus the offloading node returning the result to the vehicle $v$): $t^{trans}_{j,i,n} = t^{comm}_{u_j,n}(\theta_{ji}) + t^{comm}_{n,v}(\theta^{res}_{ji})$;

computing delay of the task: $t^{comp}_{j,i,n} = c_{ji} / R_n$;

waiting delay of the task: $t^{wait}_{j,i,n}$, the time $w_{ji}$ spends queued at node $n$ before execution begins;

finally, the task delay of offloading $w_{ji}$ to node $n$ is expressed as the sum of the total transmission delay, the computing delay, and the waiting delay: $t_{j,i,n} = t^{trans}_{j,i,n} + t^{comp}_{j,i,n} + t^{wait}_{j,i,n}$;

the objective of the system is to minimize the average task delay by determining the offloading policy; the objective function is

$\min_{x} \ \frac{1}{|W|} \sum_{w_{ji} \in W} \sum_{n \in N} \sum_{h \in H} x_{j,i,n,h}\, t_{j,i,n}$
5. The unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation of claim 4, wherein the objective function is defined with the average task delay as the optimization objective, characterized in that the constraints of the objective function include:

constraints C1 and C2, stating that each task must be offloaded to exactly one node for computation:

C1: $x_{j,i,n,h} \in \{0, 1\}$

C2: $\sum_{n \in N} x_{j,i,n,h} = 1$

constraint C3, the storage constraint:

C3: $\sum_{w_{ji} \in W} \sum_{h \in H} x_{j,i,n,h}\, \theta_{ji} \le S_n, \quad \forall n \in N$

constraint C4, stating that the number of simultaneously served tasks cannot exceed the number of channels of the offloading node:

C4: $\sum_{w_{ji} \in W} \sum_{h \in H} x_{j,i,n,h} \le \eta_n, \quad \forall n \in N$

constraint C5, stating that the selected neural network model meets the accuracy requirement, where $\alpha_h$ denotes the detection accuracy of model $h$:

C5: $x_{j,i,n,h} = 1 \Rightarrow \alpha_h \ge \alpha_{ji}$

constraint C6, stating that the task must be completed before its deadline:

C6: $x_{j,i,n,h} = 1 \Rightarrow g_{ji} + t_{j,i,n} \le d_{ji}$
6. The unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation of claim 1, wherein the task offloading algorithm of step 3 is characterized in that: the system estimates the task delay under the currently applied offloading strategy; if a predefined threshold is exceeded, a greedy method is employed to search for a new offloading strategy. The method consists of two strategies: a delay-driven strategy, whose goal is to minimize the overall task delay of the system, and a resource-driven strategy, whose goal is to maximize resource utilization.

Delay-driven strategy: given a certain offloading strategy, the observed task delay is denoted $t_{cur}$. When a new task with data size $\theta_{ji}$ and computation requirement $c_{ji}$ arrives, the offloading delay $t_{aver}$ is estimated from the derived delay model. When the difference between $t_{cur}$ and $t_{aver}$ exceeds a predefined threshold $t_{diff}$, the algorithm begins searching for a new offloading strategy: it computes the total task delay $t_{j,i,n}$ for every computing node and every neural network model, and then selects the strategy with the minimum $t_{j,i,n}$.

Resource-driven strategy: this strategy targets resource-utilization maximization, taking computing-resource availability as an example. Specifically, given an offloading strategy, the computing delay is denoted $t^{comp}_{j,i,n}$; when the computing power $R_n$ of some node in the system increases, the algorithm begins searching for a new offloading strategy. The specific update procedure is similar to that of the delay-driven strategy.
7. The unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation of claim 1, wherein the unmanned aerial vehicle of step 4 communicates with the other nodes in the system, characterized in that: the unmanned aerial vehicle communicates with the other nodes via V2X; specifically, the unmanned aerial vehicle, the static edge nodes, and the mobile edge nodes are located in the same Internet of Vehicles and communicate via WiFi or DSRC, while the unmanned aerial vehicle communicates with the cloud node via 4G, 5G, or other cellular networks. The unmanned aerial vehicle receives the task-offloading node selection result from the static edge node and sends the collected blind-area video stream to the corresponding node for task computation.
8. The unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation of claim 1, wherein the node of step 5 detects pedestrians in the blind area, characterized in that: the node assigned the computation task deploys the pedestrian detection model trained offline in advance and selects the corresponding deep learning model for inference according to the received message, wherein the selection result is produced by the two task offloading strategies of claim 6. After the node obtains the results, considering the limited communication bandwidth and the timeliness of the results, it does not send all detection results to vehicles on the road; it sends a warning signal to a vehicle only when a collision risk is detected.
CN202210528443.2A (priority 2022-05-16, filed 2022-05-16): Unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation. Status: Pending. Publication: CN115100623A.

Priority Applications (1)

Application Number: CN202210528443.2A; Priority Date: 2022-05-16; Filing Date: 2022-05-16; Title: Unmanned aerial vehicle-assisted Internet of Vehicles blind-area pedestrian detection system based on end-edge-cloud cooperation


Publications (1)

Publication Number: CN115100623A; Publication Date: 2022-09-23

Family ID: 83287848




Cited By (3)

* Cited by examiner, † Cited by third party

CN115641497A * (priority 2022-12-23, published 2023-01-24), ***数字城市科技有限公司: Multi-channel video processing system and method
CN117590863A * (priority 2024-01-18, published 2024-02-23), 苏州朗捷通智能科技有限公司: Cloud-edge-end cooperative control system for 5G network-connected security and rescue unmanned aerial vehicles
CN117590863B * (priority 2024-01-18, published 2024-04-05), 苏州朗捷通智能科技有限公司: Cloud-edge-end cooperative control system for 5G network-connected security and rescue unmanned aerial vehicles


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination