CN115412844B - Real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion - Google Patents

Real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion

Info

Publication number
CN115412844B
CN115412844B (application CN202211024842.1A)
Authority
CN
China
Prior art keywords
time
vehicle
data
matrix
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211024842.1A
Other languages
Chinese (zh)
Other versions
CN115412844A (en)
Inventor
程翔 (Cheng Xiang)
张浩天 (Zhang Haotian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Priority to CN202211024842.1A
Publication of CN115412844A
Application granted
Publication of CN115412844B

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/023 Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 7/00 Radio transmission systems, i.e. using radiation field
    • H04B 7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B 7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B 7/0413 MIMO systems
    • H04B 7/0426 Power distribution
    • H04B 7/043 Power distribution using best eigenmode, e.g. beam forming or beam steering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 7/00 Radio transmission systems, i.e. using radiation field
    • H04B 7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B 7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B 7/06 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B 7/0613 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B 7/0615 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B 7/0619 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B 7/0621 Feedback content
    • H04B 7/0626 Channel coefficients, e.g. channel state information [CSI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/029 Location-based management or tracking services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion. A trajectory prediction network model is designed, and the RGB images captured by a road side unit (RSU) in the Internet of Vehicles, the distance data obtained by processing radar signals, and the channel state information (CSI) matrix obtained on the Sub-6 GHz control channel band are used as multi-modal inputs to this model. The model performs feature extraction and early fusion on the multi-modal information to predict the beamforming angle, improving the accuracy of predicting the future position of the vehicle and realizing real-time beam alignment in the Internet of Vehicles. The invention predicts the beamforming angle of the vehicle relative to the RSU at a future moment through hidden features of the vehicle's future position learned by a neural network. The invention can better cope with lateral random micro-movements of the vehicle, establish a more stable communication link, and improve the achievable millimeter wave communication rate.

Description

Real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion
Technical Field
The invention belongs to the technical field of wireless communication and relates to real-time millimeter wave (mmWave) beam alignment in Internet of Vehicles wireless communication. It relates in particular to a real-time mmWave beam alignment method for the Internet of Vehicles based on the joint sensing of multi-modal perception data and channel state information (CSI), i.e. the fusion of multi-modal sensing and CSI, implemented by applying deep learning to extract and fuse multi-modal information features and to learn the nonlinear relation between the future position of the vehicle and the multi-modal information.
Background
With the rapid development of the automobile industry, the Internet of Vehicles, as a core component of future intelligent traffic systems, is one of the key technologies for realizing intelligent travel and intelligent transportation. Meanwhile, with the large-scale commercial deployment of 5G, millimeter wave is regarded as a key technology for meeting the various high-performance communication requirements of the Internet of Vehicles, owing to its advantages of large bandwidth and low latency. To guarantee the communication service requirements of various Internet of Vehicles applications and to improve the overall safety and user experience quality of vehicular wireless communication, vehicles need to maintain a high-quality wireless connection to the communication network at all times. Among the technologies for high-speed mobility management of vehicles in the millimeter wave Internet of Vehicles, real-time alignment of millimeter wave beams is a precondition for ensuring a stable connection of vehicles to the communication network.
Achieving beam alignment between a high-speed vehicle and an RSU is one of the core technologies of vehicle mobility management in the Internet of Vehicles and a key guarantee that vehicles can connect to the communication network stably and with high quality. The main methods for real-time millimeter wave beam alignment between transmitter and receiver are beam training, beam tracking and beamforming prediction. Traditionally, the alignment of narrow beams between millimeter wave transceivers is accomplished by beam training, in which the transmitting end sweeps pilot signals over the full angular range to find the beamforming direction with the strongest signal-to-noise ratio and thereby determine the beamforming angle. However, this brings large communication overhead and time delay, making it difficult to apply to Internet of Vehicles communication. To mitigate this drawback, millimeter wave beam tracking exploits the temporal correlation of the beam angle between transmitter and receiver at adjacent moments, greatly reducing the spatial range that must be searched during beam training; for example, the invention patent with publication number CN112738764B provides a broadband millimeter wave beam tracking method based on vehicle trajectory cognition, which uses the motion characteristics of the vehicle to assist millimeter wave beam tracking. Beam tracking, however, still requires a pilot signal to be transmitted before each communication link is established, resulting in significant communication overhead. Beamforming prediction directly performs beamforming at a predicted angle by predicting the future position of the vehicle in advance, and therefore has low communication overhead and time delay. However, the stability of the communication link established in this way and the achievable communication rate depend heavily on the accuracy of the prediction algorithm.
Currently, beamforming prediction for the Internet of Vehicles is mostly based on the extended Kalman filter, driven by a simple vehicle motion-state evolution model and measurements obtained from radar equipment; such methods have low accuracy and limited application scenarios. As the sensing devices equipped on intelligent vehicles and RSUs become more diverse and more capable, the auxiliary role of multi-modal sensing information in Internet of Vehicles communication systems is receiving growing attention and research. Unlike the electromagnetic environment characteristics reflected by the CSI, multi-modal sensing information contains finer-grained visual spatial features with a wider field of view, and is better suited to predicting the future position of the vehicle. How to choose a suitable way of extracting and fusing the vehicle position features of multi-modal sensing information and CSI in the Internet of Vehicles, and thereby assist the prediction of future vehicle beamforming angles, is an important direction of current research.
Disclosure of Invention
The invention provides a real-time beam alignment technique for the Internet of Vehicles based on multi-modal information fusion which, on the basis of ensuring high accuracy of the predicted beamforming angle, can better cope with lateral random micro-movements of the vehicle, establish a more stable communication link, and improve the millimeter wave communication rate of the Internet of Vehicles.
In the invention, joint multi-modal information sensing refers to the fusion of road-side multi-modal sensing information with the CSI. RGB images captured by the road side unit (RSU) in the Internet of Vehicles, distance data obtained after radar signal processing, and the CSI matrix obtained on the Sub-6 GHz control channel band are used as multi-modal inputs, and a trajectory prediction network model containing different types of neural network components is designed to perform feature extraction and early fusion on the multi-modal information, which improves the accuracy of predicting the future position of the vehicle and thereby guarantees the accuracy of the predicted beamforming angle at the next moment. In addition, at each prediction time the multi-modal information along the past trajectory of the vehicle is arranged into a time series, and the trajectory prediction network model extracts and learns temporal features from it, further improving robustness to the lateral random micro-movements of the vehicle.
The technical scheme of the invention is as follows:
A real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion, characterized in that a trajectory prediction network model is designed, and the RGB images captured by the road side unit RSU in the Internet of Vehicles, the distance data obtained by processing radar signals, and the CSI matrix obtained on the Sub-6 GHz control channel band are used as multi-modal inputs, so that feature extraction and early fusion are carried out on the multi-modal information, the beamforming angle is predicted, and the accuracy of real-time beam alignment and of predicting the future position of the vehicle is improved; the method comprises the following steps:
1) Acquiring original multi-modal data: before each data block between the RSU and the vehicle starts to be transmitted, i.e. at each beamforming angle prediction time (in the invention, prediction is performed periodically with a time block as the unit, and the current beamforming angle prediction time is defined as the nth time block or nth time slot), RGB images of the traffic scene and vehicle distance data are acquired by the sensing equipment, and the CSI matrix of the control channel band is acquired by the communication equipment; the CSI matrix is calculated by the RSU through channel estimation. The raw multi-modal data comprise: the RGB image, the vehicle distance data and the CSI matrix of the control channel band.
In the implementation, a road side unit RSU is provided with a plurality of sensing devices (RGB cameras and radar devices), communication devices running on a Sub-6GHz frequency band and a mmWave frequency band, a vehicle is provided with communication devices running on the Sub-6GHz frequency band and the mmWave frequency band, RGB images of a traffic system are shot by the RGB cameras, distance data of a target vehicle are obtained by the radar devices, and a CSI matrix of the Sub-6GHz frequency band is obtained on a control channel of the communication device through a signal processing device of the RSU;
2) Preprocessing the original multi-mode data obtained in the step 1) to obtain a preprocessed RGB image, a vehicle distance matrix and a CSI angular domain feature matrix;
Based on the original multi-mode data collected in the previous step, the data preprocessing module preprocesses the original multi-mode data: performing size reduction and data standardization on the RGB image, constructing distance data into a matrix form, performing data normalization, and performing angular domain feature extraction on the CSI matrix;
3) Constructing and obtaining time sequence multi-modal data; the time sequence multi-mode data comprises RGB images, a vehicle distance matrix and a CSI angular domain feature matrix;
After the RSU obtains the CSI matrix through channel estimation on the signals sent by the vehicle and received at its antenna, at each beamforming angle prediction time the storage unit of the RSU stores and stacks the multi-modal sensing data and CSI of the current and previous prediction times to construct the time-series multi-modal data;
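A minimal sketch of this stacking step, in Python with NumPy, is given below. The class name, the array shapes and the history length of two prediction times are illustrative assumptions, not requirements stated in the patent.

# Illustrative sketch of an RSU-side buffer that stacks the preprocessed
# multi-modal data of the current and previous prediction times into the
# time-series input described in step 3). Shapes are assumptions.
from collections import deque
import numpy as np

class MultiModalBuffer:
    def __init__(self, history=2):                 # current + previous prediction time
        self.frames = deque(maxlen=history)

    def push(self, rgb, dist, csi_angular):
        # Store one prediction time's preprocessed data: X_P, D_P, H_P.
        self.frames.append((rgb, dist, csi_angular))

    def time_series(self):
        # Stack the stored frames along a new leading time dimension.
        rgb_seq = np.stack([f[0] for f in self.frames])   # (T, 3, 224, 224)
        dist_seq = np.stack([f[1] for f in self.frames])  # (T, H, W)
        csi_seq = np.stack([f[2] for f in self.frames])   # (T, B)
        return rgb_seq, dist_seq, csi_seq

# Example use at one beamforming-angle prediction time:
buf = MultiModalBuffer()
buf.push(np.zeros((3, 224, 224)), np.zeros((16, 16)), np.zeros((64,)))
buf.push(np.zeros((3, 224, 224)), np.zeros((16, 16)), np.zeros((64,)))
rgb_seq, dist_seq, csi_seq = buf.time_series()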
4) Constructing a trajectory prediction neural network model, inputting the time-series multi-modal data into the model, extracting visual spatial features, electromagnetic spatial features and temporal features, carrying out early fusion, and predicting the vehicle position coordinates and motion angle (i.e. the beamforming angle) of the (n+2)th time slot;
Inputting the time-series multi-modal data into the trajectory prediction neural network to extract visual spatial features, electromagnetic spatial features and temporal features and fuse them at an early stage, predicting the vehicle position coordinates one time slot ahead, and then obtaining the angle of the vehicle relative to the RSU in the (n+2)th time slot, i.e. the beamforming angle of the (n+2)th time slot;
5) In the next data block transmission, the RSU transmits the beam forming angle of the (n+2) th time slot obtained in the step 4) to the vehicle;
The RSU transmits the beam forming angle of the (n+2) th time slot obtained by the prediction in the previous step to the vehicle through the data block of the (n+1) th time slot, so that the vehicle knows the beam forming angle of the (n+2) th time slot in advance, and the vehicle carries out the beam forming of the (n+2) th time slot according to the angle;
6) On the (n+2) th time slot, the RSU and the vehicle respectively perform beam forming and alignment through angle values obtained by prediction in advance, and a millimeter wave communication link is established for communication;
The RSU and the vehicle respectively carry out beam forming and alignment through angle values obtained by prediction in advance on the (n+2) th time slot, and a millimeter wave communication link is established for communication;
7) At each prediction time, the RSU executes the steps 1) to 6), so that the millimeter wave beam real-time alignment in the vehicle running process is completed, and the stable connection of the vehicle to the wireless communication network is ensured;
Through the above steps, beamforming prediction based on multi-modal information fusion is realized, and real-time millimeter wave beam alignment during vehicle driving is completed.
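The timing of steps 1) to 7) can be summarized by the control-flow sketch below (Python). All function names are hypothetical placeholders introduced for illustration only; what follows the steps above is the slot timing: predict at slot n, deliver the angle during slot n+1, beamform at slot n+2.

def acquire_raw_data(n):                  # step 1): RGB image, radar distance data, CSI matrix
    return {"rgb": None, "dist": None, "csi": None}

def preprocess(raw):                      # step 2): resize/standardize, normalize, angular-domain extraction
    return raw

def predict_beam_angle(history):          # steps 3)-4): stack the time series, run the trajectory network
    return 0.0                            # predicted beamforming angle for slot n+2

def send_angle_to_vehicle(angle, slot):   # step 5): carried in the data block of slot n+1
    pass

def beamform_and_communicate(angle):      # step 6): both ends steer their beams and open the mmWave link
    pass

history = []
for n in range(100):                      # step 7): repeat at every prediction time
    history.append(preprocess(acquire_raw_data(n)))
    angle_n2 = predict_beam_angle(history[-2:])   # angle intended for slot n+2
    send_angle_to_vehicle(angle_n2, slot=n + 1)
    beamform_and_communicate(angle_n2)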
In a specific implementation, the invention also provides a real-time beam alignment apparatus for the Internet of Vehicles based on multi-modal information fusion, comprising an RSU and a vehicle. The RSU is equipped with a sensing module (comprising an RGB camera and radar equipment), a communication module (comprising Sub-6 GHz and mmWave dual bands), an image preprocessing module, a distance data preprocessing module, an angular domain information extraction module and a trajectory prediction neural network module; the vehicle is equipped with a communication module (comprising Sub-6 GHz and mmWave dual bands). First, at each time slot, RGB images and distance data are acquired by the RGB camera and radar equipment in the sensing module of the RSU, and CSI data are acquired on the Sub-6 GHz frequency channel of the communication module. Then, the multi-modal raw data are preprocessed by the image preprocessing module, the distance data preprocessing module and the angular domain information extraction module. Finally, the trajectory prediction neural network module learns the vehicle position features and motion-state evolution features in the multi-modal data, completing the prediction of the future position of the vehicle.
Compared with the prior art, the invention has the beneficial effects that:
The invention provides a technical scheme for real-time beam alignment in the Internet of Vehicles based on multi-modal information fusion, which uses the multi-modal perception information acquired by the RSU and the CSI on the Sub-6 GHz band, learns hidden features of the vehicle's future position through a neural network, and predicts the beamforming angle of the vehicle relative to the RSU at a future moment more accurately. With the technical scheme provided by the invention, on the basis of ensuring high accuracy of the predicted beamforming angle, the lateral random micro-movements of the vehicle can be handled better, a more stable communication link is established, and the achievable millimeter wave communication rate is improved.
The real-time beam alignment method based on multi-modal information fusion provided by the invention has the following technical advantages:
First, a trajectory prediction neural network model is established, and the relation between the images captured by the RSU together with the distance data obtained after radar signal processing and the future position of the vehicle is learned by this model, ensuring the reliability of future position prediction for a moving vehicle;
Second, angular-domain feature information is extracted from the CSI data acquired on the Sub-6 GHz band of the control channel of the communication equipment, and the electromagnetic spatial features of the communication system are further extracted by the trajectory prediction neural network model, improving prediction accuracy;
Third, the multi-modal information along the past trajectory of the vehicle is stacked at each prediction time, and the trajectory prediction neural network model learns the vehicle motion-trend features from it, so that the model is robust to the lateral random micro-movements of the vehicle;
Fourth, in view of the data characteristics of the multi-modal sensing information and the CSI, three networks of different structures, namely a fully connected network, a residual neural network (ResNet-18) and a gated recurrent unit (GRU), are adopted to extract the electromagnetic spatial features, visual spatial features and temporal features respectively, and the different features are fused early, further improving the vehicle position prediction accuracy.
Drawings
Fig. 1 is a block diagram of a beam real-time alignment apparatus configured for use in the practice of the present invention.
Fig. 2 is a block flow diagram of a beam real-time alignment algorithm provided by the present invention.
Fig. 3 is a block diagram of an RGB image preprocessing module embodying the present invention.
Fig. 4 is a block diagram of a structure of an angular domain information extraction module of a CSI matrix embodying the present invention.
FIG. 5 is a block flow diagram of a trajectory prediction neural network module embodying the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Referring to fig. 1, the invention uses a vehicle equipped with a communication module (comprising Sub-6 GHz and mmWave dual bands) and an RSU equipped with a sensing module (comprising an RGB camera and radar equipment) and a communication module (comprising Sub-6 GHz and mmWave dual bands). Electromagnetic spatial information of the Sub-6 GHz band is acquired through the communication equipment of the RSU and the vehicle, and visual spatial information of the vehicle position is obtained with the sensing module on the RSU; the multi-modal raw data are then preprocessed by the image preprocessing module, the distance data preprocessing module and the angular domain information extraction module. Finally, the trajectory prediction neural network module learns the vehicle position features and motion-state evolution features in the multi-modal data, completing the prediction of the future position of the vehicle.
The invention provides a beam real-time alignment device, which comprises a sensing module, a communication module, an image preprocessing module, an angle information extraction module and a track prediction neural network module, wherein the sensing module, the communication module, the image preprocessing module, the angle information extraction module and the track prediction neural network module are arranged on an RSU; the method for aligning the beam in real time provided by the invention is shown in fig. 2, and comprises the following specific steps:
s10: referring to fig. 1, the RSU is provided with a sensing module, a communication module, an image preprocessing module, an angular domain information extraction module and a trajectory prediction neural network module, firstly, RGB images X and vehicle distance data D are acquired by an RGB camera and a radar device in the sensing module at each time slot, CSI data H is acquired by a Sub-6 GHz frequency channel in the communication module, N t represents the number of mmWave frequency band transmitting antennas of the RSU communication equipment, N s represents the number of OFDM subcarriers adopted by the RSU communication equipment, and the number of OFDM subcarriers is stored in a storage unit of the RSU;
S20: referring to fig. 3, based on the RGB image data X acquired in the previous step, an image preprocessing module performs data preprocessing on the RGB image data X to obtain an image form X P in which features are easy to extract;
S30: based on the vehicle distance data D acquired by the radar in the step S10, constructing the distance data into a matrix D' according to the position coordinates of the detection target vehicle, and carrying out data normalization to obtain a form D P which is easy to process by a neural network and extract features;
s40: referring to fig. 4, based on the CSI obtained by the RSU in step S10 on the Sub-6 GHz band, the angular domain information extraction module performs data preprocessing on the CSI to obtain a form H P that facilitates the extraction of angular domain features by the neural network, Where C represents the matrix complex domain dimension.
S50: referring to fig. 5, the preprocessed multi-mode data obtained in steps S20, S30, S40 is input into a track prediction neural network module, time series multi-mode data is constructed, the track prediction neural network module extracts and fuses visual space characteristics, electromagnetic space characteristics and time series characteristics, and learns the relation between the characteristics and future positions of the vehicle, so as to further predict and obtain a beam forming angle of the RSU and the vehicle after the next time slot, and complete millimeter wave beam alignment;
In step S10: the sensing module of the RSU acquires RGB images and distance data of the traffic environment on each time slot, and the communication module acquires a CSI matrix of a Sub-6 GHz frequency band, namely H, on a control channel;
In step S20: the image preprocessing module is used to preprocess the original RGB image data X to obtain an image X_P from which features are easy to extract; the preprocessing comprises the following processes S21 to S22:
S21: cutting the original RGB image with higher resolution to obtain an image X' with reduced size;
in the implementation, an image X′ with a resolution of 224×224 is obtained by filling the pixel values of the original RGB image in zigzag (raster) order, from left to right and from top to bottom;
S22: x 'R,X′G,X′B represents the pixel values of the three channels of the resulting image X' R, G, B, respectively. The mean value of the three channels of the obtained image X' G,X′B was denoted as μ R、μG、μB, respectively, and the standard deviation was denoted as σ R、σG、σB, respectively. And then, carrying out the following data standardization operation on the X' R,X′G,X′B: An image X P with standardized data is obtained. /(I) Representing the R channel pixel value after data normalization; /(I)Representing the G channel pixel value after data normalization; /(I)Representing the normalized B-channel pixel values of the data. In practice, mu R、μG、μB can take values of 0.306, 0.281, 0.251.σ R、σG、σB takes the value 0.016,0.0102,0.013. The mean value and standard deviation adopted by the standardization of the three channels can be properly adjusted according to the track prediction network structure;
in step S30: the distance data D obtained after radar signal processing are preprocessed; the distance data of the target vehicle are arranged into a matrix D′ according to the position coordinates of the target vehicle, and data normalization is carried out on D′, obtaining a matrix form D_P from which the neural network easily extracts features;
The data normalization of the matrix D′ consists in dividing all elements of D′ by the maximum element of D′, giving the normalized distance matrix D_P = D′ / max(D′).
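A short sketch of the preprocessing in S22 and S30, assuming NumPy arrays, is shown below; the channel means and standard deviations are the example values quoted in S22 and are placeholders rather than mandated constants.

import numpy as np

MU = np.array([0.306, 0.281, 0.251]).reshape(3, 1, 1)      # example per-channel means (R, G, B)
SIGMA = np.array([0.016, 0.0102, 0.013]).reshape(3, 1, 1)  # example per-channel standard deviations

def standardize_rgb(x_prime):
    # Per-channel standardization of the cropped 3x224x224 image X' (S22).
    return (x_prime - MU) / SIGMA                           # X_P

def normalize_distance(d_prime):
    # Divide every element of the distance matrix D' by its maximum value (S30).
    return d_prime / np.max(d_prime)                        # D_P

x_p = standardize_rgb(np.random.rand(3, 224, 224))
d_p = normalize_distance(np.random.rand(16, 16) * 50.0)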
In step S40: the angular domain information extraction module is used to preprocess the CSI matrix H to obtain a representation H_P with strong angular-domain characteristics; the preprocessing comprises the following processes S41 to S43:
S41: a codebook F ∈ ℂ^(N_t×B) with angular resolution B is predefined, and the energy value matrix of each subcarrier at each angle is calculated as E = |F^T H| ∈ ℝ^(B×N_s), where F^T denotes the transpose of F.
S42: based on the energy value matrix E = (E_ij)_(B×N_s) obtained in the previous step, where E_ij is the energy value of subcarrier j at angle i, the energy values of all subcarriers at each angle are accumulated to obtain the dimension-reduced energy vector E′ = (E′_i)_(B×1), with E′_i = Σ_{j=1}^{N_s} E_ij.
S43: based on the dimension-reduced energy vector E′ obtained in the previous step, the five largest energy values are retained and the remaining positions are set to zero, completing the angular-domain information extraction and yielding a low-dimensional representation with strong angular-domain characteristics, recorded as the angular-domain feature matrix H_P;
in step S50: and inputting the preprocessed multi-mode data into a track prediction neural network model.
The trajectory prediction neural network model constructed by the invention adopts three networks of different structures, namely a fully connected network, a residual neural network (ResNet-18) and a gated recurrent unit (GRU), to extract the electromagnetic spatial features, visual spatial features and temporal features respectively, and fuses the different features at an early stage.
In a specific implementation, the trajectory prediction neural network takes as input the preprocessed RGB images, distance matrices and angular-domain feature matrices of the current and previous prediction times, and builds time-series data by stacking the multi-modal data of the current and previous prediction times. In the time-series data, the RGB image sequence and the RGB image at the current prediction time are processed by the residual neural network ResNet-18; the distance matrix sequence and the distance matrix at the current prediction time are processed by ResNet-18; and the angular-domain feature matrix sequence and the angular-domain feature matrix at the current prediction time are processed by the fully connected network, learning the visual spatial features and electromagnetic spatial features. All features at the current time are concatenated and fed into a fully connected network to obtain the predicted ordinate; the processed features of the time-series multi-modal data are concatenated and fed into the GRU to obtain the temporal feature, which is then fed into a fully connected network to output the predicted abscissa. The trajectory prediction neural network model thus predicts the position of the vehicle after the next time slot, from which the beamforming angle is calculated with an inverse trigonometric function, completing millimeter wave beam alignment. This step comprises the following processes S51 to S54:
S51: inputting an RGB image X P, a distance matrix D P and a CSI matrix H P extracted from angular domain information at the current moment to a constructed track prediction neural network model, overlapping the RGB image X P, the distance matrix D P and the CSI matrix H P with the same type of data of the last time slot, adding a time dimension, constructing time sequence multi-mode data, and recording the time sequence multi-mode data as T;
S52: the RGB image sequence in the time-series multi-modal data T and the RGB image X_P at the current prediction time are input into a ResNet-18 network whose output layer has 256 neurons; the distance matrix sequence in T and the distance matrix D_P at the current prediction time are input into a ResNet-18 network whose output layer has 256 neurons; the CSI angular-domain feature matrix sequence in T and the CSI angular-domain feature matrix H_P at the current prediction time are input into a fully connected network with 2 to 4 hidden layers and 4 to 8 output neurons (in this implementation, 2 hidden layers and 8 output neurons). The features obtained by processing the time series in this way are concatenated to obtain the time-series multi-modal feature F_1; all features at the current time are concatenated to obtain the current-time multi-modal feature F_2;
S53: inputting the multi-mode characteristic F 1 of the time sequence obtained in the step S52 into a gating cycle unit GRU, wherein the GRU is a single layer, the hidden layer dimension value is 16-32, and the hidden layer dimension value is 16 in specific implementation, so as to obtain the time sequence characteristic F T. F T is input into a fully-connected network for prediction, wherein the number of hidden layers is 2 layers, the number of output neurons is 1, and the network outputs and obtains the predicted value of the abscissa of the vehicle after the next time slot
S54: inputting the current-moment multi-modal feature F 2 obtained in the step S52 into a fully-connected network for prediction, wherein the number of hidden layers is 3-4 layers, the number of output neurons is 1, and the number of hidden layers takes a value of 3 in the specific implementation process to obtain the longitudinal coordinate predicted value of the vehicle after the next time slotCombining the/>, obtained in step S53Calculation of the beamforming angle/>, of a vehicle with respect to an RSU, by means of an inverse trigonometric functionAnd (5) completing the real-time alignment of the beam.
In a specific implementation, the number of neurons in each layer of the fully connected networks in steps S52 to S54 differs. The fully connected network in step S52 has 64 neurons in the first layer, a dropout probability of 0.2 between the first and second layers, 32 neurons in the second layer, a dropout probability of 0.2 between the second and third layers, 16 neurons in the third layer, and 8 neurons in the fourth (output) layer. The fully connected network in S53 has 16 neurons in the first layer, a dropout probability of 0.1 between the first and second layers, 32 neurons in the second layer, a dropout probability of 0.1 between the second and third layers, 16 neurons in the third layer, and 1 neuron in the fourth (output) layer. The fully connected network in S54 has 520 neurons in the first layer, a dropout probability of 0.1 between the first and second layers, 256 neurons in the second layer, 128 neurons in the third layer, 64 neurons in the fourth layer, and 1 neuron in the fifth (output) layer.
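As an illustration only, a PyTorch sketch of the trajectory prediction network described in S51 to S54 is given below. Layer widths and dropout values follow the figures quoted above where they are stated; the input shapes, the single-channel handling of the distance matrix, the exact dropout placement and the arctangent convention used for the final angle are assumptions made for the sketch, not details fixed by the patent.

import torch
import torch.nn as nn
from torchvision.models import resnet18

def fc(sizes, dropout=0.0):
    # Build a fully connected stack: Linear -> ReLU -> Dropout between hidden layers.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
            if dropout:
                layers.append(nn.Dropout(dropout))
    return nn.Sequential(*layers)

class TrajectoryNet(nn.Module):
    def __init__(self, b=64):
        super().__init__()
        self.img_net = resnet18(num_classes=256)      # visual spatial features, 256-dim output
        self.dist_net = resnet18(num_classes=256)     # distance-matrix features, 256-dim output
        self.dist_net.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)  # assumed single-channel distance input
        self.csi_net = fc([b, 64, 32, 16, 8], dropout=0.2)           # electromagnetic features, 8-dim
        self.gru = nn.GRU(input_size=520, hidden_size=16, batch_first=True)  # 256+256+8 per time step
        self.head_x = fc([16, 16, 32, 16, 1], dropout=0.1)           # abscissa from temporal feature F_T
        self.head_y = fc([520, 256, 128, 64, 1], dropout=0.1)        # ordinate from current-time feature F_2

    def forward(self, rgb_seq, dist_seq, csi_seq):
        # rgb_seq: (N, T, 3, 224, 224); dist_seq: (N, T, 1, H, W); csi_seq: (N, T, B)
        n, t = rgb_seq.shape[:2]
        feats = []
        for i in range(t):
            f_img = self.img_net(rgb_seq[:, i])
            f_dist = self.dist_net(dist_seq[:, i])
            f_csi = self.csi_net(csi_seq[:, i])
            feats.append(torch.cat([f_img, f_dist, f_csi], dim=1))   # (N, 520)
        f1 = torch.stack(feats, dim=1)                 # time-series multi-modal feature F_1
        f2 = feats[-1]                                 # current-time multi-modal feature F_2
        _, h_n = self.gru(f1)                          # temporal feature F_T
        x_hat = self.head_x(h_n[-1])                   # predicted abscissa
        y_hat = self.head_y(f2)                        # predicted ordinate
        return torch.atan2(y_hat, x_hat)               # predicted beamforming angle (one assumed convention)

model = TrajectoryNet()
angle = model(torch.randn(2, 2, 3, 224, 224), torch.randn(2, 2, 1, 64, 64), torch.randn(2, 2, 64))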
The invention provides a real-time beam alignment technique for the Internet of Vehicles based on multi-modal information fusion. It uses the images acquired by the RSU and the target distance information obtained after radar signal processing, extracting visual spatial features from the images with a neural network to ensure reliable prediction of the future position; the angular-domain information of the CSI data acquired on the Sub-6 GHz band is extracted and refined so that the neural network can easily extract angular-domain features rich in electromagnetic spatial information, improving prediction accuracy; and the multi-modal information of the vehicle's historical trajectory is stacked at each moment to build time-series data, so that the neural network learns the vehicle motion trend, improving robustness when predicting the lateral random micro-movements of the vehicle. RGB cameras, radar sensing equipment, and Sub-6 GHz and mmWave communication equipment are common on RSUs and intelligent vehicles in the Internet of Vehicles, satisfying the practical requirements of easy installation, flexible and reliable operation, and low cost.
It should be noted that the purpose of the disclosed embodiments is to aid further understanding of the present invention, but those skilled in the art will appreciate that: various alternatives and modifications are possible without departing from the scope of the invention and the appended claims. Therefore, the invention should not be limited to the disclosed embodiments, but rather the scope of the invention is defined by the appended claims.

Claims (9)

1. A real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion, characterized in that a trajectory prediction network model is designed, and the RGB images captured by the road side unit RSU in the Internet of Vehicles, the distance data obtained by processing radar signals, and the channel state information CSI matrix obtained on the Sub-6 GHz control channel band are used as multi-modal inputs to the trajectory prediction network model, so that the beamforming angle is predicted, the accuracy of predicting the future position of the vehicle is improved, and real-time beam alignment in the Internet of Vehicles is realized; the method comprises the following steps:
1) Acquiring original multi-modal data:
In the nth time slot, RGB images of the traffic scene and vehicle distance data are acquired by the sensing equipment, and the CSI matrix of the control channel band is acquired by the communication equipment; the nth time slot is the beamforming angle prediction time before each data block between the RSU and the vehicle starts to be transmitted;
2) Preprocessing the original multi-mode data obtained in the step 1) to obtain a preprocessed RGB image, a vehicle distance matrix and a CSI angular domain feature matrix;
3) Constructing and obtaining time sequence multi-modal data;
the time sequence multi-mode data comprises RGB images, a vehicle distance matrix and a CSI angular domain feature matrix;
4) Constructing a trajectory prediction neural network model, inputting the time-series multi-modal data and the current-time multi-modal data into the trajectory prediction neural network model, extracting visual spatial features, electromagnetic spatial features and temporal features, and carrying out early fusion; predicting the vehicle position coordinates one time slot ahead, and then obtaining the angle of the vehicle relative to the RSU in the (n+2)th time slot, i.e. the beamforming angle of the (n+2)th time slot; the trajectory prediction neural network model comprises a fully connected network, a residual neural network and a gated recurrent unit network structure;
5) In the next data block transmission, the RSU transmits the beam forming angle of the n+2th time slot predicted in the step 4) to the vehicle through the data block of the n+1th time slot, so that the vehicle knows the beam forming angle of the n+2th time slot in advance, and the vehicle performs beam forming of the n+2th time slot according to the angle;
6) On the n+2 time slot, the RSU and the vehicle respectively perform beam forming and alignment through a beam forming angle obtained by prediction in advance, and a millimeter wave communication link is established for communication;
7) At each prediction time, the RSU executes the steps 1) to 6), so that the millimeter wave beam real-time alignment in the vehicle running process is completed, and the stable connection of the vehicle to the wireless communication network is ensured;
Through the above steps, beamforming prediction based on multi-modal information fusion is realized, and real-time millimeter wave beam alignment during vehicle driving is completed through the fusion of multi-modal perception information and the CSI.
2. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 1, wherein in step 1) the original multi-modal data are obtained as follows: a plurality of sensing devices and communication equipment are mounted on the RSU, and communication equipment is mounted on the vehicle; the sensing devices comprise an RGB camera and radar equipment; an RGB image of the traffic scene is captured by the RGB camera, and the distance data of the target vehicle are obtained by the radar equipment; the communication equipment operates in the Sub-6 GHz and mmWave bands; and the CSI matrix of the Sub-6 GHz band is obtained on the control channel of the communication equipment through the signal processing equipment of the RSU.
3. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 1, wherein the preprocessing of the original multi-modal data in step 2) comprises: performing size reduction and data standardization on the RGB image; arranging the distance data into matrix form and normalizing the data; and extracting angular-domain features from the CSI matrix.
4. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 1, wherein step 3) constructs the time-series multi-modal data, specifically: after the RSU obtains the CSI matrix through channel estimation on the signals sent by the vehicle, at each beamforming angle prediction time the storage unit of the RSU stores and stacks the multi-modal sensing data and the CSI matrix of the current and previous prediction times, thereby constructing the time-series multi-modal data.
5. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 1, wherein the method is realized by providing a real-time beam alignment apparatus; the apparatus comprises a sensing module, a communication module, an image preprocessing module, an angular domain information extraction module and a trajectory prediction neural network module on the RSU, and a communication module on the vehicle.
6. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 5, wherein the image preprocessing module is used to preprocess the original RGB image data X to obtain an image X_P from which features are easy to extract; the preprocessing comprises the following processes S21 to S22:
S21: cutting the original RGB image with higher resolution to obtain an image X' with reduced size;
S22: x 'R,X′G,X′B represents the pixel values of R, G, B three channels of the obtained image X', respectively; the mean value and standard deviation adopted by the standardization of the three channels are determined according to the track prediction network structure; the mean value of the pixel values X 'R,X′G,X′B of the three channels of the image X' is respectively recorded as mu R、μG、μB, and the standard deviation is respectively recorded as sigma R、σG、σB; data normalization operations are performed on X' R,X′G,X′B: Obtaining a data standardized image X P; /(I) Representing the R channel pixel value after data normalization; /(I)Representing the G channel pixel value after data normalization; /(I)Representing the B channel pixel value after data normalization;
the distance data D obtained after radar signal processing are preprocessed: the distance data of the target vehicle are arranged into a matrix D′ according to the position coordinates of the target vehicle, and data normalization is carried out on D′, i.e. all elements of D′ are divided by the maximum element of D′, obtaining a matrix form D_P from which the neural network easily extracts features.
7. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 5, wherein the CSI matrix H is preprocessed by the angular domain information extraction module to obtain a representation H_P with strong angular-domain characteristics; the preprocessing comprises the following processes S41 to S43:
S41: a codebook F ∈ ℂ^(N_t×B) with angular resolution B is predefined, and the energy value matrix of each subcarrier at each angle is calculated as E = |F^T H| ∈ ℝ^(B×N_s);
S42: based on the energy value matrix E = (E_ij)_(B×N_s), where E_ij is the energy value of subcarrier j at angle i, the energy values of all subcarriers at each angle are accumulated to obtain the dimension-reduced energy vector E′ = (E′_i)_(B×1), with E′_i = Σ_{j=1}^{N_s} E_ij;
S43: based on the dimension-reduced energy vector E′ thus obtained, a set number of the largest energy values is retained and the remaining positions are set to zero, completing the angular-domain information extraction; the obtained angular-domain features are recorded as the angular-domain feature matrix H_P.
8. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 7, wherein the preprocessed multi-modal data are input into the constructed trajectory prediction neural network model; the trajectory prediction neural network takes as input the preprocessed RGB images, distance matrices and angular-domain feature matrices of the current and previous prediction times, and builds time-series data by stacking the multi-modal data of the current and previous prediction times; in the time-series data, the image sequence and the RGB image at the current prediction time are processed by the residual neural network ResNet-18, the distance matrix sequence and the distance matrix at the current prediction time are processed by ResNet-18, and the angular-domain feature matrix sequence and the angular-domain feature matrix at the current prediction time are processed by the fully connected network, learning the visual spatial features and the electromagnetic spatial features; all features at the current time are concatenated and then input into a fully connected network to obtain the predicted ordinate; the processed features of the time-series multi-modal data are concatenated and input into the GRU to obtain the temporal feature, which is then input into a fully connected network to obtain the predicted abscissa; and the position of the vehicle after the next time slot is obtained through the trajectory prediction neural network model, from which the beamforming angle is calculated, completing millimeter wave beam alignment.
9. The real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion according to claim 8, comprising the following processes S51 to S54:
S51: inputting an RGB image X P, a distance matrix D P and a CSI matrix H P extracted from angular domain information at the current moment to a constructed track prediction neural network model, overlapping the RGB image X P, the distance matrix D P and the CSI matrix H P with the same type of data of the last time slot, adding a time dimension, constructing time sequence multi-mode data, and recording the time sequence multi-mode data as T;
S52: the RGB image sequence in the time-series multi-modal data T and the RGB image X_P at the current prediction time are input into the ResNet-18 network; the distance matrix sequence in T and the distance matrix D_P at the current prediction time are input into the ResNet-18 network; the CSI angular-domain feature matrix sequence in T and the CSI angular-domain feature matrix H_P at the current prediction time are input into the fully connected network; the features obtained by processing the time series are concatenated to obtain the time-series multi-modal feature F_1, and all features at the current time are concatenated to obtain the current-time multi-modal feature F_2;
S53: the time-series multi-modal feature F_1 obtained in step S52 is input into the gated recurrent unit GRU to obtain the temporal feature F_T; F_T is input into a fully connected network for prediction to obtain the predicted abscissa x̂ of the vehicle after the next time slot;
S54: the current-time multi-modal feature F_2 obtained in step S52 is input into a fully connected network for prediction to obtain the predicted ordinate ŷ of the vehicle after the next time slot; combining ŷ with the x̂ obtained in step S53, the beamforming angle of the vehicle relative to the RSU is calculated from (x̂, ŷ) by an inverse trigonometric function, completing real-time beam alignment.
CN202211024842.1A 2022-08-25 2022-08-25 Real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion Active CN115412844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211024842.1A CN115412844B (en) Real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211024842.1A CN115412844B (en) Real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion

Publications (2)

Publication Number Publication Date
CN115412844A CN115412844A (en) 2022-11-29
CN115412844B true CN115412844B (en) 2024-05-24

Family

ID=84161211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211024842.1A Active CN115412844B (en) Real-time beam alignment method for the Internet of Vehicles based on multi-modal information fusion

Country Status (1)

Country Link
CN (1) CN115412844B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787481A (en) * 2020-06-17 2020-10-16 北京航空航天大学 Road-vehicle coordination high-precision sensing method based on 5G
CN112738764A (en) * 2020-12-28 2021-04-30 北京邮电大学 Broadband millimeter wave beam tracking method based on vehicle motion track cognition
CN113260084A (en) * 2021-05-18 2021-08-13 北京邮电大学 Millimeter wave-based vehicle networking V2X communication link establishment method
CN114120288A (en) * 2021-12-02 2022-03-01 北京航空航天大学合肥创新研究院(北京航空航天大学合肥研究生院) Vehicle detection method based on millimeter wave radar and video fusion
CN114706068A (en) * 2022-02-24 2022-07-05 重庆邮电大学 Road side unit cooperative target tracking system, method and storage medium
CN114844545A (en) * 2022-05-05 2022-08-02 东南大学 Communication beam selection method based on sub6GHz channel and partial millimeter wave pilot frequency

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560972B (en) * 2020-12-21 2021-10-08 北京航空航天大学 Target detection method based on millimeter wave radar prior positioning and visual feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
段续庭 et al., "Overview of deep learning applications in autonomous driving," Unmanned Systems Technology (无人系统技术), 2021(06). *
胡延平 et al., "Vehicle tracking based on information fusion of millimeter-wave radar and vision sensor," China Mechanical Engineering (中国机械工程), 2021(18). *

Also Published As

Publication number Publication date
CN115412844A (en) 2022-11-29

Similar Documents

Publication Publication Date Title
Charan et al. Vision-aided 6G wireless communications: Blockage prediction and proactive handoff
Chen et al. Edge intelligence empowered vehicle detection and image segmentation for autonomous vehicles
CN105488534B (en) Traffic scene deep analysis method, apparatus and system
WO2019243863A1 (en) Vehicle re-identification techniques using neural networks for image analysis, viewpoint-aware pattern recognition, and generation of multi-view vehicle representations
Charan et al. Vision-aided dynamic blockage prediction for 6G wireless communication networks
CN114972654B (en) Three-dimensional target detection method based on road side point cloud completion
Chen et al. Enhancing the robustness of object detection via 6G vehicular edge computing
CN112614130A (en) Unmanned aerial vehicle power transmission line insulator fault detection method based on 5G transmission and YOLOv3
CN115412844B (en) Multi-mode information alliance-based real-time alignment method for vehicle networking wave beams
Guan et al. Mask-VRDet: A robust riverway panoptic perception model based on dual graph fusion of vision and 4D mmWave radar
Liu et al. HPL-ViT: A unified perception framework for heterogeneous parallel LiDARs in V2V
CN117557923A (en) Real-time traffic detection method for unmanned aerial vehicle vision sensing device
Tian et al. Multimodal Transformers for Wireless Communications: A Case Study in Beam Prediction
CN107525525A (en) A kind of path alignment system and method for new-energy automobile
CN117274967A (en) Multi-mode fusion license plate recognition algorithm based on convolutional neural network
Sali et al. A review on object detection algorithms for ship detection
CN116384470A (en) Convolutional neural network model compression method and device combining quantization and pruning
CN115600101A (en) Unmanned aerial vehicle signal intelligent detection method and device based on priori knowledge
Wang et al. F-transformer: Point cloud fusion transformer for cooperative 3d object detection
CN107831762A (en) The path planning system and method for a kind of new-energy automobile
Chen et al. Rf-inpainter: Multimodal image inpainting based on vision and radio signals
Neema et al. User spatial localization for vision aided beam tracking based millimeter wave systems using convolutional neural networks
Liang et al. Transformer vehicle re-identification of intelligent transportation system under carbon neutral target
Gupta et al. Efficient mmWave Beam Selection using ViTs and GVEC: GPS-based Virtual Environment Capture
CN116343522B (en) Intelligent park unmanned parking method based on space-time vehicle re-identification

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant