CN109889792A - V2X-based vehicle-mounted video direct transmission method - Google Patents

V2X-based vehicle-mounted video direct transmission method

Info

Publication number
CN109889792A
CN109889792A (application CN201910295668.6A)
Authority
CN
China
Prior art keywords
vehicle
video image
image
video
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910295668.6A
Other languages
Chinese (zh)
Other versions
CN109889792B (en)
Inventor
田大新
段续庭
孙成明
张创
田柯宇
袁昊东
刘天燕
陈忻恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201910295668.6A
Publication of CN109889792A
Application granted
Publication of CN109889792B
Active legal status
Anticipated expiration legal status

Links

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The application provides a V2X-based vehicle-mounted video direct transmission method. A vehicle captures video images in real time with its imaging devices and stitches them in real time using template matching and a fade-in/fade-out blending algorithm, obtaining the panoramic video image of the on-board unit at that moment; the panoramic video is then transmitted directly to other on-board units using the H.264 algorithm. The invention makes full use of the imaging systems and communication resources of the intelligent vehicle-infrastructure cooperative system, and can effectively solve or alleviate the problems of real-time stitching and direct transmission of vehicle-mounted panoramic video.

Description

V2X-based vehicle-mounted video direct transmission method
Technical field
The present invention relates to a V2X-based vehicle-mounted video direct transmission method, and belongs to the interdisciplinary field of intelligent vehicle vision technology and vehicle-road wireless communication technology.
Background technique
As a frontier of transportation science in recent years, the Intelligent Vehicle-Infrastructure Cooperative System (I-VICS) perceives vehicle and road state information through various sensing facilities and realizes information exchange and sharing between vehicles and between vehicle and road via wireless communication. In such a traffic system: on the one hand, in a Vehicle-to-Infrastructure (V2I) system, traffic lights serve as an important medium of vehicle-road interaction; they broadcast their status to surrounding on-board units through dedicated communication technology, and each vehicle makes intelligent decisions based on the received road-section information, computing an optimal recommended route and an optimal recommended speed so that the vehicle passes through intersections at a relatively uniform speed and in a time-optimized manner. On the other hand, in a Vehicle-to-Vehicle (V2V) system, a vehicle receives the status information of surrounding vehicle nodes, such as their positions and speeds, performs further computation and analysis, makes intelligent decisions, converts the decision results into control signals, and controls the vehicle's next maneuver, realizing vehicle collision avoidance, platoon following, and lane-change overtaking; this effectively reduces drivers' travel time and, to a certain extent, protects their safety.
In an intelligent vehicle-infrastructure cooperative system, a vehicle's perception of the surrounding environment is one of the key pieces of information for realizing interaction between the vehicle and the outside world. To let a vehicle obtain a panoramic image of its surroundings in real time, real-time stitching of video images from on-board imaging devices and real-time environmental video interaction between vehicles and between vehicle and road have emerged. During video stitching and direct transmission on on-board units, the imaging devices have limited fields of view, and errors arise from the vehicle's high-speed motion and from device registration. Simple fusion methods lose high-frequency video content, while complex blending algorithms struggle to meet real-time requirements. Guaranteeing the completeness of video-image stitching while also satisfying the real-time demand has thus become a major bottleneck for realizing full-scene vehicle perception and remote automated driving.
General image stitching technology uses the similarity of pixels in the overlapping region to stitch multiple sub-images into one large image; feature extraction is the most critical step and directly affects the stitching result. Image features generally include corners, contours, and certain invariant moments. Among these, corner features have the advantages of small computational cost, strong adaptability, high accuracy, and rich information content, making them the most widely used features in image stitching. The Harris corner detection algorithm is a common corner-based feature extraction method; corners correspond to positions of high information content in an image, and using them for camera calibration, matching, and reconstruction of the original image can greatly improve precision. However, it produces a large number of corner matches, which greatly reduces image-processing speed and hurts the algorithm's operational efficiency. The Scale-Invariant Feature Transform (SIFT), based on scale space, is another common feature-matching and stitching method; it robustly solves image registration across different cameras, times, and viewpoints, but its computational load is large: when four images must be processed simultaneously, it cannot meet the real-time requirements of video-image stitching.
Consequently, making efficient use of existing technical resources to solve the problems of real-time stitching and direct transmission of vehicle-mounted panoramic video is an urgent issue in the current development of intelligent vehicle-infrastructure cooperative systems. For an individual vehicle, panoramic perception information can be used to realize functions such as fleet management, hazard avoidance, and cooperative collision avoidance; for the future development of intelligent vehicles, real-time stitching and direct transmission of vehicle-mounted video is a cornerstone of full-scene vehicle perception and remote automated driving. However, both the current Harris corner detection algorithm and SIFT feature matching struggle to meet the real-time requirements of panoramic stitching of video images on on-board units.
Summary of the invention
To solve the above problems, the present invention proposes a V2X-based vehicle-mounted video direct transmission method. It uses template matching and a fade-in/fade-out blending algorithm to achieve both completeness and real-time performance of video stitching, and adopts the H.264 algorithm for direct video transmission between on-board units, solving the problems of real-time stitching and direct transmission of vehicle-mounted panoramic video.
The present invention, a V2X-based vehicle-mounted video direct transmission method, is realized through the following steps:
Step 1: stitch the vehicle-mounted multi-angle video images in real time, and perform beautification fusion;
Step 2: the on-board unit transmits the stitched video packets directly to other on-board units in real time.
Preferably, the specific steps of Step 1 are as follows:
A. Within the coverage of the on-board imaging devices, multiple imaging devices arbitrarily capture several video images that together cover the vehicle's panoramic surroundings. Each image is modeled mathematically and denoted f(x, y), and the resolution and visibility of the imaging devices are determined. Under the premise that the panoramic farthest visible relative distance of the imaging devices is measured, the on-board imaging system continuously receives multi-angle real-time video images;
B. The real-time video images received from the multiple imaging devices are collected at an imaging center, and a high-speed template matching method is applied repeatedly: the sequential similarity detection algorithm (SSDA) is used to compute the non-similarity of the different video images, denoted DSI_i(u, v);
C. For on-board communication unit i, the non-similarity DSI_i(u, v) of the different video images at the imaging center is computed, where DSI_i(u, v) serves as the matching measure (i.e. the non-similarity), t(p, q) represents any pixel in the template, (u, v) is the top-left pixel coordinate of the overlapping part of template and image, and the template size is m × n;
D. To improve the matching rate as far as possible while guaranteeing matching stability, the absolute difference between each template pixel and the corresponding pixel of the overlapping image region is computed using a dynamic threshold method:
Thresh(n) = k1 × θ + k2 × n
θ = k1 × θ + k2 × N
where 0 ≤ k1 < 1, 0 ≤ k2 < N, Thresh(n) is the threshold selected at the n-th time, k1 and k2 are weighting coefficients, θ is the initial threshold, and N is the maximum number of threshold selections;
E. From the DSI_i(u, v) values obtained by the high-speed template matching method, the position differences of similar pictures across the different video images are further computed; the position differences of this translation model are denoted Δx and Δy;
F. According to the template similarity, the video images to be stitched are each translated by Δx and Δy along the assigned directions to form the stitched video image; the overlapping part produced by the translation is called the transition region T;
G. Beautification fusion is performed in the transition region T using the fade-in/fade-out blending algorithm to obtain the final stitched target video image; the fade-in/fade-out blending algorithm processes the gray values of pixels in the overlapping region.
Preferably, the calculation formula of the fade-in/fade-out blending algorithm in step G is as follows:
FUS(x, y) = w1 · f1(x, y) + w2 · f2(x, y)
where FUS(x, y) is the gray value of the fused image pixel; f1(x, y) is the gray value of the left image pixel to be stitched; f2(x, y) is the gray value of the right image pixel to be stitched; w1 and w2 are the corresponding weights, with w1 + w2 = 1, 0 < w1 < 1, 0 < w2 < 1. According to the fade-in/fade-out method, w1 and w2 are calculated as:
w1 = (xr − xi) / (xr − xl), w2 = (xi − xl) / (xr − xl)
where xi is the abscissa of the current pixel, xl is the left boundary of the overlapping region, and xr is the right boundary of the overlapping region.
Preferably, the specific steps of Step 2 are as follows:
A. Start the on-board client and on-board server. On the client, create a socket and set its attributes, bind information including the IP address and port to the socket, and connect to the server;
B. The video data is then compressed in real time and transmitted based on the H.264 algorithm;
C. After each frame of the video image is compressed in real time, a socket is also created and configured at the server, the IP address and port are bound, and the server starts listening and accepts the client connection. At this point, the server first obtains the position of the client vehicle and judges whether blocking occurs during transmission over the selected channel model. If blocking occurs, the compression rule is adjusted in time and the number of I-frames formed in the compressed H.264 stream is increased to guarantee transmission reliability. Meanwhile, data packets are received and decoded in real time, and the video image information is obtained and played. After transmission finishes, the network connection is closed; at this point, the direct video transmission between on-board units is complete.
The present invention has the following advantages:
(1) The V2X-based vehicle-mounted video direct transmission method of the present invention solves the problems of real-time stitching and direct transmission of vehicle-mounted panoramic video. It makes full use of the imaging systems in the intelligent vehicle-infrastructure cooperative system. On the premise that accuracy remains within an acceptable range, it achieves both completeness and real-time performance of video stitching using template matching and fade-in/fade-out blending, and, exploiting the V2V and V2I communication advantages of the cooperative system, transmits the stitched panoramic video directly over the local network, greatly reducing transmission time;
(2) The method largely eliminates vehicle blind zones, helping the driver check blind-zone vehicles, children, pets, and similar hazards when making sharp turns. The blind-zone elimination effect is especially significant for trucks and other long-box vehicles: the driver's seat is high, the rear-view mirror range is limited, and the blind zone is large; with vehicle-mounted video stitching, the driver can use the panoramic video image to judge the blind-zone environment and significantly improve safety.
(3) The method lets a vehicle receive, in advance, the real-time stitched panoramic video directly transmitted by one or more on-board units along a planned route, improving the feasibility, agility, and safety of intelligent route decisions. An intelligent vehicle judges the operating conditions of the planned route in advance from the video transmitted by vehicles ahead, which reserves ample time for changing decisions when emergencies arise and also helps the on-board unit select the optimal traffic path.
Brief description of the drawings
Fig. 1 is a flow chart of the vehicle-mounted video direct transmission method of the present invention.
Specific embodiment
The present invention is described in further detail below in conjunction with the drawings and embodiments.
The present invention, a V2X-based vehicle-mounted video direct transmission method, is realized, as shown in Fig. 1, through the following steps:
Step 1: stitch the vehicle-mounted multi-angle video images in real time, and perform beautification fusion;
A. Within the coverage of the on-board imaging devices, multiple imaging devices arbitrarily capture several video images that together cover the vehicle's panoramic surroundings. Each image is modeled mathematically and denoted f(x, y), and the resolution and visibility of the imaging devices are determined. Under the premise that the panoramic farthest visible relative distance of the imaging devices is measured, the on-board imaging system continuously receives multi-angle real-time video images;
B. The real-time video images received from the multiple imaging devices are collected at an imaging center, and a high-speed template matching method is applied repeatedly. To make template matching fast, the sequential similarity detection algorithm (SSDA) is used to compute the non-similarity of the different video images, denoted DSI(u, v).
C. For on-board communication unit i, the non-similarity DSI_i(u, v) of the different video images at the imaging center is computed, where DSI_i(u, v) serves as the matching measure, i.e. the non-similarity. t(p, q) represents any pixel in the template. (u, v) is not the center coordinate of the overlapping part of template and image but the top-left pixel coordinate of the overlap. The template size is m × n.
D. If a pattern consistent with the template exists at image position (u, v), the DSI_i(u, v) value is very small; otherwise it is large. In particular, where template and image overlap are completely inconsistent, the accumulated absolute differences between each template pixel and the corresponding image pixel grow drastically as they are summed. Therefore, during the accumulation, if the partial sum of absolute differences exceeds a certain threshold, it is concluded that no pattern consistent with the template exists at this position, and the computation moves on to the next position to compute DSI_i(u, v). Since computing DSI_i(u, v) involves only additions and subtractions, and the computation stops midway at most positions, the computation time can be significantly shortened and the matching speed improved. To improve the matching rate as far as possible while guaranteeing matching stability, this patent uses a dynamic threshold:
Thresh(n) = k1 × θ + k2 × n (2)
θ = k1 × θ + k2 × N (3)
where 0 ≤ k1 < 1, 0 ≤ k2 < N, Thresh(n) is the threshold selected at the n-th time, k1 and k2 are weighting coefficients, θ is the initial threshold, and N is the maximum number of threshold selections.
E. From the DSI_i(u, v) values obtained by the high-speed template matching method, the position differences of similar pictures across the different video images are further computed; the position differences of this translation model are denoted Δx and Δy.
F. According to the template similarity, the video images to be stitched are each translated by Δx and Δy along the assigned directions to form the stitched video image; the overlapping part produced by the translation is called the transition region T.
G. Since the transition region inevitably contains stitching seams that degrade the video image, the fade-in/fade-out blending algorithm is used to perform beautification fusion in the transition region T and obtain the final stitched target video image. The fade-in/fade-out blending algorithm processes the gray values of pixels in the overlapping region, and its calculation formula is as follows:
FUS(x, y) = w1 · f1(x, y) + w2 · f2(x, y)
where FUS(x, y) is the gray value of the fused image pixel; f1(x, y) is the gray value of the left image pixel to be stitched; f2(x, y) is the gray value of the right image pixel to be stitched; w1 and w2 are the corresponding weights, with w1 + w2 = 1, 0 < w1 < 1, 0 < w2 < 1. According to the fade-in/fade-out method, w1 and w2 are calculated as:
w1 = (xr − xi) / (xr − xl), w2 = (xi − xl) / (xr − xl)
where xi is the abscissa of the current pixel, xl is the left boundary of the overlapping region, and xr is the right boundary of the overlapping region.
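The fade-in/fade-out fusion of step G can be sketched as follows. The original formula images are not reproduced in this text, so the standard linear feathering weights w1 = (xr − xi)/(xr − xl) and w2 = 1 − w1 are assumed; the image sizes and gray values are invented for the example.

```python
import numpy as np

def fade_blend(left, right, xl, xr):
    """Blend two already-aligned grayscale images over the overlap
    columns [xl, xr): the left image's weight w1 falls linearly from
    1 to 0 across the transition region, w2 = 1 - w1 rises, so the
    stitching seam fades out smoothly."""
    fused = left.astype(float).copy()
    width = xr - xl
    for xi in range(xl, xr):
        w1 = (xr - xi) / width      # weight of the left image
        w2 = 1.0 - w1               # weight of the right image
        fused[:, xi] = w1 * left[:, xi] + w2 * right[:, xi]
    fused[:, xr:] = right[:, xr:]   # right of the overlap: right image only
    return fused

left = np.full((2, 8), 100.0)   # left image, gray value 100
right = np.full((2, 8), 200.0)  # right image, gray value 200
out = fade_blend(left, right, xl=2, xr=6)
print(out[0])  # 100, 100 then a smooth ramp toward 200, 200
```

Instead of a hard 100-to-200 jump at the seam, the transition region carries a monotone ramp, which is the "beautification" effect the patent describes.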
Step 2: the on-board unit transmits the stitched video packets directly to other on-board units in real time;
A. Start the on-board client and on-board server. On the client, create a socket and set its attributes, bind information such as the IP address and port to the socket, and connect to the server.
B. The video data is then compressed in real time and transmitted based on the H.264 algorithm, whose principle is as follows. Macroblock division is performed first. Within a few adjacent image frames, generally only about 10% of the pixels differ, luminance differences change by no more than 2%, and chrominance differences change by only about 1%, so such frames can be assigned to one group and compressed according to intra-frame and inter-frame compression principles. The H.264 encoder compresses selectively according to the magnitude of pixel-value change between different frames, producing mainly three formats. When compressing video, the encoder first performs a full compression that retains the original base data, forming the I-frame of the H.264 stream; subsequent frames that change little relative to the I-frame are then delta-compressed against it to form P-frames. Next, when frame content is almost unchanged, bidirectional predictive coding can be applied dynamically to guard against packet loss and reduce bandwidth, i.e., the preceding I- or P-frame and the following P-frame are used as reference frames for compression, forming B-frames. Since P-frames and B-frames depend on I-frames, losing I-frame data causes a chain reaction in which subsequent P- and B-frames cannot be decoded and the video image stays corrupted; therefore group-of-pictures (GOP) coding is introduced: a full I-frame is encoded every few frames, and the frames between two I-frames form one group of video packets. Even if some frames are lost, only the playback of the current group is affected. The residual data then undergoes DCT transformation, and finally lossless compression is performed with the lossless coding technique CABAC.
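The GOP structure described above, and the idea of increasing the number of I-frames when the channel is blocked (step C below), can be sketched as follows. The frame-type pattern and the interval values are illustrative assumptions, not what an actual H.264 encoder or this patent prescribes.

```python
def gop_pattern(num_frames, i_interval, b_between=2):
    """Assign H.264 frame types: an I-frame every i_interval frames
    (one GOP), B-frames between reference frames, P-frames otherwise.
    A smaller i_interval limits how far a lost packet can propagate,
    at the cost of compression ratio."""
    types = []
    for n in range(num_frames):
        k = n % i_interval
        if k == 0:
            types.append('I')            # full-intra frame, starts a GOP
        elif k % (b_between + 1) == 0:
            types.append('P')            # predicted from a previous I/P frame
        else:
            types.append('B')            # bi-directionally predicted frame
    return types

normal = gop_pattern(12, i_interval=12)
blocked = gop_pattern(12, i_interval=4)  # channel blocked: more I-frames
print(''.join(normal))
print(''.join(blocked))
```

Shortening `i_interval` when the server detects channel blocking inserts more I-frames, so a lost packet can corrupt at most one short group rather than a long run of dependent P/B-frames.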
C. After each frame of the video image is compressed in real time, a socket is also created and configured at the server, the IP address and port are bound, and the server starts listening and accepts the client connection. At this point, the server first obtains the position of the client vehicle and judges whether blocking occurs during transmission over the selected channel model. If blocking occurs, the compression rule is adjusted in time and the number of I-frames is increased to guarantee transmission reliability. Meanwhile, data packets are received and decoded in real time, and the video image information is obtained and played. After transmission finishes, the network connection is closed. At this point, the direct video transmission between on-board units is complete.
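A minimal sketch of the socket flow of Step 2: the length-prefix framing is an assumption added for the example, since TCP does not preserve message boundaries, and the "compressed frames" are placeholder byte strings rather than real H.264 output.

```python
import socket
import struct
import threading

def recv_exact(conn, nbytes):
    """Read exactly nbytes from the connection (or less if the peer closes)."""
    data = b''
    while len(data) < nbytes:
        chunk = conn.recv(nbytes - len(data))
        if not chunk:
            break
        data += chunk
    return data

# On-board server: create socket, bind IP address and port, start listening.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('127.0.0.1', 0))          # port 0: let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(1)

received = []
def serve():
    conn, _ = srv.accept()          # accept the client connection
    while True:
        hdr = recv_exact(conn, 4)   # 4-byte big-endian length prefix
        if len(hdr) < 4:
            break
        (length,) = struct.unpack('!I', hdr)
        received.append(recv_exact(conn, length))  # real-time decoding goes here
    conn.close()

t = threading.Thread(target=serve)
t.start()

# On-board client: create socket, connect to the server, send frame packets.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(('127.0.0.1', port))
for frame in (b'I-frame-data', b'P-frame-data'):
    cli.sendall(struct.pack('!I', len(frame)) + frame)
cli.close()                          # close the network connection
t.join()
srv.close()
print(received)
```

In the patent's setting the payloads would be H.264-compressed frame packets and the two peers would be separate on-board units on the local network rather than two threads on one host.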
The foregoing are merely preferred embodiments of the application and are not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the application shall be included within its scope of protection.

Claims (4)

  1. A V2X-based vehicle-mounted video direct transmission method, characterized in that the method is realized through the following steps: Step 1, stitch the vehicle-mounted multi-angle video images in real time and perform beautification fusion; Step 2, the on-board unit transmits the stitched video packets directly to other on-board units in real time.
  2. The method according to claim 1, characterized in that the specific steps of Step 1 are as follows:
    A. Within the coverage of the on-board imaging devices, multiple imaging devices arbitrarily capture several video images covering the vehicle's panoramic surroundings; each image is modeled mathematically and denoted f(x, y), and the resolution and visibility of the imaging devices are determined; under the premise that the panoramic farthest visible relative distance of the imaging devices is measured, the on-board imaging system continuously receives multi-angle real-time video images;
    B. The real-time video images received from the multiple imaging devices are collected at an imaging center, and a high-speed template matching method is applied repeatedly: the sequential similarity detection algorithm (SSDA) is used to compute the non-similarity of the different video images, denoted DSI_i(u, v);
    C. For on-board communication unit i, the non-similarity DSI_i(u, v) of the different video images at the imaging center is computed, where DSI_i(u, v) serves as the matching measure (i.e. the non-similarity), t(p, q) represents any pixel in the template, (u, v) is the top-left pixel coordinate of the overlapping part of template and image, and the template size is m × n;
    D. To improve the matching rate as far as possible while guaranteeing matching stability, the absolute difference between each template pixel and the corresponding pixel of the overlapping image region is computed using a dynamic threshold method:
    Thresh(n) = k1 × θ + k2 × n
    θ = k1 × θ + k2 × N
    where 0 ≤ k1 < 1, 0 ≤ k2 < N, Thresh(n) is the threshold selected at the n-th time, k1 and k2 are weighting coefficients, θ is the initial threshold, and N is the maximum number of threshold selections;
    E. From the DSI_i(u, v) values obtained by the high-speed template matching method, the position differences of similar pictures across the different video images are further computed, and the position differences of this translation model are denoted Δx and Δy;
    F. According to the template similarity, the video images to be stitched are each translated by Δx and Δy along the assigned directions to form the stitched video image, and the overlapping part produced by the translation is called the transition region T;
    G. Beautification fusion is performed in the transition region T using the fade-in/fade-out blending algorithm to obtain the final stitched target video image; the fade-in/fade-out blending algorithm processes the gray values of pixels in the overlapping region.
  3. The method according to claim 2, characterized in that the calculation formula of the fade-in/fade-out blending algorithm in step G is as follows:
    FUS(x, y) = w1 · f1(x, y) + w2 · f2(x, y)
    where FUS(x, y) is the gray value of the fused image pixel; f1(x, y) is the gray value of the left image pixel to be stitched; f2(x, y) is the gray value of the right image pixel to be stitched; w1 and w2 are the corresponding weights, with w1 + w2 = 1, 0 < w1 < 1, 0 < w2 < 1. According to the fade-in/fade-out method, w1 and w2 are calculated as:
    w1 = (xr − xi) / (xr − xl), w2 = (xi − xl) / (xr − xl)
    where xi is the abscissa of the current pixel, xl is the left boundary of the overlapping region, and xr is the right boundary of the overlapping region.
  4. The method according to claim 1, characterized in that the specific steps of Step 2 are as follows:
    A. Start the on-board client and on-board server; on the client, create a socket and set its attributes, bind information including the IP address and port to the socket, and connect to the server;
    B. The video data is then compressed in real time and transmitted based on the H.264 algorithm;
    C. After each frame of the video image is compressed in real time, a socket is also created and configured at the server, the IP address and port are bound, and the server starts listening and accepts the client connection; at this point, the server first obtains the position of the client vehicle and judges whether blocking occurs during transmission over the selected channel model; if blocking occurs, the compression rule is adjusted in time and the number of I-frames formed in the compressed H.264 stream is increased to guarantee transmission reliability; meanwhile, data packets are received and decoded in real time, and the video image information is obtained and played; after transmission finishes, the network connection is closed, at which point the direct video transmission between on-board units is complete.
CN201910295668.6A 2019-04-12 2019-04-12 Vehicle-mounted video direct transmission method based on V2X Active CN109889792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910295668.6A CN109889792B (en) 2019-04-12 2019-04-12 Vehicle-mounted video direct transmission method based on V2X


Publications (2)

Publication Number Publication Date
CN109889792A 2019-06-14
CN109889792B 2020-07-03

Family

ID=66937265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910295668.6A Active CN109889792B (en) 2019-04-12 2019-04-12 Vehicle-mounted video direct transmission method based on V2X

Country Status (1)

Country Link
CN (1) CN109889792B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102244680A (en) * 2011-07-04 2011-11-16 东华大学 Generation method of panoramic video code stream based on body area sensing array
CN106314424A (en) * 2016-08-22 2017-01-11 乐视控股(北京)有限公司 Overtaking assisting method and device based on automobile and automobile
CN107274346A (en) * 2017-06-23 2017-10-20 中国科学技术大学 Real-time panoramic video splicing system
US20180241993A1 (en) * 2016-05-17 2018-08-23 Arris Enterprises Llc Template matching for jvet intra prediction


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565606A (en) * 2020-12-02 2021-03-26 鹏城实验室 Panoramic video intelligent transmission method and equipment and computer storage medium
CN112565606B (en) * 2020-12-02 2022-04-01 鹏城实验室 Panoramic video intelligent transmission method and equipment and computer storage medium
WO2023015925A1 (en) * 2021-08-12 2023-02-16 中兴通讯股份有限公司 Vehicle blind spot detection method, vehicle, server, and storage medium

Also Published As

Publication number Publication date
CN109889792B (en) 2020-07-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant