CN109889792B - Vehicle-mounted video direct transmission method based on V2X

Vehicle-mounted video direct transmission method based on V2X

Info

Publication number
CN109889792B
CN109889792B
Authority
CN
China
Prior art keywords
vehicle
video
image
real
gradual
Prior art date
Legal status
Active
Application number
CN201910295668.6A
Other languages
Chinese (zh)
Other versions
CN109889792A (en)
Inventor
田大新
段续庭
孙成明
张创
田柯宇
袁昊东
刘天燕
陈忻恺
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN201910295668.6A
Publication of CN109889792A
Application granted
Publication of CN109889792B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application provides a V2X-based vehicle-mounted video direct transmission method. A vehicle splices, in real time, the video images captured by its imaging devices, using template matching and a gradual-in and gradual-out (fade-in/fade-out) fusion algorithm, to obtain a panoramic video image of the vehicle's surroundings at that moment, and then transmits the panoramic video directly to other vehicle-mounted devices using the H.264 algorithm. The invention makes full use of the imaging equipment and the communication interaction resources in the intelligent vehicle-road cooperative system, and can effectively solve or alleviate the problems of real-time splicing and direct transmission of vehicle-mounted panoramic video.

Description

Vehicle-mounted video direct transmission method based on V2X
Technical Field
The invention relates to a V2X-based vehicle-mounted video direct transmission method, and belongs to the technical field at the intersection of intelligent vehicle vision technology and vehicle-road wireless communication technology.
Background
As a leading direction in the development of traffic science in recent years, an intelligent vehicle-road Cooperative System (I-VICS) senses the state information of vehicles and roads by means of various sensing facilities, and realizes information interaction and sharing between vehicles and between vehicles and the road through vehicle-road wireless communication technology. In this traffic system: on one hand, in a Vehicle-to-Infrastructure (V2I) system, the traffic light serves as an important medium for vehicle-infrastructure interaction; it broadcasts its state information to surrounding vehicle-mounted equipment by means of dedicated communication technology, and vehicles make intelligent decisions according to the received road-section information, calculating an optimal proposed route and an optimal proposed speed, so that vehicles pass through intersections more uniformly and with better timing. On the other hand, in a Vehicle-to-Vehicle (V2V) system, a vehicle receives state information of surrounding vehicle nodes, such as the positions and speeds of surrounding vehicles, performs further calculation and analysis to make an intelligent decision, and converts the decision result into control signals governing the vehicle's next maneuver, thereby realizing collision avoidance, platoon following, and lane-change overtaking; this effectively reduces the driver's travel time and, to a certain extent, safeguards the driver's life.
In the intelligent vehicle-road cooperative system, the vehicle's perception of the external environment is one of the key pieces of information for realizing interaction between the vehicle and external information sources. To let a vehicle obtain a panoramic image of its surroundings in real time, the real-time video images from the vehicle-mounted imaging equipment must be spliced completely, and real-time surrounding-environment video must be exchanged between vehicles and between vehicle and road. In the video splicing and direct transmission process of the vehicle-mounted equipment, the imaging devices have a limited viewing angle, and the high-speed motion of the vehicle and equipment registration errors introduce further difficulty. A simple fusion method causes loss of the high-frequency content of the video image, while a complex fusion algorithm can hardly run in real time; how to guarantee the completeness of video image splicing while meeting its real-time requirement has therefore become a bottleneck for vehicle global perception and remote automated driving.
In general image stitching technology, a plurality of images are spliced into one large image by exploiting the similarity of pixels in the overlapping parts of the images; feature extraction is the most critical step and directly determines the splicing result. Image features generally include corners, contours, and certain invariant moments. Corner points have the advantages of low computational cost, strong adaptability, high accuracy, and rich information content, and have become the most commonly used features in image stitching. The Harris corner extraction algorithm is a common corner-based feature extraction method: corners correspond to positions with high information content in an image, and compared with operating on the raw image, using corners for camera calibration, matching, and reconstruction can greatly improve accuracy. However, when a large number of corners must be matched, the method greatly reduces image processing speed and hurts the efficiency of the algorithm. The Scale-Invariant Feature Transform (SIFT), a scale-space feature matching method, is another common approach to image stitching; it robustly solves image registration across different cameras and different viewing times, but its computational cost is large, and it cannot meet the real-time requirement of video image splicing when 4 images need to be processed simultaneously.
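For context only, the corner-based approach mentioned above can be sketched in a few lines with OpenCV's real cv2.cornerHarris call; the file name and the 0.01 response-threshold factor are illustrative placeholders, and this sketch is not part of the patented method:

```python
import cv2
import numpy as np

# Minimal Harris corner extraction sketch; "frame.png" and the 0.01
# threshold factor are illustrative assumptions, not from the patent.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs
print(f"{len(corners)} corner candidates found")
```

With thousands of such candidates per frame, the matching stage dominates the runtime, which is the bottleneck the patent's template-matching approach avoids.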
Therefore, how to effectively use existing technical resources to solve the problems of real-time splicing and direct transmission of vehicle-mounted panoramic video is a problem that urgently needs to be solved in the development of the current intelligent vehicle-road cooperative system. For an individual vehicle, its panoramic perception information can support functions such as fleet management, danger avoidance, and vehicle-vehicle cooperative collision avoidance; for the future development of intelligent vehicles, real-time splicing and direct transmission of vehicle-mounted video is a cornerstone of vehicle global perception and remote automated driving. However, the existing Harris corner extraction algorithm and SIFT feature matching algorithm can hardly meet the real-time requirement of vehicle-mounted equipment for panoramic splicing of video image information.
Disclosure of Invention
The invention aims to solve the above problems and provides a V2X-based vehicle-mounted video direct transmission method, which achieves completeness and real-time performance of video splicing by using template matching and a gradual-in and gradual-out fusion algorithm, adopts the H.264 algorithm for video direct transmission among vehicle-mounted devices, and thereby solves the problems of real-time splicing and direct transmission of vehicle-mounted panoramic video.
The invention relates to a vehicle-mounted video direct transmission method based on V2X, which is realized by the following steps:
step one, splicing vehicle-mounted multi-angle video images in real time, and beautifying and fusing;
step two, directly transmitting the video packet after real-time splicing of the vehicle-mounted equipment to other vehicle-mounted equipment;
Preferably, the specific steps of step one are as follows:
A. within the coverage range of the vehicle-mounted imaging equipment, a plurality of imaging devices capture video images that together cover the panorama around the vehicle; the images are mathematically modeled and recorded as f(x, y), and the resolution and visibility of the imaging devices are determined; on the premise that the maximum visible relative distance between the vehicle-mounted equipment and the imaging devices across the panorama has been measured, the imaging system of the vehicle-mounted equipment continuously receives multi-angle real-time video images;
B. the real-time video images received from the plurality of imaging devices are gathered at an imaging center, a high-speed template matching method is applied repeatedly, and the Sequential Similarity Detection Algorithm (SSDA) is adopted to calculate the dissimilarity between different video images, recorded as DSI_i(u, v);
C. for vehicle-mounted communication unit i, the dissimilarity DSI_i(u, v) between different video images at the imaging center is

$$\mathrm{DSI}_i(u,v)=\sum_{p=1}^{m}\sum_{q=1}^{n}\bigl|f(u+p-1,\,v+q-1)-t(p,q)\bigr|$$

where DSI_i(u, v) serves as the matching measure, i.e. the dissimilarity; t(p, q) denotes an arbitrary pixel in the template; (u, v) are the pixel coordinates of the top-left corner of the overlap between the template and the image; and the size of the template is m × n;
D. to raise the matching rate as much as possible while keeping the matching stable, the absolute differences between each pixel in the template and the corresponding pixel of the image overlap are accumulated against a dynamic threshold:

$$\mathrm{thresh}(n)=k_1\theta+k_2 n$$
$$\theta=k_1\theta+k_2 N$$

where $0\le k_1<1$ and $0\le k_2<N$; thresh(n) is the n-th selected threshold, $k_1$ and $k_2$ are weighting coefficients, $\theta$ is the initial threshold, and N is the maximum number of threshold selections;
E. from the DSI_i(u, v) values obtained by the high-speed template matching method, the position differences of similar pictures across the different video images are further calculated, and the position differences of the translation model are recorded as $\Delta x_i$ and $\Delta y_i$;
F. according to the matched template points, the video images to be spliced are translated by $\Delta x_i$ and $\Delta y_i$ respectively along the designated directions, forming a spliced video image, wherein the translated and recombined part is called the transition region T;
G. the transition region T is beautified and fused by applying the gradual-in and gradual-out fusion algorithm to obtain the final spliced target video image; the gradual-in and gradual-out fusion algorithm processes the gray values of the pixels in the overlapping region.
Preferably, the calculation formula of the gradual-in and gradual-out fusion algorithm in step G is:

$$\mathrm{FUS}_i(x,y)=\begin{cases}f_1(x,y), & (x,y)\in f_1\\ w_1 f_1(x,y)+w_2 f_2(x,y), & (x,y)\in f_1\cap f_2\\ f_2(x,y), & (x,y)\in f_2\end{cases}$$

in the formula: FUS_i(x, y) represents the gray value of the fused image pixel; f_1(x, y) represents the gray value of a pixel of the left image to be spliced; f_2(x, y) represents the gray value of a pixel of the right image to be spliced; w_1, w_2 are the corresponding weights, with $w_1+w_2=1$, $0<w_1<1$, $0<w_2<1$. According to the gradual weighting scheme, w_1 and w_2 are calculated as:

$$w_1=\frac{x_r-x_i}{x_r-x_l},\qquad w_2=\frac{x_i-x_l}{x_r-x_l}$$

in the formula: $x_i$ is the abscissa of the current pixel point; $x_l$ is the left boundary of the overlap region; $x_r$ is the right boundary of the overlap region.
Preferably, the specific steps of the second step are as follows:
A. the vehicle-mounted client and the vehicle-mounted server are started; a socket is created on the client, the socket attributes are set, information including the IP address and port is bound to the socket, and a connection to the server is established;
B. then, the video data is compressed and transmitted in real time based on the H.264 algorithm;
C. after each frame of the video image is compressed in real time, a socket is created and configured at the server side, the IP address and port are bound, and listening for and accepting client connections begins; the server first obtains the position of the client vehicle, judges from the selected channel model whether occlusion exists along the transmission path, and, if so, adjusts the compression rule in time, increasing the number of I-frames in the compressed H.264 stream to guarantee transmission reliability; meanwhile, the data packets are received and decoded in real time, and the video image information is obtained and played; after the transmission process is finished, the network connection is closed, at which point the video direct transmission between the vehicle-mounted devices is complete.
The invention has the advantages that:
(1) The V2X-based vehicle-mounted video direct transmission method of the invention solves the problems of real-time splicing and direct transmission of vehicle-mounted panoramic video. It makes full use of the imaging equipment system in the intelligent vehicle-road cooperative system; on the premise that accuracy stays within an acceptable range, it achieves completeness and real-time performance of video splicing using template matching and the gradual-in and gradual-out fusion algorithm, and exploits the vehicle-vehicle and vehicle-road communication interaction advantages of the vehicle-road cooperative system to transmit the spliced panoramic video directly over a local network, greatly reducing transmission time;
(2) The V2X-based vehicle-mounted video direct transmission method of the invention eliminates the vehicle's blind zones to the greatest extent, helping the driver check for vehicles in the rear blind zone when making sharp turns and detect children and pets in the blind zone. The blind-zone elimination effect is especially significant for long-box vehicles such as large trucks: their drivers sit high, the rear-view mirrors cover only a limited range, and the blind zone is large. With vehicle-mounted video splicing, the driver can judge the blind-zone environment with the aid of the panoramic video image, which greatly improves safety.
(3) With the V2X-based vehicle-mounted video direct transmission method, the feasibility, speed, and safety of a single or multiple planned routes can be judged intelligently in advance from the real-time panoramic spliced video transmitted directly by the vehicle-mounted equipment along those routes. An intelligent vehicle can assess the traffic conditions of the route ahead from the directly transmitted video of vehicles on the planned route, reserving ample time to revise its decision when an unexpected situation arises, and helping the vehicle-mounted equipment select the optimal travel route.
Drawings
FIG. 1 is a flow chart of the V2X-based vehicle-mounted video direct transmission method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention relates to a vehicle-mounted video direct transmission method based on V2X, which is realized by the following steps as shown in figure 1:
step one, splicing vehicle-mounted multi-angle video images in real time, and beautifying and fusing;
A. within the coverage range of the vehicle-mounted imaging equipment, a plurality of imaging devices capture video images that together cover the panorama around the vehicle; the images are mathematically modeled and recorded as f(x, y), and the resolution and visibility of the imaging devices are determined; on the premise that the maximum visible relative distance between the vehicle-mounted equipment and the imaging devices across the panorama has been measured, the imaging system of the vehicle-mounted equipment continuously receives multi-angle real-time video images;
B. the real-time video images received from the plurality of imaging devices are gathered at an imaging center, and a high-speed template matching method is applied repeatedly. To speed up template matching, the Sequential Similarity Detection Algorithm (SSDA) is adopted to calculate the dissimilarity between different video images, recorded as DSI_i(u, v).
C. For vehicle-mounted communication unit i, the dissimilarity DSI_i(u, v) between different video images at the imaging center is

$$\mathrm{DSI}_i(u,v)=\sum_{p=1}^{m}\sum_{q=1}^{n}\bigl|f(u+p-1,\,v+q-1)-t(p,q)\bigr| \tag{1}$$

Wherein: DSI_i(u, v) serves as the matching measure, i.e. the dissimilarity. t(p, q) denotes an arbitrary pixel in the template. (u, v) denotes not the coordinates of the center of the template-image overlap, but the coordinates of the top-left pixel of the overlap; the size of the template is m × n.
D. If a pattern consistent with the template exists at image position (u, v), the DSI_i(u, v) value is small; otherwise it is large. In particular, when the template and the image overlap do not coincide at all, the absolute differences between each pixel in the template and the corresponding pixel of the overlap, summed in order, grow sharply. Therefore, during the accumulation, if the running sum of absolute differences exceeds a certain threshold, the position is considered to hold no pattern consistent with the template, and the computation moves on to the next position's DSI_i(u, v). Since computing DSI_i(u, v) involves only additions and subtractions, and the computation is abandoned early at most positions, the computation time can be shortened greatly and the matching speed improved. To raise the matching rate as much as possible while keeping the matching stable, this patent adopts a dynamic threshold:
$$\mathrm{thresh}(n)=k_1\theta+k_2 n \tag{2}$$
$$\theta=k_1\theta+k_2 N \tag{3}$$

where $0\le k_1<1$ and $0\le k_2<N$; thresh(n) is the n-th selected threshold, $k_1$ and $k_2$ are the weighting coefficients, $\theta$ is the initial threshold, and N is the maximum number of threshold selections.
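To make the early-abandonment idea of steps B-D concrete, the following Python sketch implements one plausible reading of the SSDA search with the dynamic threshold thresh(n) = k1·θ + k2·n. The parameter values, and the strategy of relaxing the threshold up to N times until some position survives, are illustrative assumptions rather than details fixed by the patent:

```python
import numpy as np

def ssda_dissimilarity(image, template, u, v, thresh):
    """Accumulate |f - t| over the overlap at (u, v); abandon early once the
    running sum exceeds thresh, since this position can no longer match."""
    m, n = template.shape
    window = image[u:u + m, v:v + n].astype(np.int64)
    dsi = 0
    for p in range(m):
        for q in range(n):
            dsi += abs(int(window[p, q]) - int(template[p, q]))
            if dsi > thresh:
                return None  # no pattern consistent with the template here
    return dsi

def ssda_match(image, template, theta=1000.0, k1=0.5, k2=100.0, N=10):
    """Scan all candidate positions; if every position is abandoned, relax
    the threshold (thresh(n) = k1*theta + k2*n) up to N times -- an assumed
    interpretation of the patent's dynamic-threshold rule."""
    m, n = template.shape
    H, W = image.shape
    template = template.astype(np.int64)
    for step in range(1, N + 1):
        thresh = k1 * theta + k2 * step
        best = None
        for u in range(H - m + 1):
            for v in range(W - n + 1):
                dsi = ssda_dissimilarity(image, template, u, v, thresh)
                if dsi is not None and (best is None or dsi < best[0]):
                    best = (dsi, u, v)
        if best is not None:
            return best[1], best[2]  # (u, v) of the best surviving match
    return None
```

Because most positions are abandoned after only a few additions, the inner loop rarely runs to completion, which is exactly the speed advantage the description claims over full-window matching.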
E. From the DSI_i(u, v) values obtained by the high-speed template matching method, the position differences of similar pictures across the different video images are further calculated, and the position differences of the translation model are recorded as $\Delta x_i$ and $\Delta y_i$.
F. According to the matched template points, the video images to be spliced are translated by $\Delta x_i$ and $\Delta y_i$ respectively along the designated directions, forming a spliced video image; the translated and recombined part is called the transition region T.
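A small sketch of step F, under the simplifying assumptions that both offsets are non-negative and the inputs are single-channel grayscale arrays; the naive averaging of the transition region here is only a stand-in that the fusion of step G replaces:

```python
import numpy as np

def translate_and_stack(f1, f2, dx, dy):
    """Place f1 at the canvas origin and f2 shifted by (dx, dy); the region
    covered by both images is the transition region T (assumes dx, dy >= 0
    and grayscale inputs -- simplifying assumptions for illustration)."""
    h1, w1 = f1.shape
    h2, w2 = f2.shape
    H, W = max(h1, dy + h2), max(w1, dx + w2)
    canvas = np.zeros((H, W), dtype=np.float32)
    count = np.zeros((H, W), dtype=np.uint8)
    canvas[:h1, :w1] += f1
    count[:h1, :w1] += 1
    canvas[dy:dy + h2, dx:dx + w2] += f2
    count[dy:dy + h2, dx:dx + w2] += 1
    in_T = count == 2                  # transition region T
    canvas[in_T] /= 2.0                # naive average; step G replaces this
    return canvas, in_T
```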
G. Since a splicing seam in the transition region is hard to avoid and degrades the video image, the gradual-in and gradual-out fusion algorithm is applied within the transition region T for beautification and fusion, yielding the final spliced target video image. The gradual-in and gradual-out fusion algorithm processes the gray values of the pixels in the overlapping region, calculated as:

$$\mathrm{FUS}_i(x,y)=\begin{cases}f_1(x,y), & (x,y)\in f_1\\ w_1 f_1(x,y)+w_2 f_2(x,y), & (x,y)\in f_1\cap f_2\\ f_2(x,y), & (x,y)\in f_2\end{cases} \tag{4}$$

In the formula: FUS_i(x, y) represents the gray value of the fused image pixel; f_1(x, y) represents the gray value of a pixel of the left image to be spliced; f_2(x, y) represents the gray value of a pixel of the right image to be spliced; w_1, w_2 are the corresponding weights, with $w_1+w_2=1$, $0<w_1<1$, $0<w_2<1$. According to the gradual weighting scheme, w_1 and w_2 are calculated as:

$$w_1=\frac{x_r-x_i}{x_r-x_l} \tag{5}$$
$$w_2=\frac{x_i-x_l}{x_r-x_l} \tag{6}$$

In the formula: $x_i$ is the abscissa of the current pixel point; $x_l$ is the left boundary of the overlap region; $x_r$ is the right boundary of the overlap region.
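The per-column weighting of formulas (4)-(6) vectorizes directly. This sketch assumes the two images are already aligned on a common canvas of equal shape and overlap in the column interval [x_l, x_r):

```python
import numpy as np

def fade_fusion(f1, f2, x_l, x_r):
    """Gradual-in/gradual-out fusion over the overlap columns [x_l, x_r):
    w1 = (x_r - x_i) / (x_r - x_l) decays from 1 to 0 across the overlap,
    w2 = 1 - w1, matching formulas (4)-(6); f1 and f2 must be pre-aligned
    arrays of the same shape (an assumption of this sketch)."""
    fused = f1.astype(np.float32).copy()
    xs = np.arange(x_l, x_r, dtype=np.float32)
    w1 = (x_r - xs) / (x_r - x_l)          # weight of the left image
    w2 = 1.0 - w1                          # weight of the right image
    fused[:, x_l:x_r] = w1 * f1[:, x_l:x_r] + w2 * f2[:, x_l:x_r]
    fused[:, x_r:] = f2[:, x_r:]           # right of the overlap: f2 only
    return fused
```

Because the weights vary linearly with the column index, the seam luminance changes smoothly from the left image to the right image, which is what removes the visible splicing seam in T.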
Step two, directly transmitting the video packet after real-time splicing of the vehicle-mounted equipment to other vehicle-mounted equipment;
A. The vehicle-mounted client and the vehicle-mounted server are started. A socket is created on the client, its attributes are set, information such as the IP address and port is bound to the socket, and a connection to the server is established.
B. Then, the video data is compressed and transmitted in real time based on the H.264 algorithm, whose principle is as follows. Macroblocks are divided first: in adjacent pictures, generally only about 10% of the pixels differ, luminance differences are no more than 2%, and chrominance differences are within 1%, so such pictures can be grouped and then compressed according to the intra-frame and inter-frame compression principles. The H.264 encoder selectively compresses the video into 3 frame formats according to the amplitude of pixel-value changes between frames. When compressing video, the encoder fully compresses a picture while retaining the original basic data, forming an I-frame of the H.264 stream; image frames that change little relative to the I-frame are compressed incrementally against it to form P-frames. When the frame content is almost unchanged, bidirectional predictive coding is adopted dynamically to resist packet loss and reduce bandwidth: the preceding I- or P-frame and the following P-frame serve as reference frames, and the compressed result forms a B-frame. Because P-frames and B-frames depend on I-frames, losing I-frame data would leave subsequent P- and B-frames undecodable and propagate errors through the video; therefore group-of-pictures (GOP) coding is introduced, i.e. a full I-frame is coded every few frames, and the frames between two I-frames form one GOP, so that even if some frames are lost, only playback of the current GOP is affected. The residual data then undergoes DCT transformation, and finally lossless compression is performed with the lossless coding technique CABAC.
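The patent does not specify an encoder implementation. As one hedged illustration, the GOP-length adjustment (more I-frames when occlusion is predicted) could be realized with the stock libx264 encoder via ffmpeg, whose -g option sets the I-frame interval; the file names and GOP values below are placeholders:

```python
import subprocess

def encode_h264(src, dst, gop_size):
    """Re-encode src with libx264; a smaller GOP (-g) inserts I-frames more
    often, so a lost packet corrupts fewer dependent P/B-frames."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264",
        "-g", str(gop_size),   # frames per GOP (I-frame interval)
        "-bf", "2",            # allow up to two consecutive B-frames
        dst,
    ], check=True)

# Channel model predicts occlusion: shorten the GOP for robustness.
encode_h264("panorama.mp4", "panorama_robust.mp4", gop_size=15)
```

The trade-off is the one the description names: shorter GOPs cost bandwidth but limit the damage of a lost I-frame to a smaller run of frames.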
C. After each frame of the video image is compressed in real time, a socket is created and configured at the server side, the IP address and port are bound, and listening for and accepting client connections begins. The server first obtains the position of the client vehicle and judges, from the selected channel model, whether occlusion exists along the transmission path; if so, the compression rule is adjusted in time and the number of I-frames is increased to guarantee transmission reliability. Meanwhile, the data packets are received and decoded in real time, and the video image information is obtained and played. After the transmission process is finished, the network connection is closed. At this point, the video direct transmission between the vehicle-mounted devices is complete.
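A minimal socket sketch of the direct-transmission loop in steps A-C, assuming TCP and a simple 4-byte length prefix per compressed frame; the framing scheme and the decode_and_play hook are assumptions, since the patent only specifies sockets, binding, and real-time decode-and-play:

```python
import socket
import struct

def send_frame(sock: socket.socket, packet: bytes) -> None:
    """Client side: length-prefix each compressed H.264 frame so the server
    can split the TCP byte stream back into packets."""
    sock.sendall(struct.pack("!I", len(packet)) + packet)

def recv_exact(conn: socket.socket, n: int):
    """Read exactly n bytes from the stream, or None if the peer closed it."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return None
        buf += chunk
    return buf

def serve(host: str = "0.0.0.0", port: int = 50000) -> None:
    """Server side: bind, listen, and hand each received packet to a decoder
    (decode_and_play is a hypothetical playback hook, not a real API)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _addr = srv.accept()
    try:
        while True:
            header = recv_exact(conn, 4)
            if header is None:
                break                   # client finished transmitting
            (length,) = struct.unpack("!I", header)
            payload = recv_exact(conn, length)
            if payload is None:
                break
            # decode_and_play(payload)  # hypothetical decode/playback hook
    finally:
        conn.close()
        srv.close()
```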
The above description is only an exemplary embodiment of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall be included in its scope of protection.

Claims (1)

1. A vehicle-mounted video direct transmission method based on V2X is characterized by comprising the following steps:
step one, splicing vehicle-mounted multi-angle video images in real time, and beautifying and fusing;
step two, directly transmitting the video packets spliced in real time by the vehicle-mounted equipment to other vehicle-mounted equipment, wherein the specific steps of step one are as follows:
A. within the coverage range of the vehicle-mounted imaging equipment, a plurality of imaging devices capture video images that together cover the panorama around the vehicle; the images are mathematically modeled and recorded as f(x, y), and the resolution and visibility of the imaging devices are determined; on the premise that the maximum visible relative distance between the vehicle-mounted equipment and the imaging devices across the panorama has been measured, the imaging system of the vehicle-mounted equipment continuously receives multi-angle real-time video images;
B. the real-time video images received from the plurality of imaging devices are gathered at an imaging center, a high-speed template matching method is applied repeatedly, and the Sequential Similarity Detection Algorithm (SSDA) is adopted to calculate the dissimilarity between different video images, recorded as DSI_i(u, v);
C. for vehicle-mounted communication unit i, the dissimilarity between different video images at the imaging center is

$$\mathrm{DSI}_i(u,v)=\sum_{p=1}^{m}\sum_{q=1}^{n}\bigl|f(u+p-1,\,v+q-1)-t(p,q)\bigr|$$

wherein: DSI_i(u, v) serves as the matching measure, i.e. the dissimilarity; t(p, q) denotes an arbitrary pixel in the template; (u, v) are the pixel coordinates of the top-left corner of the overlap between the template and the image; and the size of the template is m × n;
D. to raise the matching rate as much as possible while keeping the matching stable, the absolute differences between each pixel in the template and the corresponding pixel of the image overlap are accumulated against a dynamic threshold:

$$\mathrm{thresh}(n)=k_1\theta+k_2 n$$
$$\theta=k_1\theta+k_2 N$$

wherein $0\le k_1<1$ and $0\le k_2<N$; thresh(n) is the n-th selected threshold, $k_1$ and $k_2$ are weighting coefficients, $\theta$ is the initial threshold, and N is the maximum number of threshold selections;
E. from the DSI_i(u, v) values obtained by the high-speed template matching method, the position differences of similar pictures on different video images are further calculated, and the position differences of the translation model are recorded as $\Delta x_i$ and $\Delta y_i$;
F. according to the matched template points, the video images to be spliced are translated by $\Delta x_i$ and $\Delta y_i$ respectively along the designated directions, forming a spliced video image, wherein the translated and recombined part is called the transition region T;
G. and G, beautifying and fusing the transition region T by using a gradual-in gradual-out fusion algorithm to obtain a final spliced target video image, wherein the gradual-in gradual-out fusion algorithm is used for processing gray values of pixel points in an overlapped region, and the calculation formula of the gradual-in gradual-out fusion algorithm in the step G is as follows:
Figure FDA0002482346050000021
in the formula: FUSi(x, y) represents the gray value of the image pixel point after fusion; f. of1(x, y) represents the gray value of the pixel point of the left image to be spliced; f. of2(x, y) represents the gray value of the pixel point of the right image to be spliced; w is a1、w2Is a corresponding weight and has w1+w2=1,0<w1<1,0<w2Less than 1; according to the method of progressive addition and subtraction, w1、w2The calculation formula of (2) is as follows:
Figure FDA0002482346050000022
Figure FDA0002482346050000023
in the formula: $x_i$ is the abscissa of the current pixel point; $x_l$ is the left boundary of the overlap region; $x_r$ is the right boundary of the overlap region; the specific steps of step two are as follows:
A. the vehicle-mounted client and the vehicle-mounted server are started; a socket is created on the client, the socket attributes are set, information including the IP address and port is bound to the socket, and a connection to the server is established;
B. then, the video data is compressed and transmitted in real time based on an H.264 algorithm;
C. after each frame of the video image is compressed in real time, a socket is created and configured at the server side, the IP address and port are bound, and listening for and accepting client connections begins; the server first obtains the position of the client vehicle, judges from the selected channel model whether occlusion exists along the transmission path, and, if so, adjusts the compression rule in time, increasing the number of I-frames in the compressed H.264 stream to guarantee transmission reliability; meanwhile, the data packets are received and decoded in real time, and the video image information is obtained and played; after the transmission process is finished, the network connection is closed, at which point the video direct transmission between the vehicle-mounted devices is complete.
CN201910295668.6A 2019-04-12 2019-04-12 Vehicle-mounted video direct transmission method based on V2X Active CN109889792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910295668.6A CN109889792B (en) 2019-04-12 2019-04-12 Vehicle-mounted video direct transmission method based on V2X


Publications (2)

Publication Number Publication Date
CN109889792A CN109889792A (en) 2019-06-14
CN109889792B (en) 2020-07-03

Family

ID=66937265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910295668.6A Active CN109889792B (en) 2019-04-12 2019-04-12 Vehicle-mounted video direct transmission method based on V2X

Country Status (1)

Country Link
CN (1) CN109889792B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565606B (en) * 2020-12-02 2022-04-01 鹏城实验室 Panoramic video intelligent transmission method and equipment and computer storage medium
CN115705781A (en) * 2021-08-12 2023-02-17 中兴通讯股份有限公司 Vehicle blind area detection method, vehicle, server and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102244680A (en) * 2011-07-04 2011-11-16 东华大学 Generation method of panoramic video code stream based on body area sensing array
CN106314424A (en) * 2016-08-22 2017-01-11 乐视控股(北京)有限公司 Overtaking assisting method and device based on automobile and automobile
CN107274346A (en) * 2017-06-23 2017-10-20 中国科学技术大学 Real-time panoramic video splicing system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9948930B2 (en) * 2016-05-17 2018-04-17 Arris Enterprises Llc Template matching for JVET intra prediction


Also Published As

Publication number Publication date
CN109889792A (en) 2019-06-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant