CN114640801A - Vehicle-end panoramic view angle auxiliary driving system based on image fusion - Google Patents


Info

Publication number
CN114640801A
Authority
CN
China
Prior art keywords
image
panoramic
fisheye
video
image processing
Prior art date
Legal status
Granted
Application number
CN202210124847.5A
Other languages
Chinese (zh)
Other versions
CN114640801B (en)
Inventor
仇翔
赵嘉楠
应皓哲
禹鑫燚
欧林林
魏岩
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202210124847.5A
Publication of CN114640801A
Application granted
Publication of CN114640801B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums


Abstract

A vehicle-end panoramic-view driving assistance system based on image fusion comprises: an image acquisition module that captures 360° road information around the vehicle body, an embedded image processing device that processes the images output by the acquisition module in real time, and an image display device that shows the panoramic image at the vehicle end. The embedded image processing device is physically connected to the other two devices by cables. Three fisheye cameras with 180° fields of view, mounted at different positions on the vehicle, collect the images, and a video encoder and a video capture card merge the multiple analog video channels into a single stream. The embedded image processing device passes the resulting digital video through the fisheye image processing module and the panoramic image splicer to stitch the three viewing angles into one panoramic image, which the Web panorama player then displays on the image display device. The invention eliminates the blind areas in the field of view when a large special vehicle is driven.

Description

Vehicle end panoramic view angle auxiliary driving system based on image fusion
Technical Field
The invention relates to the field of safe driving of large special vehicles, and in particular to a vehicle-end panoramic-view driving assistance system based on image fusion.
Background
With the steady progress of urbanization in China in recent years, more and more large special vehicles have appeared on urban roads, including but not limited to city buses, cement tankers, and muck trucks. These vehicles greatly facilitate daily production and life. However, their long, high bodies give them wide blind areas in the driver's field of view, and serious traffic accidents can occur on the road, particularly when turning, posing a major safety hazard to other vehicles and to pedestrians. Major cities across the country have already adopted the policy of requiring large special vehicles to make a long stop before turning right, yet traffic accidents caused by their visual blind areas still occur. Therefore, to improve the driving safety of large special vehicles and reduce the danger to other road users as much as possible, it is necessary to reduce or even completely eliminate the blind areas in the field of view when such a vehicle travels on the road.
To this end, various systems for reducing blind areas during driving have been studied and developed. Chinese patent CN112776729A (2021-05-11), a vehicle-mounted system with probes mounted in front of the rearview mirrors, addresses the blind-area problem by installing foreign-body detection components on the vehicle body, but its design is complex and its cost is high. Chinese patent CN213948279U (2021-08-13) addresses the problem by mounting a fixed display in the vehicle and adjusting its fixed angle, but the covered blind area is fixed, the display angle must be adjusted manually, and the environment around the vehicle body cannot be shown completely.
Disclosure of Invention
To overcome these problems in the prior art, the invention provides a vehicle-end panoramic-view driving assistance system based on image fusion, which aims to reduce or even completely eliminate the visual blind areas of large special vehicles and thereby improve their driving safety.
The vehicle-end panoramic-view driving assistance system based on image fusion of the invention comprises an image acquisition device, an embedded image processing device, and an image display device.
The image acquisition equipment adopts three fisheye cameras to acquire a 360-degree road environment around a vehicle body, integrates output images of the three fisheye cameras into one analog video by using a video encoder, converts the analog video into a digital video by using a video acquisition card, and transmits the digital video to a fisheye image processing module in the embedded image processing equipment through a cable for image processing;
the embedded image processing equipment comprises a fisheye image processing module; a Web panorama player; a panoramic image splicer;
The fisheye image processing module processes the original fisheye images output by the video capture card. Because a fisheye lens captures a very wide field of view, it severely distorts the pixel information toward the edges of the image, so the original fisheye image must be corrected into a ring view by longitude-latitude unfolding to improve the final panoramic stitching. The pixel coordinates in the fisheye image undergo a series of transformations: from the 2D Cartesian coordinate system into a spherical Cartesian coordinate system, then into longitude-latitude coordinates, after which the pixels are remapped according to those longitude-latitude coordinates, converting the fisheye image into a ring view. The specific steps are as follows:
1) after the original fisheye image is obtained, a circular mask function written from the circle center and radius of the three-view fisheye imaging crops out the target image area; the pixel coordinates of the cropped area lie in the range of formula (1):
x ∈ [0, cols-1], y ∈ [0, rows-1] (1)
where x and y are the abscissa and ordinate of a pixel in the cropped image, cols is the width of the original fisheye image, and rows is its height;
2) to control the resolution of the video finally produced by image fusion, the size of the picture output by step 1) must be controlled;
3) convert the pixel coordinate point (x, y) of the cropped image area from the 2D Cartesian coordinate system to standard coordinates A(x_A, y_A); the conversion relationship is shown in formula (2):
x_A = (2x - cols)/cols, y_A = (2y - rows)/rows (2)
where x and y are the abscissa and ordinate of a pixel in the cropped image, cols is the width of the original fisheye image, and rows is its height;
4) convert the standard coordinates A(x_A, y_A) into the three-dimensional Cartesian coordinates P(x_p, y_p, z_p) of a point on the sphere; the conversion formulas are shown in formulas (3) and (4):
P(p, φ, θ) (3)
p = r, φ = arctan(y_A/x_A), θ = √(x_A² + y_A²)/F (4)
where p is the radial distance of the line OP joining a point on the sphere to the origin O, θ is the angle between OP and the z axis, φ is the angle between the projection of OP onto the xOy plane and the x axis, r is the radius of the sphere, and F is the focal length of the fisheye camera; the spherical coordinates are converted into Cartesian coordinates according to formula (5):
x_p = p·sinθ·cosφ, y_p = p·sinθ·sinφ, z_p = p·cosθ (5)
5) convert the spatial coordinates of the point P into longitude-latitude coordinates; the conversion relationship is shown in formula (6):
latitude = arcsin(z_p/p), longitude = arctan(y_p/x_p) (6)
where x_p, y_p, z_p are the coordinates of the point P, latitude is the latitude coordinate, and longitude is the longitude coordinate;
6) convert and map the longitude-latitude coordinates of step 5) into the pixel coordinates (x_o, y_o) of the unfolded map; the mapping relationship is shown in formula (7):
x_o = (longitude/(2π) + 1/2)·cols, y_o = (latitude/π + 1/2)·rows (7)
where x_o is the pixel abscissa and y_o the pixel ordinate in the unfolded image;
7) after the pixel mapping is completed, black gap points that received no mapped pixel appear in the picture; a cubic interpolation algorithm fills these black regions so that a complete image is output.
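The unfolding steps above can be sketched end-to-end in NumPy. The version below is an illustrative reconstruction rather than the patent's implementation: it assumes an equidistant fisheye model with a normalized focal length `F` (an invented value), and it samples backwards from the output grid, which sidesteps the black-gap filling of step 7.

```python
import numpy as np

def fisheye_to_latlong(img, F=0.5):
    """Unfold a square fisheye image into a latitude-longitude (ring) view.

    Assumes an equidistant fisheye model; `F` is a normalized focal
    length chosen for illustration, not taken from the patent.
    """
    rows, cols = img.shape[:2]
    out = np.zeros_like(img)
    # step 6 inverted: target pixel grid -> longitude/latitude of the unfolded view
    xo, yo = np.meshgrid(np.arange(cols), np.arange(rows))
    longitude = (xo / cols) * 2 * np.pi - np.pi          # [-pi, pi)
    latitude = (yo / rows) * np.pi - np.pi / 2           # [-pi/2, pi/2)
    # step 5 inverted: latitude/longitude -> point on the unit sphere
    xp = np.cos(latitude) * np.cos(longitude)
    yp = np.cos(latitude) * np.sin(longitude)
    zp = np.sin(latitude)
    # step 4 inverted: sphere point -> fisheye image coordinates
    theta = np.arccos(np.clip(zp, -1, 1))                # angle from the z axis
    phi = np.arctan2(yp, xp)                             # azimuth in the xOy plane
    r_img = F * theta                                    # equidistant: radius = F * theta
    xa = r_img * np.cos(phi)                             # normalized coords, roughly [-1, 1]
    ya = r_img * np.sin(phi)
    # step 3 inverted: normalized coords -> source pixel coordinates
    xs = ((xa + 1) / 2 * (cols - 1)).astype(int)
    ys = ((ya + 1) / 2 * (rows - 1)).astype(int)
    # step 1: only sample pixels that fall inside the fisheye circle
    valid = (xs >= 0) & (xs < cols) & (ys >= 0) & (ys < rows) & (r_img <= 1)
    out[yo[valid], xo[valid]] = img[ys[valid], xs[valid]]
    return out

gray = (np.arange(64 * 64, dtype=np.uint8) % 256).reshape(64, 64)
ring = fisheye_to_latlong(gray)
print(ring.shape)
```

Because the mapping is evaluated from the output side, every output pixel is assigned directly; a forward mapping, as described in the text, would instead leave the unmapped gap points that step 7 fills.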
The panoramic image splicer stitches the three fisheye images processed by the fisheye image processing module into a panorama. To keep the stitched field of view continuous, the fisheye cameras in the three directions must be numbered in a fixed order, and that order is kept unchanged in all subsequent operations. During image processing, the SIFT algorithm computes the feature points of each image, which serve as local image descriptors invariant to scale, scaling, rotation, and affine transformation. Matching feature points between adjacent images are then found and further screened with the RANSAC method, so that the homography matrix is computed from the mapping relationship between the matched points. Finally, the images undergo a perspective transformation according to the computed homography matrix and are stitched together, realizing vehicle-end panoramic image stitching. The specific steps are as follows:
1) when the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are numbered in a fixed sequence, and the numbering is kept consistent for subsequent images;
2) the feature points of each image are computed with the SIFT algorithm provided by OpenCV and serve as local image descriptors invariant to scale, scaling, rotation, and affine transformation;
3) the fisheye images of the three viewing angles are coarsely matched by a Euclidean distance measure; the feature points of two images are then screened by the SIFT matching rule that compares the nearest-neighbor Euclidean distance with the second-nearest-neighbor distance, and a feature point is accepted as a matching point when the ratio of the nearest to the second-nearest distance is less than 0.8;
4) the RANSAC method further screens the coarse matches of step 3) to remove mismatched points, improving the precision of subsequent processing; the mapping relationship between the feature points is then found and the homography matrix computed;
5) the homography matrix computed in step 4) applies a perspective transformation to the fisheye images processed by the fisheye image processing module; the transformed images are stitched and finally synthesized into a video stream, realizing panoramic stitching.
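The nearest/second-nearest ratio test of step 3) can be sketched independently of any particular SIFT implementation. In this illustrative snippet the 128-dimensional descriptors are random stand-ins for real SIFT feature vectors:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a match only when the nearest-neighbor Euclidean distance is
    less than `ratio` times the second-nearest distance (step 3's rule)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(50, 128)).astype(np.float32)
# descriptors in image A: copies of some B descriptors plus small noise,
# so each should survive the ratio test and match its counterpart in B
desc_a = desc_b[:10] + rng.normal(scale=0.01, size=(10, 128)).astype(np.float32)
matches = ratio_test_matches(desc_a, desc_b)
print(len(matches))
```

The test discards ambiguous features: a point whose best and second-best candidates are nearly equidistant gives no reliable correspondence, which is why only clear winners are passed on to RANSAC in step 4).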
The Web panorama player displays the panoramic image output by the panoramic image splicer on a web page. To reduce the delay of the video display, the player is built as a front-end player with the rtc.js player plug-in. To let the front end play panoramic video, the player is implemented with the three.js + video tag + rtc.js technique: a spherical model is built with three.js, and the video tag is used as the rendering material of the sphere surface, so that the panoramic video is projected onto the sphere. A browser installed on the embedded image processing device then allows the panoramic image to be browsed on the image display device;
the image display device is in physical connection with the embedded image processing device and is used for displaying the panoramic image presented by the Web player.
Compared with the prior art, the invention has the following beneficial effects. The three fisheye cameras capture 360° environment information around the vehicle body; the video encoder and video capture card merge the three analog video channels into one digital video stream that is fed into the embedded image processing device for further processing, which greatly saves port resources of the embedded device and keeps the overall design cost low. At the same time, an embedded device with an integrated AI chip improves real-time video processing and image output, and the self-designed panorama player plays the panoramic video, so the wide visual blind areas of a running large special vehicle are greatly reduced or even completely eliminated, giving a good driving-assistance effect.
Drawings
FIG. 1 is an overall framework diagram of the system of the present invention;
FIG. 2 is a schematic view of the camera mounting of the present invention;
FIG. 3 is a flow chart of the fisheye image processing of the invention;
FIG. 4 is a process flow diagram of image stitching fusion in accordance with the present invention.
Detailed Description
The invention is described in further detail below through examples, in conjunction with the accompanying drawings:
As shown in FIG. 1, the vehicle-end panoramic-view driving assistance system based on image fusion comprises an image acquisition device, an embedded image processing device, and an image display device. The embedded image processing device is physically connected to the image acquisition device and the image display device by cables. The image acquisition device merges multiple analog video channels into one digital video channel by hardware encoding, using a video encoder, a video capture card, and similar hardware; this reduces the amount of data transmitted while feeding the digital video into the embedded image processing device for processing. After receiving the digital input, the embedded image processing device converts the severely distorted original fisheye images into ring views by the coordinate transformation method, obtaining richer image information, and then stitches the three ring-view images into a video stream. Finally, the Web panorama player displays the stitched panoramic image on the image display device, greatly reducing or even completely eliminating the wide visual blind areas of a running large special vehicle and improving its driving safety.
As shown in FIG. 2, the fisheye cameras used in the invention have a 180° field of view. To capture 360° environment information around the vehicle body and obtain a good stitching result, the three fisheye cameras must be installed at different positions at the same height, spaced 120° apart. Following the installation diagram of FIG. 2, and adjusting the mounting positions of the three cameras on the large special vehicle according to the final displayed image, 360° panoramic environment information around the vehicle body is obtained, yielding a good driving-assistance effect.
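As a quick sanity check of this layout (not part of the patent), a few lines of Python confirm that three 180° cameras whose optical axes are 120° apart cover every direction around the vehicle, with each adjacent pair sharing a roughly 60°-wide overlap band:

```python
# Each camera covers +/-90 degrees around its optical axis; axes are 120 degrees apart.
centers = [0, 120, 240]
covered_by = []
for angle in range(360):
    cams = [c for c in centers
            if min(abs(angle - c) % 360, 360 - abs(angle - c) % 360) <= 90]
    covered_by.append(len(cams))

print(min(covered_by))                       # every direction is seen by at least one camera
overlap = sum(1 for n in covered_by if n >= 2)
print(overlap)                               # total width of the overlap bands, in degrees
```

The overlap bands between neighboring views are what give the panoramic image splicer enough shared features to match and stitch the three images.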
As shown in FIG. 3, the fisheye image processing module obtains the best output image by applying a series of techniques to the pixel coordinates of the fisheye image: coordinate transformation, pixel mapping, and gap-point filling. The main steps are as follows:
1) after the original fisheye image is obtained, a circular mask function written from the circle center and radius of the three-view fisheye imaging crops out the target image area; the pixel coordinates of the cropped area lie in the range of formula (1):
x ∈ [0, cols-1], y ∈ [0, rows-1] (1)
where x and y are the abscissa and ordinate of a pixel in the cropped image, cols is the width of the original fisheye image, and rows is its height;
2) to control the resolution of the video finally produced by image fusion, the size of the picture output by step 1) must be controlled;
3) convert the pixel coordinate point (x, y) of the cropped image area from the 2D Cartesian coordinate system to standard coordinates A(x_A, y_A); the conversion relationship is shown in formula (2):
x_A = (2x - cols)/cols, y_A = (2y - rows)/rows (2)
where x and y are the abscissa and ordinate of a pixel in the cropped image, cols is the width of the original fisheye image, and rows is its height;
4) convert the standard coordinates A(x_A, y_A) into the three-dimensional Cartesian coordinates P(x_p, y_p, z_p) of a point on the sphere; the conversion formulas are shown in formulas (3) and (4):
P(p, φ, θ) (3)
p = r, φ = arctan(y_A/x_A), θ = √(x_A² + y_A²)/F (4)
where p is the radial distance of the line OP joining a point on the sphere to the origin O, θ is the angle between OP and the z axis, φ is the angle between the projection of OP onto the xOy plane and the x axis, r is the radius of the sphere, and F is the focal length of the fisheye camera; the spherical coordinates are converted into Cartesian coordinates according to formula (5):
x_p = p·sinθ·cosφ, y_p = p·sinθ·sinφ, z_p = p·cosθ (5)
5) convert the spatial coordinates of the point P into longitude-latitude coordinates; the conversion relationship is shown in formula (6):
latitude = arcsin(z_p/p), longitude = arctan(y_p/x_p) (6)
where x_p, y_p, z_p are the coordinates of the point P, latitude is the latitude coordinate, and longitude is the longitude coordinate;
6) convert and map the longitude-latitude coordinates of step 5) into the pixel coordinates (x_o, y_o) of the unfolded map; the mapping relationship is shown in formula (7):
x_o = (longitude/(2π) + 1/2)·cols, y_o = (latitude/π + 1/2)·rows (7)
where x_o is the pixel abscissa and y_o the pixel ordinate in the unfolded image;
7) after the pixel mapping is completed, black gap points that received no mapped pixel appear in the picture; a cubic interpolation algorithm fills these black regions so that a complete image is output.
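The gap-filling of step 7) can be illustrated with a simpler stand-in for the cubic interpolation the text names: each unmapped (zero) pixel is replaced by the mean of its non-zero 4-neighbors. This is only a sketch of the idea of filling forward-mapping holes, not the patent's algorithm:

```python
import numpy as np

def fill_gaps(img, passes=2):
    """Fill unmapped (zero) pixels from the mean of their non-zero
    4-neighbors; a simple stand-in for the cubic interpolation described
    in the text, used here to illustrate the gap-filling step."""
    out = img.astype(np.float64)
    for _ in range(passes):
        holes = out == 0
        if not holes.any():
            break
        padded = np.pad(out, 1)
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        counts = (neigh > 0).sum(axis=0)
        sums = neigh.sum(axis=0)
        fill = np.divide(sums, counts, out=np.zeros_like(sums),
                         where=counts > 0)
        out[holes] = fill[holes]
    return out.astype(img.dtype)

# a forward-mapped image with isolated black gap points
img = np.full((8, 8), 100, dtype=np.uint8)
img[3, 3] = 0
img[5, 6] = 0
filled = fill_gaps(img)
print(filled[3, 3], filled[5, 6])
```

Cubic interpolation would weight a larger neighborhood with a smoother kernel, but the structure is the same: every hole is reconstructed from the mapped pixels around it.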
As shown in FIG. 4, the panoramic image splicer performs image feature point extraction, matching of feature points between adjacent images, finding the mapping relationship between feature points, homography matrix computation, perspective transformation, and image stitching. The main steps are as follows:
1) when the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are numbered in a fixed sequence, and the numbering is kept consistent for subsequent images;
2) the feature points of each image are computed with the SIFT algorithm provided by OpenCV and serve as local image descriptors invariant to scale, scaling, rotation, and affine transformation;
3) the fisheye images of the three viewing angles are coarsely matched by a Euclidean distance measure; the feature points of two images are then screened by the SIFT matching rule that compares the nearest-neighbor Euclidean distance with the second-nearest-neighbor distance, and a feature point is accepted as a matching point when the ratio of the nearest to the second-nearest distance is less than 0.8;
4) the RANSAC method further screens the coarse matches of step 3) to remove mismatched points, improving the precision of subsequent processing; the mapping relationship between the feature points is then found and the homography matrix computed;
5) the homography matrix computed in step 4) applies a perspective transformation to the fisheye images processed by the fisheye image processing module; the transformed images are stitched and finally synthesized into a video stream, realizing panoramic stitching.
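Estimating the homography from matched points and applying the perspective transformation, as in steps 4) and 5), can be sketched with a direct linear transform in NumPy. In practice OpenCV's `findHomography` and `warpPerspective` would be used; this illustrative version works from exact correspondences and omits the RANSAC outlier-rejection loop:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from four or more
    point pairs (direct linear transform; no RANSAC screening here)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null-space vector of the stacked constraints
    _, _, vt = np.linalg.svd(np.array(rows, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map one point through the homography (the perspective transformation)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# four corner correspondences describing a pure translation by (10, 5)
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 5), (110, 5), (110, 105), (10, 105)]
H = homography_dlt(src, dst)
print(apply_h(H, (50, 50)))
```

With real matched SIFT features the correspondences are noisy and contaminated by mismatches, which is why step 4) wraps this estimation in RANSAC: homographies are fitted on random minimal subsets and the one with the most inliers is kept.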
The embodiments described in this specification merely illustrate implementations of the inventive concept. The scope of the invention should not be regarded as limited to the specific forms set forth in the embodiments, but also covers the equivalents that will occur to those skilled in the art on the basis of the inventive concept.

Claims (3)

1. A vehicle-end panoramic-view driving assistance system based on image fusion, characterized by comprising: an image acquisition module for acquiring 360° road environment information around a vehicle body, an embedded image processing device, and an image display device;
the image acquisition equipment adopts three fisheye cameras to acquire a 360-degree road environment around a vehicle body, integrates output images of the three fisheye cameras into one analog video by using a video encoder, converts the analog video into a digital video by using a video acquisition card, and transmits the digital video to a fisheye image processing module in the embedded image processing equipment through a cable for image processing;
the embedded image processing device comprises a fisheye image processing module, a panoramic image splicer, and a Web panorama player;
the fisheye image processing module processes the original fisheye images output by the video capture card; the original fisheye image is corrected into a ring view by longitude-latitude unfolding to improve the final panoramic stitching: the pixel coordinates in the fisheye image undergo a series of transformations from the 2D Cartesian coordinate system into a spherical Cartesian coordinate system, the spherical Cartesian coordinates are converted into longitude-latitude coordinates, and the pixels are then remapped according to the longitude-latitude coordinates, converting the fisheye image into a ring view; the specific steps are as follows:
1) after the original fisheye image is obtained, a circular mask function written from the circle center and radius of the three-view fisheye imaging crops out the target image area; the pixel coordinates of the cropped area lie in the range of formula (1):
x ∈ [0, cols-1], y ∈ [0, rows-1] (1)
wherein x and y are the abscissa and ordinate of a pixel in the cropped image, cols is the width of the original fisheye image, and rows is its height;
2) to control the resolution of the video finally produced by image fusion, the size of the picture output by step 1) must be controlled;
3) convert the pixel coordinate point (x, y) of the cropped image area from the 2D Cartesian coordinate system to standard coordinates A(x_A, y_A); the conversion relationship is shown in formula (2):
x_A = (2x - cols)/cols, y_A = (2y - rows)/rows (2)
wherein x and y are the abscissa and ordinate of a pixel in the cropped image, cols is the width of the original fisheye image, and rows is its height;
4) convert the standard coordinates A(x_A, y_A) into the three-dimensional Cartesian coordinates P(x_p, y_p, z_p) of a point on the sphere; the conversion formulas are shown in formulas (3) and (4):
P(p, φ, θ) (3)
p = r, φ = arctan(y_A/x_A), θ = √(x_A² + y_A²)/F (4)
wherein p is the radial distance of the line OP joining a point on the sphere to the origin O, θ is the angle between OP and the z axis, φ is the angle between the projection of OP onto the xOy plane and the x axis, r is the radius of the sphere, and F is the focal length of the fisheye camera; the spherical coordinates are converted into Cartesian coordinates according to formula (5):
x_p = p·sinθ·cosφ, y_p = p·sinθ·sinφ, z_p = p·cosθ (5)
5) convert the spatial coordinates of the point P into longitude-latitude coordinates; the conversion relationship is shown in formula (6):
latitude = arcsin(z_p/p), longitude = arctan(y_p/x_p) (6)
wherein x_p, y_p, z_p are the coordinates of the point P, latitude is the latitude coordinate, and longitude is the longitude coordinate;
6) convert and map the longitude-latitude coordinates of step 5) into the pixel coordinates (x_o, y_o) of the unfolded map; the mapping relationship is shown in formula (7):
x_o = (longitude/(2π) + 1/2)·cols, y_o = (latitude/π + 1/2)·rows (7)
wherein x_o is the pixel abscissa and y_o the pixel ordinate in the unfolded image;
7) after the pixel mapping is completed, black gap points that received no mapped pixel appear in the picture; a cubic interpolation algorithm fills these black regions so that a complete image is output;
the panoramic image splicer stitches the three fisheye images processed by the fisheye image processing module into a panorama, comprising: to keep the stitched field of view continuous, the fisheye cameras in the three directions are numbered in a fixed order that is kept unchanged in all subsequent operations; during image processing, the SIFT algorithm computes the feature points of each image, which serve as local image descriptors invariant to scale, scaling, rotation, and affine transformation; matching feature points between adjacent images are then found and further screened by the RANSAC method, so that the homography matrix is computed from the mapping relationship between the matched feature points; finally, the images undergo a perspective transformation according to the computed homography matrix and are stitched together, realizing vehicle-end panoramic image stitching; the specific steps are as follows:
(1) when the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are numbered in a fixed sequence, and the numbering is kept consistent for subsequent images;
(2) the feature points of each image are computed with the SIFT algorithm provided by OpenCV and serve as local image descriptors invariant to scale, scaling, rotation, and affine transformation;
(3) matching feature points between adjacent images are sought during stitching: the fisheye images of the three viewing angles are coarsely matched by a Euclidean distance measure, the feature points of two images are then screened by the SIFT matching rule that compares the nearest-neighbor Euclidean distance with the second-nearest-neighbor distance, and a feature point is accepted as a matching point when the ratio of the nearest to the second-nearest distance is less than 0.8;
(4) the RANSAC method further screens the coarse matches of step (3) to remove mismatched points, improving the precision of subsequent processing; the mapping relationship between the feature points is then found and the homography matrix computed;
(5) the homography matrix computed in step (4) applies a perspective transformation to the fisheye images processed by the fisheye image processing module; the transformed images are stitched and finally synthesized into a video stream, realizing panoramic stitching;
the Web panoramic player displays the panoramic image output by the panoramic image splicer on a web page; to reduce the delay of video display, the Web panoramic player uses the rtc.js player plug-in to build a front-end player for video playback; to enable the front end to support panoramic video playback, panoramic playing is realized with the combination of three.js, the video tag and rtc.js: the Web panoramic player establishes a spherical model through three.js and maps the sphere with the video tag as the sphere-surface rendering material, thereby projecting the panoramic video onto the sphere; a browser is installed on the embedded image processing equipment, and the panoramic image is browsed on the image display equipment;
and the image display equipment, physically connected to the embedded image processing equipment, displays the panoramic image presented by the Web panoramic player.
2. The vehicle-end panoramic view angle auxiliary driving system based on image fusion as claimed in claim 1, characterized in that: the embedded image processing equipment selects the Atlas 200 acceleration module as its AI computing chip.
3. The vehicle-end panoramic view angle auxiliary driving system based on image fusion, characterized in that: the viewing angle of each fisheye camera is 180°, the three fisheye cameras are mounted at different positions at the same height, and adjacent fisheye cameras are separated by 120°.
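The geometry of claim 3 can be checked with a short script (illustrative only, not from the patent): three 180° fisheye cameras whose optical axes are 120° apart cover the full 360° horizon, with a 60° overlap between adjacent views available for feature matching during splicing.

```python
def cameras_seeing(direction, headings=(0.0, 120.0, 240.0), fov=180.0):
    """Count how many fisheye cameras cover the horizontal direction
    `direction` (degrees), given camera headings and a horizontal FOV."""
    half = fov / 2.0
    count = 0
    for h in headings:
        # Smallest angular difference between direction and heading.
        diff = abs((direction - h + 180.0) % 360.0 - 180.0)
        if diff <= half:
            count += 1
    return count
```

Every horizontal direction is seen by at least one camera, and directions midway between two camera axes fall inside the overlap of both.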
CN202210124847.5A 2022-02-10 2022-02-10 Car end panoramic view angle assisted driving system based on image fusion Active CN114640801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210124847.5A CN114640801B (en) 2022-02-10 2022-02-10 Car end panoramic view angle assisted driving system based on image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210124847.5A CN114640801B (en) 2022-02-10 2022-02-10 Car end panoramic view angle assisted driving system based on image fusion

Publications (2)

Publication Number Publication Date
CN114640801A true CN114640801A (en) 2022-06-17
CN114640801B CN114640801B (en) 2024-02-20

Family

ID=81946324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210124847.5A Active CN114640801B (en) 2022-02-10 2022-02-10 Car end panoramic view angle assisted driving system based on image fusion

Country Status (1)

Country Link
CN (1) CN114640801B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106357976A (en) * 2016-08-30 2017-01-25 深圳市保千里电子有限公司 Omni-directional panoramic image generating method and device
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method
US20180176465A1 (en) * 2016-12-16 2018-06-21 Prolific Technology Inc. Image processing method for immediately producing panoramic images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何林飞; 朱煜; 林家骏; 黄俊健; 陈旭东: "Binocular fisheye panoramic image generation based on spherical space matching", Computer Applications and Software (计算机应用与软件), no. 02 *
曹立波; 夏家豪; 廖家才; 张冠军; 张瑞锋: "Fast vehicle-mounted panorama generation method based on a 3D spatial sphere", China Journal of Highway and Transport (中国公路学报), no. 01 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245748A (en) * 2022-12-23 2023-06-09 珠海视熙科技有限公司 Distortion correction method, device, equipment, system and storage medium for ring-looking lens
CN116245748B (en) * 2022-12-23 2024-04-26 珠海视熙科技有限公司 Distortion correction method, device, equipment, system and storage medium for ring-looking lens
CN117893719A (en) * 2024-03-15 2024-04-16 鹰驾科技(深圳)有限公司 Method and system for splicing self-adaptive vehicle body in all-around manner
CN117935127A (en) * 2024-03-22 2024-04-26 国任财产保险股份有限公司 Intelligent damage assessment method and system for panoramic video exploration
CN117935127B (en) * 2024-03-22 2024-06-04 国任财产保险股份有限公司 Intelligent damage assessment method and system for panoramic video exploration

Also Published As

Publication number Publication date
CN114640801B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN114640801B (en) Car end panoramic view angle assisted driving system based on image fusion
CN108263283B (en) Method for calibrating and splicing panoramic all-round looking system of multi-marshalling variable-angle vehicle
CN107133988B (en) Calibration method and calibration system for camera in vehicle-mounted panoramic looking-around system
CN109435852B (en) Panoramic auxiliary driving system and method for large truck
CN109961522B (en) Image projection method, device, equipment and storage medium
US9858639B2 (en) Imaging surface modeling for camera modeling and virtual view synthesis
CN104865578B (en) A kind of parking garage fine map creation device and method
Creß et al. A9-dataset: Multi-sensor infrastructure-based dataset for mobility research
US8319618B2 (en) Image processing apparatus, image processing method, and recording medium
US8553081B2 (en) Apparatus and method for displaying an image of vehicle surroundings
CN107424120A (en) A kind of image split-joint method in panoramic looking-around system
CN109087251B (en) Vehicle-mounted panoramic image display method and system
US20100171828A1 (en) Driving Assistance System And Connected Vehicles
CN110363085B (en) Method for realizing looking around of heavy articulated vehicle based on articulation angle compensation
CN104463778A (en) Panoramagram generation method
CN111768332B (en) Method for splicing vehicle-mounted panoramic real-time 3D panoramic images and image acquisition device
CN102291541A (en) Virtual synthesis display system of vehicle
Zhu et al. Monocular 3d vehicle detection using uncalibrated traffic cameras through homography
CN113362228A (en) Method and system for splicing panoramic images based on improved distortion correction and mark splicing
CN112348741A (en) Panoramic image splicing method, panoramic image splicing equipment, storage medium, display method and display system
CN108447042B (en) Fusion method and system for urban landscape image data
CN110736472A (en) indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar
CN112750075A (en) Low-altitude remote sensing image splicing method and device
CN111968184B (en) Method, device and medium for realizing view follow-up in panoramic looking-around system
CN106627373A (en) Image processing method and system used for intelligent parking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant