CN114418851A - Multi-view 3D panoramic all-around viewing system and splicing method - Google Patents

Multi-view 3D panoramic all-around viewing system and splicing method

Info

Publication number
CN114418851A
Authority
CN
China
Prior art keywords
model
cameras
camera
image
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210056109.1A
Other languages
Chinese (zh)
Inventor
蒋杰
周宇轩
李迅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Huilian Intelligent Technology Co ltd
Original Assignee
Changsha Huilian Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Huilian Intelligent Technology Co ltd
Priority to CN202210056109.1A
Publication of CN114418851A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-view 3D panoramic all-around viewing system and a splicing method. The system comprises a plurality of cameras arranged respectively at the front, rear, left and right sides of a vehicle, with two or more cameras installed on at least one side to eliminate blind areas, and an image splicing module connected to the cameras, which stitches the images they collect on a pre-constructed virtual 3D model to form a virtual 3D panoramic all-around image. When the virtual 3D model is constructed, if two or more cameras are arranged on the same side, the texture vertices on which the same-side cameras coincide are divided using the Y axis as the fusion dividing line. The invention suits many vehicle types, realizes multi-view 3D panoramic viewing and effectively eliminates blind areas.

Description

Multi-view 3D panoramic all-around viewing system and splicing method
Technical Field
The invention relates to the technical field of auxiliary driving vision systems, in particular to a multi-view 3D panoramic all-around viewing system and a splicing method.
Background
3D 360° panoramic all-around viewing is a key function in driver-assistance systems. In the prior art, a 3D 360° panoramic system generally mounts one fisheye camera in each of the front, rear, left and right directions of the vehicle body; image processing and computer vision techniques then place a panorama of the surrounding environment and a three-dimensional model vehicle in a three-dimensional scene, virtually forming a 3D 360° panoramic all-around image of the vehicle's surroundings from a virtual viewing angle.
However, with only one camera per side, engineering machinery with a special-shaped structure, or with a long body or long box, suffers coverage blind areas, so a 360° panoramic all-around image cannot be obtained. For special-shaped engineering machinery, if only one camera is mounted in each of the front, rear, left and right directions, devices such as the engineering arm block the field of view of the cameras mounted on the carrier body, as shown in fig. 1, and the 360° all-around stitching cannot be completed. For machinery with a long body or long box, as shown in fig. 2, the body is so long that a single camera on each of the left and right sides often has an insufficient field angle: the overlap regions between the left and right views and the front and rear views are not covered, blind areas remain, and a complete, clear stitched all-around image of the carrier body's surroundings cannot be formed.
A conventional four-camera 3D 360° panoramic system generally constructs a virtual 3D model based on OpenGL, such as a flat-bottomed bowl model or a ship-shaped model, and maps and renders the real images of the 4 cameras onto the virtual 3D model to display a 3D 360° panorama around the carrier body. Each of the 4 cameras generates a corresponding 3D texture vertex grid for OpenGL rendering, and the grids are rotated by 0°, 90°, 180° and 270° for the front, rear, left and right positions about the same coordinate origin (0,0,0), so that the grids of adjacent cameras are mutually perpendicular. If cameras are simply added to such a system to remove blind areas, the images of two or more cameras installed on the same side overlap, and the overlap region between same-side cameras cannot be divided by the conventional four-camera stitching method; hence the 3D 360° all-around effect cannot be achieved by directly adding cameras to the conventional four-camera method.
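To make the conventional layout concrete, the following minimal sketch (Python; only the 0°/90°/180°/270° rotation about the common origin is taken from the description above — the grid contents and the exact front/rear/left/right angle assignment are assumptions) rotates one camera's texture vertex grid into the four mounting directions:

    import numpy as np

    def rotate_grid_z(grid, degrees):
        # Rotate an (N, 3) texture vertex grid about the Z axis through the
        # common origin (0, 0, 0), as the conventional 4-camera system does.
        t = np.radians(degrees)
        rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                       [np.sin(t),  np.cos(t), 0.0],
                       [0.0,        0.0,       1.0]])
        return grid @ rz.T

    base = np.random.uniform(-1.0, 1.0, size=(100, 3))   # stand-in grid
    grids = {d: rotate_grid_z(base, a)                   # adjacent grids are
             for d, a in (("front", 0), ("rear", 90),    # 90 degrees apart,
                          ("left", 180), ("right", 270))}  # i.e. perpendicular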
As shown in fig. 3, in a conventional four-camera 3D 360° panoramic system, 8 boundary points p1 to p8 are defined by the same standard on the 3D texture vertex grid of each camera. After rotation, the grid points of the cameras form a virtual 3D flat-bottom bowl model; however, the grid points in an overlap region belong to two different cameras and must be divided between the two grids for texture rendering. If a fusion dividing line of the form y = ax + b is adopted directly, the straight-line equation is determined by two points S1 and S2, where S1 is the intersection of segment P1P8 with segment P1'P8' and S2 is correspondingly taken as the intersection of segment P2P7 with segment P2'P7'. But for cameras installed on the same side, the generated 3D texture vertex grids are identical and their rotation angles are also identical, so segment P1P8 coincides with segment P1'P8'; the infinitely many intersection points leave S1 undetermined, the straight-line equation of the dividing line cannot be established, and the overlap region cannot be divided.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the above problems in the prior art, the invention provides a multi-view 3D panoramic all-around viewing system and splicing method that have a simple structure and low cost, suit various vehicle types, realize 3D panoramic all-around viewing, and eliminate blind areas.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a multi-view 3D panoramic surround view system, comprising:
a plurality of cameras arranged respectively at the front, rear, left and right sides of a vehicle, with two or more cameras installed on at least one side to eliminate blind areas;
and an image splicing module respectively connected with the cameras and used for stitching the images acquired by the cameras based on a pre-constructed virtual 3D model to form a virtual 3D panoramic all-around image, wherein, when the virtual 3D model is constructed, if two or more cameras are arranged on the same side, the texture vertices on which the two same-side cameras coincide are divided using the Y axis as the fusion dividing line.
Further, when the system is applied to a special-shaped engineering machinery vehicle, two or more cameras are arranged on the side carrying the special-shaped structure; when the system is applied to a vehicle whose body length on the left and right sides exceeds a preset threshold, two or more cameras are respectively arranged on the left and right sides of the vehicle.
Further, the image stitching module comprises:
the model building unit is used for building a half-side flat-bottom bowl model for each camera based on the 3D flat-bottom bowl model, dividing the overlap region using the Y axis as the fusion dividing line, and then rendering texture grid points to construct the virtual 3D model;
and the real-time model rendering unit is used for acquiring the images acquired by the cameras in real time and rendering the virtual 3D model in real time according to the acquired images to obtain a real-time virtual 3D panoramic all-around image.
Furthermore, the model construction unit comprises an internal and external parameter calibration subunit, a 3D model construction subunit, a fusion segmentation subunit and a texture grid point rendering subunit connected in sequence, wherein the internal and external parameter calibration subunit is used for calibrating the internal and external parameters of the cameras; the 3D model construction subunit is used for constructing a half-side flat-bottom bowl model for each camera; the fusion segmentation subunit is used for dividing, in the half-side flat-bottom bowl model of each camera, the texture vertices on which two same-side cameras coincide, using the Y axis as the fusion dividing line; and the texture grid point rendering subunit is used for performing texture grid point rendering on the half-side flat-bottom bowl model of each camera according to the calibrated internal and external parameters to obtain the virtual 3D model.
A multi-view 3D panoramic view splicing method comprises the following steps:
S01, arranging cameras at the front, rear, left and right sides of the vehicle in advance, with two or more cameras arranged on at least one side to eliminate blind areas;
and S02, stitching the images acquired by the cameras based on a pre-constructed virtual 3D model to form a virtual 3D panoramic all-around image, wherein, when the virtual 3D model is constructed, if two or more cameras are arranged on the same side, the texture vertices on which the two same-side cameras coincide are divided using the Y axis as the fusion dividing line.
Further, the step S02 includes:
S201, reading local parameter files, including a triangular texture file and a 3D vehicle model;
S202, receiving the images collected by the cameras;
and S203, rendering the image of the environment around the vehicle body onto the pre-constructed virtual 3D model according to the received images and the read files, to form the virtual 3D panoramic all-around image.
Further, the steps of constructing the virtual 3D model include:
SA1, calibrating the internal and external parameters of the cameras in advance to obtain the internal and external parameter matrices;
SA2, generating a half-side flat-bottom bowl model spanning two quadrants for each camera;
SA3, in the half-side flat-bottom bowl model of each camera, dividing the texture vertices on which the two same-side cameras coincide, using the Y axis as the fusion dividing line;
and SA4, performing texture grid point rendering on the half-side flat-bottom bowl model of each camera formed after the division of step SA3 according to the internal and external parameter matrices, to obtain the virtual 3D model.
Further, step SA4 comprises:
SA41, projecting the 3D grid points corresponding to each camera into the 2D image according to the internal and external parameter matrices, and calculating the image pixel coordinates corresponding to the 3D grid points;
SA42, setting a segmentation boundary for the half-side flat-bottom bowl model of each camera so as to divide each half-side flat-bottom bowl model into a fusion region and a non-fusion region, the fusion region being the overlap region shot jointly by adjacent cameras and the non-fusion region being the region shot by each camera alone, and generating a binary image used to separate the fusion region from the non-fusion region;
and SA43, rotating each half-side flat-bottom bowl model by a designated angle, the 3D grid point models corresponding to cameras installed on the same side rotating by the same angle, and storing the grid points of each half-side flat-bottom bowl model together with the corresponding image coordinates as a triangular texture file.
Further, when calibrating the internal parameters of the cameras, the method further comprises a distortion correction step: generating a de-distortion mapping file according to the internal parameter matrix K, the distortion coefficient matrix D and the scaling factor sf.
Further, the step of calibrating the external parameters of the cameras comprises:
arranging external calibration objects around the vehicle body, and grabbing the corner points of the external calibration objects: converting the de-distorted camera image containing the calibration object pattern into a binary image, locating the position of the calibration object, and calculating the image coordinates of each corner point of the calibration object;
and solving the external rotation matrix R and the external translation matrix T of each camera according to the image coordinates and world coordinates of the calibration object corner points shot by each camera.
Compared with the prior art, the invention has the advantages that:
1. On the basis of 1 camera in each of the front, rear, left and right directions of the carrier body, the invention installs 2 or more cameras on at least one side according to the actual structural characteristics of the carrier vehicle to eliminate blind areas. The images collected by the cameras are then stitched on a virtual 3D model, the panorama of the body's surroundings and a three-dimensional model vehicle are placed in a three-dimensional environment, a virtual viewing angle whose position, angle and direction are continuously adjustable is formed, and a 3D 360° panoramic stitched all-around image of the vehicle's surroundings is obtained.
2. By dividing the texture vertices on which two same-side cameras coincide with the Y axis as the fusion dividing line when the virtual 3D model is constructed, the overlap region of same-side cameras is divided effectively and evenly without omitting any vertex, solving the problem that the coincident texture vertex grids of two cameras installed on the same side cannot be divided in the conventional way; blind areas are thus effectively eliminated while the multi-view 3D 360° panoramic all-around effect is realized.
Drawings
Fig. 1 is a schematic structural diagram of a special-shaped construction machine.
Fig. 2 is a schematic diagram of the blind areas that arise when a conventional look-around system is arranged on a long vehicle.
Fig. 3 is a schematic diagram of overlap-region segmentation in the 3D panoramic looking-around system of a conventional four-camera setup.
Fig. 4 is a schematic structural diagram of the multi-view 3D panoramic looking-around system according to the embodiment.
Fig. 5 is a schematic diagram of how this embodiment divides the overlap region between images captured on the same side.
Fig. 6 is a schematic workflow diagram of the multi-view 3D panoramic looking-around system in the present embodiment.
Fig. 7 is a schematic diagram of an external reference calibration object pattern employed in the present embodiment.
Fig. 8 is a schematic diagram of the arrangement principle of the first external reference calibration object in the embodiment.
Fig. 9 is a schematic diagram illustrating a second arrangement principle of external reference calibration objects in the embodiment.
Fig. 10 is a schematic diagram of an automatic corner grabbing effect in this embodiment.
Fig. 11 is a schematic view of the 3D flat bottom bowl model employed in the present embodiment.
Fig. 12 is a schematic diagram illustrating the effect of generating the 3D texture mesh vertices of the half-side flat-bottom bowl model in this embodiment.
Fig. 13 is a schematic diagram illustrating an effect of segmenting vertices of a 3D texture mesh of the same side camera in the present embodiment.
Illustration of the drawings: 1. camera; 2. image stitching module; 21. model construction unit; 211. internal and external parameter calibration subunit; 212. 3D model construction subunit; 213. fusion segmentation subunit; 214. texture grid point rendering subunit; 22. real-time model rendering unit.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments, without thereby limiting the scope of protection of the invention.
As shown in fig. 4, the multi-view 3D panoramic looking around system of the present embodiment includes:
a plurality of cameras 1 arranged respectively at the front, rear, left and right sides of a vehicle, with two or more cameras 1 installed on at least one side to eliminate blind areas;
and an image splicing module 2 respectively connected with each camera 1 and used for stitching the images collected by each camera 1 based on a pre-constructed virtual 3D model to form a virtual 3D panoramic all-around image, wherein, when the virtual 3D model is constructed, if two or more cameras 1 are arranged on the same side, the texture vertices on which the same-side cameras 1 coincide are divided using the Y axis as the fusion dividing line.
In this embodiment, 1 camera 1 (specifically a fisheye camera) is installed in each of the front, rear, left and right directions of the carrier body; according to the actual structural characteristics of the carrier vehicle, 2 or more cameras 1 are then installed on any side where a single camera would leave a blind area. In total, 5 to 8 cameras, or even more, can be arranged on the carrier to form the multi-view 3D panoramic system. The images collected by the cameras 1 are stitched on the pre-constructed virtual 3D model, the panorama of the body's surroundings and a three-dimensional model vehicle are placed in a three-dimensional environment, a virtual viewing angle whose position, angle and direction are continuously adjustable is formed, and a 3D 360° panoramic stitched all-around image of the vehicle's surroundings is obtained.
When two or more cameras 1 are arranged on the same side, the images acquired by the same-side cameras have an overlap region; dividing this overlap region must satisfy the following requirements:
1. the texture vertex grid must be divided within the overlap region of each pair of cameras;
2. the division of the texture vertex grids of the two cameras in the overlap region must be complete, with no texture vertex omitted;
3. the texture vertex grid must be divided relatively evenly, since uneven division causes misalignment and distortion of the 3D 360° panoramic image in the camera overlap region.
For cameras 1 installed on the same side, once the external parameters have been determined in advance from the same external calibration object on the ground, the texture vertex grids generated in the same coordinate system are completely identical and their rotation angles are equal; the Y axis of that coordinate system therefore satisfies all three division requirements and cleanly separates the texture vertex grids of the two same-side cameras, as shown in fig. 5. In this embodiment, when the virtual 3D model is constructed, the texture vertices on which the two same-side cameras 1 coincide are divided using the Y axis as the fusion dividing line (the line x = 0 in the grid coordinate system). This meets the three requirements above, divides the overlap region of the same-side cameras 1 relatively evenly with no vertex omitted, and solves the problem that the coincident texture vertex grids of two cameras installed on the same side cannot be divided in the conventional way, so blind areas are effectively eliminated while the multi-view 3D 360° all-around effect is realized.
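A minimal sketch of this division (Python; the array layout is an assumption, not the patent's data structure) partitions the coincident vertices by the sign of their x coordinate, the Y axis being the set of points with x = 0:

    import numpy as np

    def split_by_y_axis(grid):
        # Divide a coincident (N, 3) texture vertex grid between the two
        # same-side cameras using the Y axis (x = 0) as the fusion dividing
        # line. Vertices exactly on the axis fall into both halves, so no
        # vertex is omitted (requirement 2), and the split is even by
        # construction (requirement 3).
        left = grid[grid[:, 0] <= 0.0]   # rendered from one same-side camera
        right = grid[grid[:, 0] >= 0.0]  # rendered from the other
        return left, right

Because every vertex satisfies either x <= 0 or x >= 0, the three requirements above are met regardless of how the two coincident grids were rotated.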
This embodiment is particularly applicable to special-shaped engineering machinery vehicles: because special-shaped structures such as an engineering arm block the field of view of a camera 1, two or more cameras 1 are arranged on the side carrying the special-shaped structure, avoiding the occlusion, eliminating the blind area and forming a stitched all-around image of the previously occluded region. It is also applicable to vehicles whose body length on the left and right sides exceeds a preset threshold, such as long-body or long-box engineering machinery, where a single camera per side leaves a blind area because the body is too long; two or more cameras 1 are then arranged on each of the left and right sides to eliminate the blind area. It will be understood that the invention is applicable to many vehicle types beyond the special-shaped and long-body or long-box engineering machinery described above.
In this embodiment, the image stitching module 2 includes:
the model building unit 21 is configured to build a half-side flat-bottom bowl model for each camera 1 based on the 3D flat-bottom bowl model, divide the overlap region using the Y axis as the fusion dividing line, and then render texture grid points to construct the virtual 3D model;
and the real-time model rendering unit 22 is configured to acquire the images collected by each camera 1 in real time and render them onto the virtual 3D model to obtain a real-time virtual 3D panoramic all-around image.
The model building unit 21 works offline: it constructs and stores the virtual 3D model based on the calibrated internal and external parameters, with the overlap region divided by the Y axis. The real-time model rendering unit 22 works in real time: after acquiring the images collected by each camera 1, it renders them onto the virtual 3D model and displays the result in real time.
In this embodiment, the model building unit 21 comprises an internal and external parameter calibration subunit 211, a 3D model construction subunit 212, a fusion segmentation subunit 213 and a texture grid point rendering subunit 214 connected in sequence. The internal and external parameter calibration subunit 211 calibrates the internal and external parameters of the cameras 1; the 3D model construction subunit 212 constructs a half-side flat-bottom bowl model for each camera 1; the fusion segmentation subunit 213 divides, in the half-side flat-bottom bowl model of each camera 1, the texture vertices on which two same-side cameras 1 coincide, using the Y axis as the fusion dividing line; and the texture grid point rendering subunit 214 performs texture grid point rendering on the half-side flat-bottom bowl model of each camera 1 according to the calibrated internal and external parameters to obtain the virtual 3D model.
The internal and external parameter calibration subunit 211 also generates a de-distortion mapping file from the internal parameter matrix K and the distortion coefficient matrix D during internal parameter calibration.
In this embodiment, the real-time model rendering unit 22 comprises a local parameter reading unit, an image data acquisition unit, a model texture rendering unit and a display unit connected in sequence. The local parameter reading unit reads the local parameter files, specifically the triangular texture files, the 3D vehicle model and the camera de-distortion mapping files; the image data acquisition unit receives the images collected by the cameras 1 through multiple threads; the model texture rendering unit renders the image of the environment around the vehicle body onto the pre-constructed virtual 3D model according to the received images and the read files to form the virtual 3D panoramic all-around image; and the display unit displays the image.
As shown in fig. 6, in the multi-view 3D panoramic all-around system of this embodiment, several fisheye cameras are arranged around the vehicle, with two or more on at least one side to eliminate blind areas. Internal parameter calibration, external parameter calibration, 3D flat-bottom bowl model construction and texture grid point rendering are performed offline in advance to generate the virtual 3D model. Images are then collected and rendered in real time: the local parameter files are read in sequence, camera image data are collected through multiple threads, the virtual 3D model textures are rendered with OpenGL, and the result is displayed on the OSD, completing the real-time 3D panoramic all-around view.
As shown in fig. 6, this embodiment further provides a multi-view 3D panoramic splicing method, comprising the steps of:
S01, arranging cameras 1 at the front, rear, left and right sides of the vehicle in advance, with two or more cameras 1 arranged on at least one side to eliminate blind areas;
and S02, stitching the images collected by each camera 1 based on the pre-constructed virtual 3D model to form a virtual 3D panoramic all-around image, wherein, when the virtual 3D model is constructed, if two or more cameras 1 are arranged on the same side, the texture vertices on which the two same-side cameras 1 coincide are divided using the Y axis as the fusion dividing line.
In this embodiment, the virtual 3D model is constructed by the model constructing unit 21; the construction steps are:
SA1, internal and external parameter calibration: calibrating the internal and external parameters of each camera 1 in advance to obtain the internal and external parameter matrices;
SA2, 3D model construction: generating a half-side flat-bottom bowl model spanning quadrants one and two for each camera 1;
SA3, fusion segmentation: in the half-side flat-bottom bowl model of each camera 1, dividing the texture vertices on which the two same-side cameras 1 coincide, using the Y axis as the fusion dividing line;
SA4, texture grid point rendering: performing texture grid point rendering on the half-side flat-bottom bowl model of each camera 1 formed after the division of step SA3 according to the internal and external parameter matrices, to obtain the virtual 3D model.
In this embodiment, calibrating the internal parameters of the camera 1 also includes a distortion correction step, which specifically comprises: generating a de-distortion mapping file according to the internal parameter matrix K, the distortion coefficient matrix D and the scaling factor sf. The camera 1 of this embodiment is specifically a fisheye camera, which offers an ultra-wide field angle but suffers large barrel distortion, and therefore requires distortion correction. This embodiment adopts Zhang's calibration method: by collecting 10 to 20 checkerboard calibration pictures (the exact number depends on actual requirements), the internal parameter matrix K and the distortion coefficient matrix D of the camera 1 are computed, and the de-distortion mapping file is generated and stored according to the scaling factor sf.
The internal parameter matrix K is specifically:
Figure BDA0003476295230000071
The distortion coefficient matrix is specifically D = [k1 k2 p1 p2 k3], and the scaling factor sf can be set according to the actual calibration scenario.
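A minimal sketch of this intrinsic calibration and de-distortion mapping step (Python with OpenCV; the board size, file paths and sf value are assumptions, and OpenCV's fisheye model uses a four-parameter distortion vector rather than the five-parameter D listed above, so this is one plausible realization rather than the patent's exact procedure):

    import glob
    import cv2
    import numpy as np

    board = (9, 6)  # assumed inner-corner count of the checkerboard
    objp = np.zeros((1, board[0] * board[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib/*.png"):  # the 10-20 calibration pictures
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        ok, corners = cv2.findChessboardCorners(gray, board)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners.reshape(1, -1, 2))
            size = gray.shape[::-1]

    K = np.zeros((3, 3))
    D = np.zeros((4, 1))  # fisheye model: k1..k4
    rms, K, D, _, _ = cv2.fisheye.calibrate(
        obj_pts, img_pts, size, K, D,
        flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW)

    # De-distortion mapping from K, D and the scaling factor sf: sf scales the
    # new camera matrix so that more of the fisheye field of view is retained.
    sf = 0.6  # assumed value; set according to the calibration scenario
    new_K = K.copy()
    new_K[0, 0] *= sf
    new_K[1, 1] *= sf
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, size, cv2.CV_16SC2)
    np.savez("undistort_map.npz", map1=map1, map2=map2)  # the mapping file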
In this embodiment, the step of calibrating the external parameters of the cameras 1 comprises:
arranging external calibration objects around the vehicle body, and grabbing the corner points of the external calibration objects: converting the de-distorted camera image containing the calibration object pattern into a binary image, locating the position of the calibration object, and calculating the image coordinates of each corner point of the calibration object;
and then solving the external rotation matrix R and the external translation matrix T of each camera 1 according to the image coordinates and world coordinates of the calibration object corner points shot by each camera 1.
In a specific application embodiment, the detailed steps of calibrating the external parameters of the camera 1 are as follows:
a. The pattern of the external calibration object is, for example, the pattern with the outer black border shown in fig. 7.
b. With the geometric center of the carrier body as the origin (0,0), lay 4 to 8 external calibration objects (the exact number depends on actual requirements) around the body at equal spacing relative to the origin. When 2 cameras are installed on one side of the carrier body, 3 calibration objects are placed on the ground on that side, as shown in fig. 8; if only 1 camera is installed on that side, only 2 calibration objects need to be placed there for the external calibration, as shown in fig. 9. In figs. 8 and 9, the rectangular frame at the geometric center is the carrier body and the circle symbols represent cameras.
c. Automatically grab the corner points of the external calibration object: convert the de-distorted camera image containing the calibration object pattern into a binary image using the Otsu adaptive threshold algorithm, locate the calibration object through a quadrilateral contour approximation algorithm, and calculate the image coordinates of each corner point, as shown in fig. 10. Other algorithms can replace the Otsu-based adaptive threshold and the quadrilateral contour approximation to achieve similar functions.
d. Solve the external rotation matrix R and the external translation matrix T of each camera from the image coordinates and world coordinates of the calibration object corner points shot by that camera.
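A minimal sketch of steps c and d (Python with OpenCV; the area cutoff and polygon tolerance are assumptions, and in practice the detected quadrilateral corners must be ordered and matched to their known world coordinates before solving):

    import cv2
    import numpy as np

    def find_marker_quads(undistorted_gray):
        # Step c: Otsu binarization, then quadrilateral contour approximation
        # to locate each calibration object and return its 4 corner points.
        _, binary = cv2.threshold(undistorted_gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        quads = []
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 4 and cv2.contourArea(approx) > 500:
                quads.append(approx.reshape(4, 2).astype(np.float32))
        return quads

    def solve_extrinsics(world_pts, image_pts, K):
        # Step d: solve R and T from corner correspondences; distortion has
        # already been removed from the image, hence distCoeffs=None.
        ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None)
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec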
After calibration, the model is constructed. In this embodiment, the virtual 3D model is based on a 3D flat-bottom bowl model: the bowl bottom models the ground plane of the near field around the body, and the bowl rim models the height space of the far field, as shown in fig. 11. The model is rendered by OpenGL, and the 3D texture vertex grid represented by the points in the figure holds the triangle texture vertex coordinates. A half-side flat-bottom bowl model spanning two quadrants is then generated for each camera and stored as a text file of texture vertex grid points; the texture vertex grid of a half-side model is shown in fig. 12. When two fisheye cameras are installed on the same side of the carrier body, the vertical Y axis serves as the dividing boundary, splitting the two 100% coincident same-side texture vertex grids into two groups, as shown in fig. 13.
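A minimal sketch of generating such a half-side grid (Python; the radii, rim profile and grid resolution are assumptions — the patent does not fix these values):

    import numpy as np

    def half_bowl_grid(r_flat=5.0, r_max=9.0, n_r=40, n_theta=60):
        # Texture vertices (x, y, z) of a half flat-bottom bowl: flat ground
        # plane out to r_flat (the near field), a rising rim beyond it (the
        # far-field height space), covering only quadrants one and two
        # (theta in [0, pi]), so the Y axis bounds and bisects the grid.
        verts = []
        for r in np.linspace(0.0, r_max, n_r):
            z = 0.0 if r <= r_flat else 2.0 * (r - r_flat) ** 2  # rim profile
            for theta in np.linspace(0.0, np.pi, n_theta):
                verts.append((r * np.cos(theta), r * np.sin(theta), z))
        return np.asarray(verts, dtype=np.float32)

    grid = half_bowl_grid()  # one grid per camera, saved as a text file,
                             # then rotated to the camera's mounting direction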
After the model is built, the texture grid points are rendered. Step SA4, texture grid point rendering, specifically comprises:
SA41, projecting the 3D grid points corresponding to each camera 1 into the 2D image according to the internal and external parameter matrices, and calculating the image pixel coordinates corresponding to the 3D grid points;
SA42, setting a segmentation boundary for the half-side flat-bottom bowl model of each camera 1 so as to divide each half-side flat-bottom bowl model into a fusion region and a non-fusion region, the fusion region being the overlap region shot jointly by adjacent cameras 1 and the non-fusion region being the region shot by each camera 1 alone, and generating a binary image used to separate the fusion region from the non-fusion region;
and SA43, rotating each half-side flat-bottom bowl model by a designated angle, the 3D grid point models corresponding to cameras 1 installed on the same side rotating by the same angle, and storing the grid points of each half-side flat-bottom bowl model together with the corresponding image coordinates as a triangular texture file.
In a specific application embodiment, when rendering texture grid points, the 3D grid points of each camera are first projected into that camera's 2D image according to the pre-obtained internal and external parameter matrices K, D, R, T, and the fisheye-image pixel coordinates corresponding to the 3D grid points are calculated. A segmentation boundary is then set for each camera's half-side flat-bottom bowl model to divide it into a fusion region and a non-fusion region, and a binary mask image separating the two regions is generated. Each half-side flat-bottom bowl model is rotated by a certain angle; when two cameras are installed on the same side, the 3D grid point models generated for that pair of cameras rotate by the same angle. The grid points of each half-side flat-bottom bowl model and their corresponding image coordinates are stored in a text file as triangular textures, the triangles being V1-V2-V4 and V4-V2-V3:
each vertex Vi being stored as a record (xi, yi, zi, ui, vi), i = 1, ..., 4,
where x, y, z are the 3D grid point coordinates and u, v are the corresponding image coordinates.
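A minimal sketch of this projection and triangle-texture storage (Python with OpenCV; the use of cv2.fisheye.projectPoints and the plain "x y z u v" record layout are assumptions consistent with the description, not the patent's exact code):

    import cv2
    import numpy as np

    def project_grid(grid, rvec, tvec, K, D):
        # Project (N, 3) 3D grid points into fisheye pixel coordinates (N, 2)
        # using the calibrated K, D, R, T (rvec is the Rodrigues form of R).
        pts = grid.reshape(-1, 1, 3).astype(np.float64)
        uv, _ = cv2.fisheye.projectPoints(pts, rvec, tvec, K, D)
        return uv.reshape(-1, 2)

    def write_triangle_texture(path, grid, uv, quads):
        # Store each grid quad V1-V2-V3-V4 as the two triangles V1-V2-V4 and
        # V4-V2-V3, one "x y z u v" record per vertex.
        with open(path, "w") as f:
            for v1, v2, v3, v4 in quads:  # vertex indices of one quad
                for i in (v1, v2, v4, v4, v2, v3):
                    x, y, z = grid[i]
                    u, v = uv[i]
                    f.write(f"{x} {y} {z} {u} {v}\n")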
In this embodiment, the step S02 of rendering in real time specifically includes:
S201, reading the local parameter files, specifically the triangular texture files, the 3D vehicle model, the camera de-distortion mapping files, etc.;
S202, receiving the images collected by the cameras 1;
and S203, rendering the image of the environment around the vehicle body onto the pre-constructed virtual 3D model according to the received images and the read files, to form the virtual 3D panoramic all-around image.
In a specific application embodiment, real-time rendering first reads the local parameter files: the triangular texture files, the 3D vehicle model, the camera de-distortion mapping files, etc. The image frames of all fisheye cameras installed around the body are then acquired simultaneously through multiple threads, which improves the computing efficiency of the system and guarantees the real-time frame rate. Using the OpenGL computer graphics library, the GPU computes from the fisheye camera images and the triangular texture files, renders the image of the body's surroundings onto the 3D flat-bottom bowl model, and finally displays the all-around image on the OSD.
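A minimal sketch of the multithreaded acquisition side (Python; the device indices and camera count are assumptions, and the OpenGL texture upload and draw calls are stubbed out because they depend on the rendering backend):

    import threading
    import cv2

    class CameraGrabber(threading.Thread):
        # One thread per fisheye camera keeps only the latest frame, so the
        # render loop never blocks on camera I/O and the frame rate is kept.
        def __init__(self, device):
            super().__init__(daemon=True)
            self.cap = cv2.VideoCapture(device)
            self.lock = threading.Lock()
            self.frame = None

        def run(self):
            while True:
                ok, frame = self.cap.read()
                if ok:
                    with self.lock:
                        self.frame = frame

        def latest(self):
            with self.lock:
                return self.frame

    grabbers = [CameraGrabber(i) for i in range(6)]  # e.g. 5-8 cameras
    for g in grabbers:
        g.start()

    while True:  # render loop
        frames = [g.latest() for g in grabbers]
        if all(f is not None for f in frames):
            pass  # upload frames as OpenGL textures and draw the bowl model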
Those skilled in the art will appreciate that the above description of a computer apparatus is by way of example only and is not intended to be limiting; the apparatus may include more or fewer components than those described, may combine some components, or may include different components, such as input/output devices, network access devices and buses. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the computer device, connecting the various parts of the whole device through various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The modules/units integrated in the computer device may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as independent products. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.
The foregoing describes preferred embodiments of the invention and is not to be construed as limiting the invention in any way. Although the invention has been described with reference to preferred embodiments, it is not limited thereto. Any simple modification, equivalent change or adaptation made to the above embodiments according to the technical essence of the invention, without departing from the content of the technical scheme of the invention, falls within the protection scope of the technical scheme of the invention.

Claims (10)

1. A multi-view 3D panoramic surround view system, comprising:
a plurality of cameras (1) arranged respectively at the front, rear, left and right sides of a vehicle, wherein at least one side is provided with two or more cameras (1) to eliminate blind areas;
and an image splicing module (2) respectively connected with each camera (1) and used for stitching the images collected by each camera (1) based on a pre-constructed virtual 3D model to form a virtual 3D panoramic all-around image, wherein, when the virtual 3D model is constructed, if two or more cameras (1) are arranged on the same side, the texture vertices on which the two same-side cameras (1) coincide are divided using the Y axis as the fusion dividing line.
2. The multi-view 3D panoramic surround view system according to claim 1, wherein, when the system is applied to a special-shaped engineering machinery vehicle, two or more cameras (1) are arranged on the side carrying the special-shaped structure, and when the system is applied to a vehicle whose body length on the left and right sides exceeds a preset threshold, two or more cameras (1) are respectively arranged on the left and right sides of the vehicle.
3. The multi-view 3D panoramic surround view system according to claim 1 or 2, wherein the image stitching module (2) comprises:
the model building unit (21) is used for building a half-side flat-bottom bowl model for each camera (1) based on the 3D flat-bottom bowl model, dividing the overlap region by using the Y axis as the fusion dividing line, and rendering texture grid points to construct the virtual 3D model;
and the real-time model rendering unit (22) is used for acquiring images acquired by each camera (1) in real time, and rendering the virtual 3D model in real time according to the acquired images to obtain a real-time virtual 3D panoramic all-around image.
4. The multi-view 3D panoramic surround view system according to claim 3, wherein the model building unit (21) comprises an internal and external parameter calibration subunit (211), a 3D model construction subunit (212), a fusion segmentation subunit (213) and a texture grid point rendering subunit (214) connected in sequence, the internal and external parameter calibration subunit (211) being used for calibrating the internal and external parameters of the cameras (1); the 3D model construction subunit (212) is used for constructing a half-side flat-bottom bowl model for each camera (1), the fusion segmentation subunit (213) is used for dividing, in the half-side flat-bottom bowl model of each camera (1), the texture vertices on which two same-side cameras (1) coincide, using the Y axis as the fusion dividing line, and the texture grid point rendering subunit (214) is used for performing texture grid point rendering on the half-side flat-bottom bowl model of each camera (1) according to the calibrated internal and external parameters to obtain the virtual 3D model.
5. A multi-view 3D panoramic view splicing method is characterized by comprising the following steps:
S01, arranging cameras (1) at the front, rear, left and right sides of a vehicle in advance, wherein at least one side is provided with two or more cameras (1) to eliminate blind areas;
and S02, stitching the images collected by each camera (1) based on a pre-constructed virtual 3D model to form a virtual 3D panoramic all-around image, wherein, when the virtual 3D model is constructed, if two or more cameras (1) are arranged on the same side, the texture vertices on which the two same-side cameras (1) coincide are divided using the Y axis as the fusion dividing line.
6. The multi-view 3D panoramic stitching method according to claim 5, wherein the step S02 comprises:
S201, reading local parameter files, including a triangular texture file and a 3D vehicle model;
S202, receiving the images collected by each camera (1);
and S203, rendering the image of the environment around the vehicle body onto the pre-constructed virtual 3D model according to the received images and the read files, to form the virtual 3D panoramic all-around image.
7. The multi-view 3D panoramic stitching method according to claim 5 or 6, wherein the step of constructing the virtual 3D model comprises:
SA1, calibrating the internal and external parameters of each camera (1) in advance to obtain the internal and external parameter matrices;
SA2, generating a half-side flat-bottom bowl model spanning two quadrants for each camera (1);
SA3, in the half-side flat-bottom bowl model of each camera (1), dividing the texture vertices on which the two same-side cameras (1) coincide, using the Y axis as the fusion dividing line;
and SA4, performing texture grid point rendering on the half-side flat-bottom bowl model of each camera (1) formed after the division of step SA3 according to the internal and external parameter matrices, to obtain the virtual 3D model.
8. The multi-view 3D panoramic stitching method according to claim 7, wherein step SA4 comprises:
SA41, projecting the 3D grid points corresponding to each camera (1) into the 2D image according to the internal and external parameter matrices, and calculating the image pixel coordinates corresponding to the 3D grid points;
SA42, setting a segmentation boundary for the half-side flat-bottom bowl model of each camera (1) so as to divide each half-side flat-bottom bowl model into a fusion region and a non-fusion region, the fusion region being the overlap region shot jointly by adjacent cameras (1) and the non-fusion region being the region shot by each camera (1) alone, and generating a binary image used to separate the fusion region from the non-fusion region;
and SA43, rotating each half-side flat-bottom bowl model by a designated angle, the 3D grid point models corresponding to cameras (1) installed on the same side rotating by the same angle, and storing the grid points of each half-side flat-bottom bowl model together with the corresponding image coordinates as a triangular texture file.
9. The multi-view 3D panoramic stitching method according to claim 7, further comprising a distortion correction step when calibrating the internal parameters of the camera (1): generating a de-distortion mapping file according to the internal parameter matrix K, the distortion coefficient matrix D and the scaling factor sf.
10. The multi-view 3D panoramic stitching method according to claim 7, wherein the step of calibrating the external parameters of the camera (1) comprises:
arranging external calibration objects around the vehicle body, and grabbing the corner points of the external calibration objects: converting the de-distorted camera image containing the calibration object pattern into a binary image, locating the position of the calibration object, and calculating the image coordinates of each corner point of the calibration object;
and solving the external rotation matrix R and the external translation matrix T of each camera (1) according to the image coordinates and world coordinates of the calibration object corner points shot by each camera (1).
CN202210056109.1A 2022-01-18 2022-01-18 Multi-view 3D panoramic all-around viewing system and splicing method Pending CN114418851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210056109.1A CN114418851A (en) 2022-01-18 2022-01-18 Multi-view 3D panoramic all-around viewing system and splicing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210056109.1A CN114418851A (en) 2022-01-18 2022-01-18 Multi-view 3D panoramic all-around viewing system and splicing method

Publications (1)

Publication Number Publication Date
CN114418851A (en) 2022-04-29

Family

ID=81272843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210056109.1A Pending CN114418851A (en) 2022-01-18 2022-01-18 Multi-view 3D panoramic all-around viewing system and splicing method

Country Status (1)

Country Link
CN (1) CN114418851A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN107317998A (en) * 2016-04-27 2017-11-03 成都理想境界科技有限公司 Full-view video image fusion method and device
CN108765496A (en) * 2018-05-24 2018-11-06 河海大学常州校区 A kind of multiple views automobile looks around DAS (Driver Assistant System) and method
CN111768332A (en) * 2019-03-26 2020-10-13 深圳市航盛电子股份有限公司 Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device
KR20190075034A (en) * 2019-06-20 2019-06-28 주식회사 아이닉스 Imaging Apparatus and method for Automobile
CN112698306A (en) * 2020-12-17 2021-04-23 上海交通大学宁波人工智能研究院 System and method for solving map construction blind area by combining multiple laser radars and camera
CN113239735A (en) * 2021-04-15 2021-08-10 重庆利龙科技产业(集团)有限公司 Automobile transparent A column system based on binocular camera and implementation method
CN113345074A (en) * 2021-06-07 2021-09-03 苏州易航远智智能科技有限公司 Vehicle-mounted 3D (three-dimensional) all-around image display method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Rong Jian et al., "A realistic 3D face reconstruction system based on dual views", Journal of Graphics, vol. 33, no. 4, 31 August 2012 (2012-08-31), pages 1-4 *

Similar Documents

Publication Publication Date Title
US20170278293A1 (en) Processing a Texture Atlas Using Manifold Neighbors
EP3255604B1 (en) Image generation device, coordinate conversion table creation device and creation method
US9437034B1 (en) Multiview texturing for three-dimensional models
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
KR20130016335A (en) Processing target image generation device, processing target image generation method, and operation support system
CN113870161A (en) Vehicle-mounted 3D (three-dimensional) panoramic stitching method and device based on artificial intelligence
CN109509153A (en) A kind of panorama mosaic method and system of towed vehicle image
EP3905673A1 (en) Generation method for 3d asteroid dynamic map and portable terminal
CN109461197B (en) Cloud real-time drawing optimization method based on spherical UV and re-projection
CN112348741A (en) Panoramic image splicing method, panoramic image splicing equipment, storage medium, display method and display system
CN113658262A (en) Camera external parameter calibration method, device, system and storage medium
CN108765499B (en) Vehicle-mounted non-GPU rendering 360-degree stereoscopic panoramic realization method
CN116363290A (en) Texture map generation method for large-scale scene three-dimensional reconstruction
CN114418851A (en) Multi-view 3D panoramic all-around viewing system and splicing method
CN111179210A (en) Method and system for generating texture map of face and electronic equipment
CN114998496A (en) Orthoimage rapid generation method based on scene aerial photography image and sparse point cloud
WO2008072246A2 (en) Smooth shading and texture mapping using linear gradients
KR100490885B1 (en) Image-based rendering method using orthogonal cross cylinder
CN113313813A (en) Vehicle-mounted 3D panoramic all-around viewing system capable of actively early warning
CN112150621A (en) Aerial view generation method and system based on orthographic projection and storage medium
GB2605360A (en) Method, Apparatus and Storage Medium for Realizing Geometric Viewing Frustum of OCC Tree in Smart City
CN115176459A (en) Virtual viewpoint synthesis method, electronic device, and computer-readable medium
CN117011474B (en) Fisheye image sample generation method, device, computer equipment and storage medium
CN113096209B (en) Display method of vehicle-mounted image track line
CN116778127B (en) Panoramic view-based three-dimensional digital scene construction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination