CN112308984B - Vehicle-mounted image stitching method, system and device - Google Patents


Info

Publication number
CN112308984B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011211534.0A
Other languages
Chinese (zh)
Other versions
CN112308984A
Inventor
何恒
苏文凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haowei Technology Wuhan Co ltd
Original Assignee
Haowei Technology Wuhan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haowei Technology Wuhan Co ltd
Priority to CN202011211534.0A
Publication of CN112308984A
Application granted
Publication of CN112308984B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The invention provides an image stitching method, system and device. A three-dimensional mathematical model is constructed, and at least two acquired initial images are mapped into it to form at least two converted images, wherein each pair of adjacent converted images has overlapping areas whose image content is identical and which overlap each other, one side of each overlapping area is provided with a first overlapping area whose fusion weight is smaller than a preset fusion weight threshold, and the two first overlapping areas respectively located in the two adjacent converted images are arranged in correspondence with each other. The feature points located in the first overlapping areas are removed, which reduces the interference of these low-fusion-weight feature points with the fusion result and prevents distortion of the stitched image caused by unsmooth fusion.

Description

Vehicle-mounted image stitching method, system and device
Technical Field
The invention relates to the field of image stitching, in particular to a vehicle-mounted image stitching method, system and device.
Background
With the popularization of automobiles, more and more cars enter ordinary households; people's living and consumption levels keep rising, the number of automobiles keeps increasing, and the demand for intelligence in vehicle electronics grows accordingly. ADAS (Advanced Driver Assistance Systems) and 360-degree panoramic imaging are important configurations of high-end intelligent vehicles. The vehicle-mounted 3D panoramic system uses wide-angle cameras arranged around the vehicle to reconstruct the vehicle and its surrounding scene and generate a vehicle-mounted panoramic image. By observing the panoramic image, the driver can park safely, avoid obstacles and eliminate blind spots, thereby achieving the goal of safe driving.
The concept of a vehicle-mounted surround-view system was first proposed by K. Kato et al. in 2006. Various active safety techniques, such as lane detection, parking-space detection and tracking, parking assistance, and moving-object detection, were subsequently applied to vehicle-mounted surround-view systems. Byeongchaen Jeon et al. proposed a solution for a high-resolution panoramic surround-view system in 2015. These schemes all use multiple cameras to model the actual scene, producing visual effects that include 2D and pseudo-3D. The number of cameras is determined by the vehicle type; a typical passenger car is modeled with four fisheye cameras. The final goal is to unify the images of the multiple cameras under the same visual coordinate system, forming a complete field of view for the driver to observe the situation around the vehicle.
When images are stitched in an existing vehicle-mounted surround-view system, if the scenes in the adjacent overlapping areas to be stitched have a large difference in depth of field, a single global homography matrix can hardly align objects at different depths. The fusion result is therefore not smooth, and the generated stitched image is distorted.
Disclosure of Invention
The object of the present invention is to provide a vehicle-mounted image stitching method, system and device, so as to solve the problem that, when images are stitched by an existing vehicle-mounted image stitching method, the fusion result is not smooth enough and the generated stitched image is distorted.
In order to solve the above problems, the present invention provides a vehicle-mounted image stitching method, including:
acquiring at least two initial images;
constructing a three-dimensional mathematical model with a world coordinate system, and sequentially mapping the at least two initial images into the three-dimensional mathematical model to form at least two adjacent converted images, wherein each pair of adjacent converted images has overlapping areas whose image content is identical and which overlap each other, one side of each overlapping area is provided with a first overlapping area whose fusion weight is smaller than a preset fusion weight threshold, and the two first overlapping areas respectively located in the two adjacent converted images are arranged in correspondence with each other;
extracting a plurality of feature points located in the overlapping region, and removing the feature points located in the first overlapping region to update the converted image;
and fusing overlapping areas of two adjacent updated conversion images to generate a spliced image.
Optionally, before removing the feature points located in the first overlapping region, the vehicle-mounted image stitching method further includes:
mapping the at least two adjacent converted images to form at least two adjacent top-view images, wherein the overlapping areas form top-view overlapping areas after mapping, one side of each top-view overlapping area is provided with a first top-view overlapping area whose fusion weight is smaller than the preset fusion weight threshold, and the two first top-view overlapping areas respectively located in the two adjacent top-view images are arranged in correspondence with each other; and,
the method for updating the converted image comprises the following steps:
extracting a plurality of top-view feature points of the top-view overlapping area, and removing the top-view feature points located in the first top-view overlapping area;
and updating the top-view image according to the top-view feature points remaining in the top-view overlapping area, and mapping the updated top-view image into the three-dimensional mathematical model so as to update the converted image.
Optionally, the method for removing the top-view feature points located in the first top-view overlapping area includes:
screening the top-view feature points of the top-view overlapping region by using a mask map to remove the top-view feature points located in the first top-view overlapping region, wherein the mask map comprises a feature point retaining region and a feature point removing region.
Optionally, the method for screening the plurality of top-view feature points of the top-view overlapping area by using a mask map includes:
aligning the mask map and the top-view overlap region such that the first top-view overlap region is located within a coordinate range of the feature point removal region;
and removing the overlook characteristic points in the coordinate range of the characteristic point removing area so as to remove the overlook characteristic points in the first overlook overlapping area.
Optionally, the method for updating the top view image according to the top view feature points remaining in the top view overlapping area includes:
matching the plurality of remaining top-view feature points which are respectively located in the top-view overlapping areas of two adjacent top-view images, so as to obtain a plurality of matching feature point pairs;
and obtaining a homography matrix according to the plurality of matching feature point pairs, and updating the top-view image according to the homography matrix.
Optionally, the method for matching the plurality of remaining top-view feature points, which are respectively located in the top-view overlapping areas of the two adjacent top-view images, includes:
respectively calculating a plurality of feature descriptors corresponding to the top-view feature points located in the top-view overlapping areas of the two adjacent top-view images;
and measuring the similarity of the plurality of feature descriptors respectively located in the top-view overlapping areas of the two adjacent top-view images, and matching the remaining top-view feature points according to the similarity of the feature descriptors.
Optionally, after the plurality of matching feature point pairs are obtained, the method further includes:
adding constraint feature points at the other side of the top-view overlapping area of each of the two adjacent top-view images, wherein the constraint feature points respectively located in the two adjacent top-view images form constraint feature point pairs;
and updating the homography matrix according to the matching feature point pairs and the constraint feature point pairs, and updating the top-view image according to the updated homography matrix.
Optionally, the method for adding the constraint feature points includes:
obtaining the coordinates at which the constraint feature points are to be added in the top-view overlapping area;
and setting the constraint feature points at those coordinates in the top-view overlapping area.
Optionally, the method for obtaining the coordinates of the constraint feature points includes:
obtaining the coordinates of the constraint feature points to be added in one of the two adjacent top-view images according to the width of the top-view overlapping area in that image and the number of constraint feature points to be added;
and obtaining the coordinates of the constraint feature points to be added in the other of the two adjacent top-view images according to the coordinates obtained in the first image and the homography matrix.
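A minimal sketch of the constraint-point construction described in the last two claims. It assumes, purely for illustration, that the points are spaced evenly across the width of the overlapping area along its far side (taken here as the line y = 0) and that the counterpart in the adjacent image is obtained through the homography matrix:

```python
import numpy as np

def constraint_pairs(width, n, h):
    """Evenly place n constraint feature points along the far side of
    the top-view overlapping area of one image (taken as the line
    y = 0, an assumption), and map each through the homography h to
    obtain its counterpart in the adjacent top-view image."""
    pairs = []
    for i in range(n):
        x = width * (i + 1) / (n + 1)          # even spacing over the width
        p1 = np.array([x, 0.0, 1.0])           # homogeneous coordinates
        p2 = h @ p1                            # counterpart in the other image
        pairs.append(((float(p1[0]), float(p1[1])),
                      (float(p2[0] / p2[2]), float(p2[1] / p2[2]))))
    return pairs
```

The resulting pairs can be appended to the matched feature point pairs before the homography is re-estimated, anchoring the side of the overlap where no reliable matches survive.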
In order to solve the above problems, the present invention provides a vehicle-mounted image stitching system, comprising:
the image acquisition module is used for acquiring at least two initial images;
the three-dimensional mathematical model construction module is used for constructing a three-dimensional mathematical model with a world coordinate system and sequentially mapping the at least two initial images into the three-dimensional mathematical model to form at least two adjacent converted images, wherein each pair of adjacent converted images has overlapping areas whose image content is identical and which overlap each other, one side of each overlapping area is provided with a first overlapping area whose fusion weight is smaller than a preset fusion weight threshold, and the two first overlapping areas respectively located in the two adjacent converted images are arranged in correspondence with each other;
the data processing module is used for extracting a plurality of characteristic points positioned in the overlapping area and removing the characteristic points positioned in the first overlapping area so as to update the converted image;
and the image stitching module is used for fusing overlapping areas of two adjacent updated conversion images to generate stitched images.
In order to solve the problems, the invention provides a vehicle-mounted image stitching device, which comprises a central control host and a vehicle-mounted image stitching system;
the image acquisition module comprises image acquisition equipment which is connected with a central control host, and transmits an acquired initial image to the central control host for image processing so as to finish image stitching;
the three-dimensional mathematical model building module, the data processing module and the image splicing module are arranged in the central control host.
According to the image stitching method described above, the at least two acquired initial images are mapped into a three-dimensional mathematical model to form at least two converted images, wherein each pair of adjacent converted images has overlapping areas whose image content is identical and which overlap each other, one side of each overlapping area is provided with a first overlapping area whose fusion weight is smaller than a preset fusion weight threshold, and the two first overlapping areas respectively located in the two adjacent converted images are arranged in correspondence with each other. The feature points located in the first overlapping areas are removed, which reduces the interference of these low-fusion-weight feature points with the fusion result and prevents distortion of the stitched image caused by unsmooth fusion.
Drawings
FIG. 1 is a flow chart of a method of stitching in-vehicle images in an embodiment of the invention;
FIG. 2 is a schematic diagram of a construction equation of a three-dimensional mathematical model established in a vehicle-mounted image stitching method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional mathematical model established in a vehicle-mounted image stitching method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a mask diagram in a vehicle-mounted image stitching method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an in-vehicle image stitching system in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an in-vehicle image stitching device in an embodiment of the present invention;
reference numerals
A1-bowl edge; a2-bowl bottom;
1-an image acquisition module; 2-a three-dimensional mathematical model building module;
3-a data processing module; 4-an image stitching module;
10-mask map; 101-a feature point reservation region;
102-a feature point removal area;
100-central control host.
Detailed Description
The vehicle-mounted image stitching method, system and device provided by the present invention are further described in detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a greatly simplified form and are not drawn to precise scale; they merely serve to illustrate the embodiments of the invention conveniently and clearly. Furthermore, the structures shown in the drawings are often only part of the actual structures, and different drawings emphasize different aspects of the embodiments. Fig. 1 is a flowchart of a vehicle-mounted image stitching method according to an embodiment of the present invention. As shown in fig. 1, the vehicle-mounted image stitching method of this embodiment includes the following steps S10 to S40.
In step S10, at least two initial images are acquired. In this step, at least two initial images of the surroundings of the vehicle are acquired by at least two image acquisition apparatuses.
The at least two image acquisition devices may be fisheye cameras. For example, in a specific embodiment, four fisheye cameras may be provided, disposed respectively at the front, rear, left and right of the vehicle body, for example at the head, the tail, the left rear-view mirror and the right rear-view mirror, so as to capture images of the area around the vehicle in real time. The image content of the at least two initial images acquired around the vehicle may include a ground part and an above-ground part: the ground part may include zebra crossings, road edges and the like, and the above-ground part may include pedestrians, surrounding vehicles, traffic lights and the like.
In step S20, a three-dimensional mathematical model with a world coordinate system is constructed, and the at least two initial images are mapped into the three-dimensional mathematical model in sequence to form at least two adjacent converted images. Each pair of adjacent converted images has overlapping areas whose image content is identical and which overlap each other; one side of each overlapping area is provided with a first overlapping area whose fusion weight is smaller than a preset fusion weight threshold, and the two first overlapping areas respectively located in the two adjacent converted images are arranged in correspondence with each other.
Fig. 2 is a schematic diagram of the construction equation of the three-dimensional mathematical model established in the vehicle-mounted image stitching method according to an embodiment of the present invention. Fig. 3 is a schematic diagram of the three-dimensional mathematical model established in the vehicle-mounted image stitching method according to an embodiment of the present invention. In this embodiment, as shown in figs. 2 and 3, the three-dimensional mathematical model is a three-dimensional bowl-shaped mathematical model whose construction equation is shown in fig. 2. X, Y and Z are the axes of the world coordinate system; the XOY plane represents the ground, O represents the geometric center of the projection of the vehicle on the ground, OY represents the forward direction of the vehicle, OZ represents the rotation axis, and the curve OR0P represents the generatrix. The bowl-shaped surface is formed by rotating the generatrix around the rotation axis, and the generatrix equation for constructing the three-dimensional bowl-shaped model is given by formula (1).
Here R0 denotes the radius of the bowl bottom A2. R0 is related to the vehicle size and is typically about 100 cm greater than half of the vehicle size; in this embodiment, R0 is between 250 cm and 350 cm, preferably 300 cm. The units of both the camera coordinate system and the world coordinate system are centimeters.
k is the adjustment coefficient of the bowl edge A1. In this embodiment, the relative size between the bowl edge A1 and the bowl bottom A2 is adjusted through k: the larger the value of k, the larger the area corresponding to the bowl edge A1. If the bowl edge A1 is too large the bowl bottom A2 becomes too small, and if the bowl bottom A2 is too large the bowl edge A1 becomes too small; either imbalance degrades the stitching effect, so k must be given a suitable range of values. In this embodiment, k ranges from 0.1 to 0.2, and is preferably 0.15.
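The generatrix equation of formula (1) is not reproduced in this text. Purely as an illustration, a minimal sketch of one common bowl-shaped surface, a flat bottom of radius R0 with a parabolic rim scaled by the adjustment coefficient k, can be written as follows; the exact generatrix used by the patent may differ.

```python
import numpy as np

def bowl_height(x, y, r0=300.0, k=0.15):
    """Height z of the bowl surface above ground point (x, y), in cm.

    Illustrative generatrix (NOT the patent's formula (1)): the bowl
    bottom A2 is flat within radius r0, and the bowl edge A1 rises as
    a parabola scaled by the adjustment coefficient k.
    """
    r = np.hypot(x, y)
    return np.where(r <= r0, 0.0, k * (r - r0) ** 2)
```

With r0 = 300 cm and k = 0.15 (the preferred values above), the surface is flat under and around the vehicle, so ground texture stays undistorted, while more distant scenery is projected onto the rising bowl edge.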
Furthermore, in the present embodiment, after mapping at least two of the initial images into the bowl-shaped three-dimensional mathematical model to form at least two converted images, the method further comprises:
mapping the at least two adjacent converted images to form at least two adjacent top-view images, wherein the overlapping areas form top-view overlapping areas after mapping, one side of each top-view overlapping area is provided with a first top-view overlapping area whose fusion weight is smaller than the preset fusion weight threshold, and the two first top-view overlapping areas respectively located in the two adjacent top-view images are arranged in correspondence with each other.
Specifically, in this embodiment, four initial images may be acquired, captured by image acquisition devices arranged in sequence at the front, the right rear-view mirror, the rear and the left rear-view mirror of the vehicle. The four initial images acquired in sequence are mapped to form four converted images distributed around the vehicle; named according to the positions of the image acquisition devices around the vehicle, these are the front-view converted image, the right-view converted image, the rear-view converted image and the left-view converted image.
In this embodiment, the front-view converted image and the right-view converted image are adjacent, and their adjacent regions have overlapping areas whose image content is identical and which overlap each other. Since the front-view converted image is generally held fixed during image fusion while the right-view converted image is adjusted toward it, the fusion weight of the right-view converted image is relatively low on the side of the overlapping area near the center of the front-view converted image. In this embodiment, the region with the lower fusion weight, that is, the region whose fusion weight is smaller than the preset fusion weight threshold, is defined as the first overlapping area; the correspondingly arranged side of the overlapping area of the front-view converted image near its center is likewise provided with a first overlapping area.
Of course, in an alternative embodiment, the right-view converted image may be fixed instead and the front-view converted image adjusted toward it, so that the fusion weight of the front-view converted image is lower on the side of the overlapping area near the center of the right-view converted image. In that case the first overlapping area of the front-view converted image lies on its side near the central region of the right-view converted image, and the correspondingly arranged first overlapping area of the right-view converted image lies on the side near the center of the right-view converted image. That is, in this embodiment, the position of the first overlapping area is not fixed but depends on the fusion strategy used in the actual stitching and fusion; it is not unduly limited here. In addition, in this embodiment, the preset fusion weight threshold is 0.1.
Further, in this embodiment, after the front-view converted image and the right-view converted image are formed, the method further includes:
mapping the front-view converted image and the right-view converted image to form a front top-view image and a right top-view image, wherein the overlapping areas in the front-view converted image and the right-view converted image are mapped to form top-view overlapping areas, and the first overlapping areas are mapped to form first top-view overlapping areas which are located in the front top-view image and the right top-view image respectively and are arranged in correspondence with each other.
Further, as shown in fig. 1, in step S30, a plurality of feature points located in the overlapping region are extracted, and the feature points located in the first overlapping region are removed to update the converted image.
In this embodiment, the extraction of the plurality of feature points of the overlapping region is performed in the top-view images formed after mapping. The method for updating the converted image therefore comprises the following steps one and two.
In the first step, a plurality of top-view feature points of the top-view overlapping region are extracted, and the top-view feature points located in the first top-view overlapping region are removed.
In this embodiment, the feature points are points or blocks on the top-view image that contain rich local information; they typically occur at corners and in regions where the texture changes drastically. In this embodiment, a feature extraction algorithm such as SIFT, SURF, ORB or AKAZE is used to extract the feature points located in the top-view overlapping region.
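The patent leaves the choice of detector open (SIFT, SURF, ORB, AKAZE), and none of those is reimplemented here. As a self-contained illustration of what "corner-like points with rich local information" means, the following sketch implements a minimal Harris-style corner detector in NumPy; the window size, k and threshold values are illustrative assumptions.

```python
import numpy as np

def harris_corners(img, k=0.04, rel_thresh=0.01):
    """Minimal Harris-style corner detector: returns (x, y) locations
    whose corner response exceeds rel_thresh times the maximum."""
    gy, gx = np.gradient(img.astype(float))          # image gradients
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    def box3(a):                                     # 3x3 box filter
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)
    resp = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    ys, xs = np.where(resp > rel_thresh * resp.max())
    return list(zip(xs.tolist(), ys.tolist()))
```

On a synthetic white square the response peaks near the four corners, which is exactly the kind of point the stitcher needs; edges and flat regions score low or negative.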
Fig. 4 is a schematic diagram of a mask diagram in a vehicle-mounted image stitching method according to an embodiment of the present invention. As shown in fig. 4, in this embodiment, the method for removing the top-view feature point located in the first top-view overlapping region includes: screening a plurality of the top-view feature points of the top-view overlapping region using a mask map 10 to remove the top-view feature points located in the first top-view overlapping region, wherein the mask map 10 includes a feature point retaining region 101 and a feature point removing region 102.
In this embodiment, the method for screening the plurality of top-view feature points of the top-view overlapping region using the mask map 10 includes the following first and second steps.
In the first step, the mask map 10 and the top-view overlapping area are aligned such that the first top-view overlapping area falls within the coordinate range of the feature point removal region 102.
In this embodiment, the feature point retaining area 101 located in the mask map 10 may have a fan shape. With continued reference to fig. 4, the mask map 10 in this embodiment may be calculated according to the following formula (2).
In formula (2), I is the mask image pixel value; Wi is the width of the top-view overlapping area; X is the distance of each point on the mask map 10 in the X direction; Y is the distance of each point on the mask map 10 in the Y direction; and θt is the angle of the feature point retaining region 101 of the mask map 10. In this embodiment, θt is between 50° and 70°, and preferably 60°.
In a second step, the top-view feature points located within the coordinate range of the feature point removal region 102 are removed to remove the top-view feature points located in the first top-view overlap region.
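Formula (2) for the mask pixel values is not reproduced in this text. As an assumption, the fan-shaped retaining region can be modelled by a polar-angle test: a point (x, y), with the origin at the corner of the overlapping area, is retained when its angle lies within θt and it falls inside the area width, and is otherwise treated as lying in the removal region. A sketch of filtering the top-view feature points under that assumption:

```python
import numpy as np

def filter_top_view_points(points, width, theta_t_deg=60.0):
    """Keep only top-view feature points inside a fan-shaped retaining
    region of opening angle theta_t (degrees); all other points are
    treated as lying in the removal region and dropped. This is an
    illustrative stand-in for the patent's mask formula (2)."""
    kept = []
    for x, y in points:
        ang = np.degrees(np.arctan2(y, x))     # polar angle of the point
        if 0.0 <= ang <= theta_t_deg and 0.0 <= x <= width:
            kept.append((x, y))
    return kept
```

In practice the same test would be rasterized once into the mask map 10, so that screening a point reduces to one pixel lookup.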
In the second step, the top-view image is updated according to the top-view feature points remaining in the top-view overlapping area, and the updated top-view image is mapped into the three-dimensional mathematical model so as to update the converted image.
In this embodiment, the method for updating the top view image according to the top view feature points remaining in the top view overlapping region includes the following steps.
First, the plurality of remaining top-view feature points respectively located in the top-view overlapping areas of two adjacent top-view images are matched to obtain a plurality of matching feature point pairs (P1, P2).
The method for matching the remaining top-view feature points comprises the following steps one and two.
In the first step, a plurality of feature descriptors corresponding to the top-view feature points are respectively calculated, wherein the feature descriptors are located in the top-view overlapping areas of two adjacent converted images.
In this embodiment, the plurality of feature descriptors are encoded according to a certain rule, so that the extracted top-view feature points have invariance such as illumination, rotation, and size. In this embodiment, the AKAZE algorithm may be used in the method for calculating the feature descriptors.
In the second step, the similarity of a plurality of feature descriptors respectively located in the overlook overlapped areas of two adjacent transformation images is measured, and according to the similarity of the feature descriptors, the rest of overlook feature points are matched respectively located in the overlook overlapped areas of two adjacent overlook images.
In this embodiment, the distance between feature vectors is used to measure the similarity of a plurality of feature descriptors adjacent to the top view image. If the S I FT and SURF algorithms are adopted in feature extraction, measuring the similarity of feature descriptors of the overlook overlapping areas of adjacent overlook images by using an L1 or L2 distance algorithm; if feature extraction is performed, performing a measurement of similarity of feature descriptors of the top-view overlapping region adjacent to the top-view image by using an ORB and AKAZE algorithm by using a Hamming distance algorithm.
In this embodiment, according to the similarity of the feature descriptors, the remaining top-view feature points located in the top-view overlapping regions of two adjacent top-view images may be matched by brute-force search or by nearest-neighbor search.
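As a concrete illustration of the matching step, the following minimal Python sketch (not from the patent; the function names and the `max_dist` threshold are illustrative assumptions) matches binary descriptors such as those produced by ORB or AKAZE by brute-force search under the Hamming distance:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def brute_force_match(desc_a, desc_b, max_dist=64):
    """For each descriptor in desc_a, find the closest descriptor in desc_b.

    Returns (index_a, index_b) pairs whose Hamming distance is below
    max_dist; a real pipeline would also apply cross-checking or a
    ratio test before accepting a pair.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = [hamming(da, db) for db in desc_b]
        j = min(range(len(desc_b)), key=dists.__getitem__)
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```

In practice a library matcher would be used; this sketch only shows the exhaustive search and the distance metric named above.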
Secondly, a homography matrix is obtained from the plurality of matching feature point pairs (P1, P2), and the top-view image is updated according to the homography matrix.
In this embodiment, a homography matrix is calculated from the matching feature point pairs (P1, P2). Wherein the formula for calculating the homography matrix is shown in the following formula (3).
In the present embodiment, formula (3) above is applied to the plurality of matching feature point pairs (P1, P2) to establish a system of linear equations, and the homography matrix H is then solved by the least-squares method. Finally, the top-view image is updated according to the homography matrix H.
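Formula (3) is not reproduced in the text; it presumably expresses the standard planar homography relation [x2, y2, 1]^T ~ H·[x1, y1, 1]^T. Under that assumption, the linear system and its least-squares solution can be sketched as follows (illustrative code, not the patent's implementation):

```python
import numpy as np

def estimate_homography(pts1, pts2):
    """Estimate H such that [x2, y2, 1]^T ~ H [x1, y1, 1]^T (DLT).

    Builds the homogeneous linear system A h = 0 from the matched
    pairs and solves it in the least-squares sense via SVD; at least
    four non-degenerate pairs are required.
    """
    rows = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        rows.append([-x1, -y1, -1, 0, 0, 0, x2 * x1, x2 * y1, x2])
        rows.append([0, 0, 0, -x1, -y1, -1, y2 * x1, y2 * y1, y2])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)    # right singular vector of smallest value
    return H / H[2, 2]          # normalize so H[2, 2] == 1
```

For well-conditioned coordinates a normalization step (shifting and scaling the points before the SVD) is usually added; it is omitted here for brevity.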
Further, after the top-view image is updated and the updated top-view image is mapped into the three-dimensional mathematical model, the method for updating the converted image includes the following first to third steps.
In the first step, the coordinates of all the sampling points P1 in the three-dimensional mathematical model that correspond to the top-view points Pt1 on the top-view image are obtained.
In this embodiment, the coordinates P1(x1, y1, z1) of the sampling point P1 corresponding to a top-view point Pt1 are obtained from the generatrix equation of the three-dimensional mathematical model and the coordinates Pt1(x1, y1) of the top-view point Pt1.
In the second step, a plurality of first target images corresponding to the plurality of sampling points P1 are obtained according to the coordinates of the plurality of sampling points P1.
In this embodiment, the method for acquiring the first target images corresponding to the sampling points P1 from their coordinates is as follows: based on the coordinates P1(x1, y1, z1) of the sampling points P1, the texture coordinates Te(u, v) corresponding to each sampling point P1 are calculated, and a first lookup table (LUT1) is finally generated. In this embodiment, the texture coordinates Te(u, v) are the coordinates obtained when a sampling point P1 is converted from the world coordinate system into the coordinate system of the image acquisition apparatus.
The method for calculating the texture coordinates Te(u, v) corresponding to a sampling point P1 includes the following steps.
Firstly, the intrinsic and extrinsic parameters of the image acquisition apparatus can be obtained by calibration. For a sampling point P1(x1, y1, z1) in the world coordinate system, the coordinates of the corresponding initial sampling point Vc in the coordinate system of the image acquisition apparatus can be calculated by formula (4).
Vc = R·P1 + T    (4)
where R and T are the rotation matrix and the translation vector in the extrinsic parameters of the image acquisition apparatus, respectively.
Next, the texture coordinates Te (u, v) are calculated from the imaging model of the image acquisition apparatus.
In this embodiment, the image acquisition apparatus is a fisheye camera, and the texture coordinates Te (u, v) are calculated from an imaging model of the fisheye camera. Wherein the imaging model calculation formula is shown in the following formula (5).
where k1, k2, k3, and k4 are the distortion coefficients in the intrinsic parameters of the fisheye camera, fx and fy are the focal lengths of the fisheye camera, and (cx, cy) is the position of its optical center.
Finally, the first lookup table (LUT1) is searched to obtain the image content corresponding to each top-view point Pt1, where the image content corresponding to a top-view point Pt1 is the first target image corresponding to the sampling point P1.
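Formula (5) is likewise not reproduced in the text; the sketch below assumes the common equidistant fisheye model with four distortion coefficients k1 to k4 (as used, for example, in OpenCV's fisheye module). The function and parameter names are illustrative assumptions, not the patent's implementation:

```python
import math

def fisheye_project(Vc, fx, fy, cx, cy, k):
    """Project a camera-frame point Vc = (X, Y, Z) to texture coordinates.

    Assumes the equidistant fisheye model: the incidence angle theta is
    distorted by an odd polynomial in theta before being mapped to
    pixel coordinates with focal lengths (fx, fy) and optical center
    (cx, cy).
    """
    X, Y, Z = Vc
    r = math.hypot(X, Y)
    theta = math.atan2(r, Z)                    # angle from the optical axis
    k1, k2, k3, k4 = k
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                         + k3 * theta**6 + k4 * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0   # on-axis: no radial offset
    u = fx * X * scale + cx
    v = fy * Y * scale + cy
    return u, v
```

Evaluating this function over every sampling point P1 (after applying formula (4)) is what populates the first lookup table (LUT1).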
In the third step, the converted image is updated based on the plurality of first target images.
Further, in this embodiment, after the plurality of matching feature point pairs are obtained, the method also includes: adding constraint feature points on the other side of the top-view overlapping region of each of the two adjacent top-view images, where the constraint feature points located in the two adjacent top-view images form constraint feature point pairs; updating the homography matrix according to the matching feature point pairs and the constraint feature point pairs, and updating the top-view image according to the updated homography matrix. Finally, the updated top-view image is mapped into the three-dimensional mathematical model to update the converted image.
Specifically, take adjacent front-view and right-view converted images as an example. During fusion, the front-view converted image is usually held fixed and the right-view converted image is adjusted toward it, so the fusion weight of the right-view converted image is relatively low on the side of the overlapping region near the center of the front-view converted image. Correspondingly, on the side of the overlapping region of the right-view and front-view converted images far from the center of the front-view converted image, the fields of view overlap insufficiently, the fusion weight is relatively low, and the number of matching feature point pairs that survive screening is insufficient. Therefore, constraint feature points can be placed on the side of that overlapping region far from the center of the front-view converted image to form constraint feature point pairs, thereby increasing the number of matching feature point pairs, improving the smoothness of fusion, and reducing distortion in the stitched image.
Of course, in an alternative embodiment, the right-view converted image may be held fixed and the front-view converted image adjusted toward it, in which case the overlap of the overlapping region of the right-view and front-view converted images near the center of the front-view converted image is insufficient, the fusion weight is low, and the number of matching feature point pairs that survive screening is insufficient. A constraint feature point pair can therefore be placed on the side of that overlapping region near the center of the front-view converted image, further reducing distortion in the stitched image. That is, in this embodiment the positions of the constraint feature point pairs are not fixed; they depend on the fusion strategy actually used, and no undue limitation is made here. Optionally, the constraint feature point pairs are added in a region on the other side of the top-view overlapping region of adjacent top-view images whose fusion weight is smaller than a preset fusion weight threshold, for example a region whose fusion weight is smaller than 0.1.
In addition, in this embodiment, if the overlapping region where the constraint feature points need to be added contains imagery visible to the human eye, a plurality of all-black and all-white images may be set as the constraint feature points, the all-black images being added at the dark positions of the top-view overlapping regions of the adjacent top-view images and the all-white images at the white positions.
Further, the method for adding the constraint characteristic points comprises the following steps one to two.
In step one, the coordinates of the constraint feature points to be added in the top-view overlapping region are obtained. The method of obtaining these coordinates comprises the following steps.
First, the coordinates of the constraint feature points to be added to one of the two adjacent top-view images are obtained according to the width of its top-view overlapping region and the number of constraint feature points to be added.
In this embodiment, taking the case of adjusting the right-view converted image toward the front-view converted image as an example, this step obtains the coordinates at which the constraint feature points are to be added in the right-view top-view image. The coordinates K(x_k, 0) of the k-th constraint feature point to be added to the right-view top-view image can be calculated according to formula (6) below.
where width is the width of the top-view overlapping region image and n is the number of constraint feature points; n is between 10 and 20, and in this embodiment, preferably, n = 15. The index k takes values from 1 to n.
Secondly, the coordinates of the constraint feature points to be added to the other of the two adjacent top-view images are obtained according to the coordinates of the constraint feature points to be added to the first top-view image and the homography matrix. In this step, the coordinates K(x_k', y_k') of the k-th constraint feature point of the front-view top-view image can be calculated according to formula (7) below, where H is the homography matrix corresponding to the right-view top-view image.
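Since formulas (6) and (7) are not reproduced in the text, the sketch below assumes an even subdivision of the overlap width for formula (6) and the standard homogeneous point transform for formula (7); the spacing rule and function names are illustrative assumptions, not the patent's exact formulas:

```python
def constraint_points(width, n=15):
    """Place n candidate constraint points along the overlap width.

    An even subdivision x_k = width * k / (n + 1), y = 0 is assumed
    here for formula (6).
    """
    return [(width * k / (n + 1), 0.0) for k in range(1, n + 1)]

def apply_homography(H, pt):
    """Map a point (x, y) with a 3x3 homography H, as in formula (7)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

The points from `constraint_points` would be placed in the right-view top-view image, and `apply_homography` would map each one into the front-view top-view image to complete the constraint feature point pair.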
In addition, in an alternative embodiment, the right-view converted image may be stretched toward the front-view converted image so that the overlapping regions of the adjacent right-view and front-view converted images coincide and their image contents overlap, which addresses the ghosting that can appear when the front-view and right-view converted images are fused. When the right-view top-view image needs to be stretched in this way, the homography matrix corresponding to the right-view top-view image can be calculated according to formula (8) below.
where Cx is half the width of the top-view projection image, Cy is half its height, and θ is the rotation angle. In this embodiment, θ is 2.5°.
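Formula (8) is not reproduced in the text; for a rotation by θ about the image center (Cx, Cy), the standard composition T(Cx, Cy)·R(θ)·T(−Cx, −Cy) is assumed in the sketch below (an illustrative reconstruction, not the patent's exact formula):

```python
import math

def rotation_about_center(Cx, Cy, theta_deg):
    """Homography rotating the image plane by theta around (Cx, Cy).

    Composed as translate(Cx, Cy) @ rotate(theta) @ translate(-Cx, -Cy),
    expanded into a single 3x3 matrix.
    """
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    return [
        [c, -s, Cx - c * Cx + s * Cy],
        [s,  c, Cy - s * Cx - c * Cy],
        [0.0, 0.0, 1.0],
    ]
```

With θ = 2.5° as in this embodiment, applying the resulting matrix leaves the image center fixed while rotating the surrounding content slightly toward the front-view converted image.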
In a second step, the constraint feature points are set at coordinate locations within the top-view overlap region.
In step S40, the overlapping regions of adjacent updated converted images are fused to generate a stitched image. The method of fusing the overlapping regions of adjacent updated converted images is conventional and is not described in detail here.
Fig. 5 is a schematic diagram of an in-vehicle image stitching system in an embodiment of the present invention. As shown in fig. 5, the present embodiment further discloses a vehicle-mounted image stitching system, which includes:
an image acquisition module 1 for acquiring at least two initial images.
The three-dimensional mathematical model construction module 2 is configured to construct a three-dimensional mathematical model with a world coordinate system, map at least two initial images into the three-dimensional mathematical model in sequence to form at least two adjacent converted images, wherein the two adjacent converted images respectively have overlapping areas with identical image contents and overlapping, one side of each overlapping area is provided with a first overlapping area with fusion weight smaller than a preset fusion weight threshold, and the two first overlapping areas respectively located in the two adjacent converted images are correspondingly arranged.
And the data processing module 3 is used for extracting a plurality of characteristic points in the overlapping area and removing the characteristic points in the first overlapping area so as to update the converted image.
And the image stitching module 4 is used for fusing overlapping areas of two adjacent updated converted images to generate a stitched image.
Fig. 6 is a schematic diagram of an in-vehicle image stitching device in an embodiment of the present invention. As shown in fig. 6, this embodiment also provides a vehicle-mounted image stitching device, which includes a central control host 100 and the above-mentioned vehicle-mounted image stitching system. The image acquisition device is connected to the central control host 100 and transmits the acquired initial images to the central control host 100 for image processing so as to complete image stitching. The three-dimensional mathematical model construction module 2, the data processing module 3, and the image stitching module 4 are located in the central control host 100.
In the present embodiment, the image capturing apparatuses 1 are installed around the vehicle and may be fisheye cameras, four in number, installed at the front, rear, left, and right of the vehicle body, respectively.
The above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the appended claims.

Claims (7)

1. The vehicle-mounted image stitching method is characterized by comprising the following steps of:
acquiring at least two initial images;
constructing a three-dimensional mathematical model with a world coordinate system, and sequentially mapping at least two initial images into the three-dimensional mathematical model to form at least two adjacent converted images, wherein the two adjacent converted images respectively have overlapping areas with identical image contents and overlapping, one side of each overlapping area is provided with a first overlapping area with fusion weight smaller than a preset fusion weight threshold, and the first overlapping area in one converted image and the first overlapping area in the other converted image in the two adjacent converted images are correspondingly arranged;
extracting a plurality of feature points located in the overlapping region, and removing the feature points located in the first overlapping region to update the converted image;
fusing overlapping areas of two adjacent updated conversion images to generate a spliced image;
the vehicle-mounted image stitching method further comprises the following steps of: mapping at least two adjacent converted images to form at least two adjacent overlook images, wherein the overlook overlapped areas are formed after being mapped, one side of each overlook overlapped area is provided with a first overlook overlapped area with fusion weight smaller than a preset fusion weight threshold, and the two first overlook overlapped areas respectively positioned in the two adjacent overlook images are correspondingly arranged; the method comprises the steps of,
the method for updating the converted image comprises the following steps: extracting a plurality of overlooking feature points of the overlooking overlapped region, and removing the overlooking feature points positioned in the first overlooking overlapped region;
updating the overlook image according to the overlook characteristic points remained in the overlook overlapping area, and mapping the updated overlook image into the three-dimensional mathematical model so as to update the conversion image;
the method for removing the top-view characteristic points in the first top-view overlapping area comprises the following steps: screening the plurality of top-view feature points of the top-view overlapping region by using a mask map to remove the top-view feature points located in the first top-view overlapping region, wherein the mask map comprises a feature point retaining region and a feature point removing region;
the method for screening the plurality of top-view feature points of the top-view overlapping area by using a mask map comprises the following steps: aligning the mask map and the top-view overlap region such that the first top-view overlap region is located within a coordinate range of the feature point removal region; removing the overlooking feature points located in the coordinate range of the feature point removing area to remove the overlooking feature points located in the first overlooking overlapping area;
the method for updating the top view image according to the top view feature points remained in the top view overlapping area comprises the following steps: matching a plurality of residual overlook characteristic points which are respectively positioned in the overlook overlapping areas of two adjacent overlook images so as to obtain a plurality of matching characteristic point pairs; and obtaining a homography matrix according to the plurality of matching characteristic point pairs, and updating the overlook image according to the homography matrix.
2. The vehicle-mounted image stitching method according to claim 1, wherein the method for matching the remaining plurality of top-view feature points, which are respectively located in the top-view overlapping areas of two adjacent top-view images, includes:
respectively calculating a plurality of feature descriptors corresponding to the overlooking feature points which are positioned in the overlooking overlapping areas of two adjacent converted images;
and measuring the similarity of a plurality of feature descriptors respectively positioned in the overlook overlapped areas of two adjacent conversion images, and matching the rest plurality of overlook feature points respectively positioned in the overlook overlapped areas of two adjacent overlook images according to the similarity of the feature descriptors.
3. The vehicle-mounted image stitching method according to claim 1, wherein after obtaining a plurality of matching feature point pairs, the method further comprises:
adding constraint characteristic points at the other sides of the overlook overlapping areas of the two adjacent overlook images respectively, wherein the constraint characteristic points respectively positioned in the two adjacent overlook images form constraint characteristic point pairs;
updating a homography matrix according to the matching characteristic point pairs and the constraint characteristic point pairs, and updating the overlook image according to the homography matrix after updating.
4. A vehicle-mounted image stitching method as claimed in claim 3, wherein the method of adding the constraint feature points comprises:
obtaining coordinates of the constraint characteristic points to be added in the overlook overlapping area;
and setting the constraint characteristic points at the coordinate positions in the overlook overlapping area.
5. The vehicle-mounted image stitching method according to claim 4, wherein the method of obtaining coordinates of the constraint feature points includes:
obtaining coordinates of the constraint feature points to be added in one of the adjacent overlook images according to the width of the overlook overlapping region in the one of the two adjacent overlook images and the number of the constraint feature points to be added;
and obtaining the coordinates of the constraint characteristic points to be added of the other adjacent overlook images according to the coordinates of the constraint characteristic points to be added of the other adjacent overlook images and the homography matrix.
6. A vehicle-mounted image stitching system, characterized by comprising:
the image acquisition module is used for acquiring at least two initial images;
the three-dimensional mathematical model construction module is used for constructing a three-dimensional mathematical model with a world coordinate system, at least two initial images are mapped into the three-dimensional mathematical model in sequence to form at least two adjacent converted images, the two adjacent converted images are respectively provided with overlapping areas with identical image contents and overlapping, one side of each overlapping area is provided with a first overlapping area with fusion weight smaller than a preset fusion weight threshold, and in the two adjacent converted images, the first overlapping area in one converted image is correspondingly arranged with the first overlapping area in the other converted image;
mapping at least two adjacent converted images to form at least two adjacent overlook images, wherein the overlook overlapped areas are formed after being mapped, one side of each overlook overlapped area is provided with a first overlook overlapped area with fusion weight smaller than a preset fusion weight threshold, and the two first overlook overlapped areas respectively positioned in the two adjacent overlook images are correspondingly arranged;
the data processing module is used for extracting a plurality of characteristic points positioned in the overlapping area and removing the characteristic points positioned in the first overlapping area so as to update the converted image; updating the transformed image includes: extracting a plurality of overlooking feature points of the overlooking overlapped region, and removing the overlooking feature points positioned in the first overlooking overlapped region; updating the overlook image according to the overlook characteristic points remained in the overlook overlapping area, and mapping the updated overlook image into the three-dimensional mathematical model so as to update the conversion image;
removing the top-down feature points located in the first top-down overlap region includes: screening the plurality of top-view feature points of the top-view overlapping region by using a mask map to remove the top-view feature points located in the first top-view overlapping region, wherein the mask map comprises a feature point retaining region and a feature point removing region;
screening the plurality of top-view feature points of the top-view overlap region using a mask map includes: aligning the mask map and the top-view overlap region such that the first top-view overlap region is located within a coordinate range of the feature point removal region; removing the overlooking feature points located in the coordinate range of the feature point removing area to remove the overlooking feature points located in the first overlooking overlapping area;
the step of updating the top view image according to the top view feature points remaining in the top view overlapping region includes: matching a plurality of residual overlook characteristic points which are respectively positioned in the overlook overlapping areas of two adjacent overlook images so as to obtain a plurality of matching characteristic point pairs; acquiring a homography matrix according to a plurality of the matching characteristic point pairs, and updating the overlooking image according to the homography matrix;
and the image stitching module is used for fusing overlapping areas of two adjacent updated conversion images to generate stitched images.
7. A vehicle-mounted image stitching device, comprising a central control host and a vehicle-mounted image stitching system according to claim 6;
the image acquisition module comprises image acquisition equipment which is connected with a central control host, and transmits an acquired initial image to the central control host for image processing so as to finish image stitching;
the three-dimensional mathematical model building module, the data processing module and the image splicing module are arranged in the central control host.
CN202011211534.0A 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device Active CN112308984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011211534.0A CN112308984B (en) 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device


Publications (2)

Publication Number Publication Date
CN112308984A CN112308984A (en) 2021-02-02
CN112308984B true CN112308984B (en) 2024-02-02

Family

ID=74332668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011211534.0A Active CN112308984B (en) 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device

Country Status (1)

Country Link
CN (1) CN112308984B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791623A (en) * 2016-12-09 2017-05-31 深圳市云宙多媒体技术有限公司 A kind of panoramic video joining method and device
CN110517202A (en) * 2019-08-30 2019-11-29 的卢技术有限公司 A kind of vehicle body camera calibration method and its caliberating device
CN111223038A (en) * 2019-12-02 2020-06-02 上海赫千电子科技有限公司 Automatic splicing method and display device for vehicle-mounted all-around images
CN111783671A (en) * 2020-07-02 2020-10-16 郑州迈拓信息技术有限公司 Intelligent city ground parking space image processing method based on artificial intelligence and CIM

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017122294A1 (en) * 2016-01-13 2017-07-20 株式会社ソシオネクスト Surroundings monitoring apparatus, image processing method, and image processing program
CN107295256A (en) * 2017-06-23 2017-10-24 华为技术有限公司 A kind of image processing method, device and equipment
US10881371B2 (en) * 2018-12-27 2021-01-05 Medtronic Navigation, Inc. System and method for imaging a subject


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image splicing detection method based on particle swarm optimization (PSO) algorithm; G Ling, L Xiao, K Zou; 2016 ICME-MTC; full text *
An improved seam-line algorithm for stitching low-altitude aerial images of forest land; Zhang Fan, Fu Hui, Yang Gang; Journal of Beijing Forestry University (05); full text *
Research on an image composition model based on Bayesian maximum a posteriori estimation; Yang Lin, Xu Huiying, Wang Yanjie; Computer Science (06); full text *

Also Published As

Publication number Publication date
CN112308984A (en) 2021-02-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant