CN112308985B - Vehicle-mounted image stitching method, system and device - Google Patents

Vehicle-mounted image stitching method, system and device

Info

Publication number
CN112308985B
Authority
CN
China
Prior art keywords
image
initial
images
weight
adjacent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011211542.5A
Other languages
Chinese (zh)
Other versions
CN112308985A (en)
Inventor
何恒 (He Heng)
杨帆 (Yang Fan)
Current Assignee
Haowei Technology Wuhan Co ltd
Original Assignee
Haowei Technology Wuhan Co ltd
Priority date
Filing date
Publication date
Application filed by Haowei Technology Wuhan Co ltd filed Critical Haowei Technology Wuhan Co ltd
Priority to CN202011211542.5A priority Critical patent/CN112308985B/en
Publication of CN112308985A publication Critical patent/CN112308985A/en
Application granted granted Critical
Publication of CN112308985B publication Critical patent/CN112308985B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a vehicle-mounted image stitching method, an image stitching system and an image stitching device, which map at least two acquired initial images into a three-dimensional mathematical model to form at least two converted images, wherein the overlapping areas of two adjacent converted images overlap and their image contents are the same. One side of one of the two adjacent converted images is stretched toward the other until the image contents of the overlapping areas of the two adjacent converted images coincide. Then, according to the fusion weight map, the first weight value corresponding to each first sampling point in the overlapping area of one of the two adjacent converted images is looked up, and the first weight value is subtracted from 1 to obtain the second weight value corresponding to the second sampling point with the same target image content as the first sampling point in the overlapping area of the other of the two adjacent converted images. The weight sum of the same target image in the two adjacent converted images is thus 1 when the two converted images are fused, which prevents the problem of abnormal fusion brightness.

Description

Vehicle-mounted image stitching method, system and device
Technical Field
The invention relates to the field of image stitching, in particular to a vehicle-mounted image stitching method, system and device.
Background
With the popularization of automobiles, more and more cars are entering ordinary households. As living standards rise and the number of vehicles keeps growing, the demand for intelligence in in-vehicle electronics grows as well; ADAS (Advanced Driver Assistance Systems) and 360° panoramic imaging are important configurations of high-end smart vehicles. A vehicle-mounted 3D panoramic system uses wide-angle cameras arranged around the vehicle to reconstruct the vehicle and its surrounding scene and generate a vehicle-mounted panoramic image. By observing the panoramic image, the driver can park safely, avoid obstacles and eliminate blind spots, thereby achieving the goal of safe driving.
The concept of a vehicle-mounted surround-view system was first proposed by K. Kato et al. in 2006. Various active safety techniques, such as lane detection, parking-space detection and tracking, parking assistance, and moving-object detection, were subsequently applied to vehicle-mounted surround-view systems. Byeongchaen Jeon et al. proposed a solution for a high-resolution panoramic surround-view system in 2015. These schemes all use multiple cameras to model the actual scene, producing 2D and pseudo-3D visual effects. The number of cameras is determined by the actual vehicle type; a typical passenger car is modeled with four fisheye cameras. The final goal is to unify the images of the multiple cameras under the same visual coordinate system, forming a complete field of view for the driver to observe the situation around the vehicle.
When existing images are stitched, the images from adjacent cameras adopt a gradual-in gradual-out fusion strategy in the stitching fusion area, and this strategy is realized through the image blending function of OpenGL. The color blending formula in OpenGL is: Cresult = Fsrc × Csrc + Fdst × Cdst, where Csrc represents the source color from the texture; Cdst represents the target color stored in the buffer; Fsrc represents the source factor value, i.e., the fusion weight of the source color; and Fdst represents the target factor value, i.e., the fusion weight of the target color. The A-channel values of the source and target images are taken as the fusion weights in OpenGL. When image stitching is performed on the OpenGL platform and the ghosting problem that occurs when adjacent images are stitched is corrected by stretching one of the adjacent images toward the other, the texture coordinates of points on the stretched image are updated, but the fusion weight map corresponding to the stretched converted image is not updated. As a result, the weight sum of the same target image in the two adjacent converted images is no longer equal to 1, and the brightness of the two adjacent converted images is abnormal during fusion.
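A minimal numerical sketch (not from the patent) of why stale fusion weights cause the brightness anomaly described above: OpenGL's blend Cresult = Fsrc·Csrc + Fdst·Cdst preserves brightness only when the two weights sum to 1.

```python
def blend(c_src, c_dst, f_src, f_dst):
    """OpenGL-style blend: C_result = F_src * C_src + F_dst * C_dst."""
    return f_src * c_src + f_dst * c_dst

# Complementary weights (sum to 1): a uniform gray keeps its brightness.
same = blend(100.0, 100.0, 0.3, 0.7)   # ~100.0

# If the weight map is stale after stretching, the weights may no longer
# sum to 1 and the same gray region comes out visibly darker.
stale = blend(100.0, 100.0, 0.3, 0.5)  # ~80.0
```

With complementary weights the gray value is unchanged; with the stale pair (0.3, 0.5) the fused value drops by 20%, which is exactly the "abnormal fusion brightness" the method sets out to prevent.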
Disclosure of Invention
The invention aims to provide a vehicle-mounted image stitching method, a vehicle-mounted image stitching system and a vehicle-mounted image stitching device, so as to solve the problem of abnormal fusion brightness caused by mismatching of images in a fusion area and corresponding fusion weights when the images are stitched by the conventional vehicle-mounted image stitching method.
In order to solve the above problems, the present invention provides a vehicle-mounted image stitching method, including:
acquiring at least two initial images;
constructing a three-dimensional mathematical model with a world coordinate system, and sequentially mapping at least two initial images into the three-dimensional mathematical model to form at least two converted images, wherein overlapping areas of two adjacent converted images are overlapped and the image contents are the same;
obtaining a fusion weight map corresponding to each conversion image, and stretching one of two adjacent conversion images towards the other conversion image until the image contents of the overlapping areas of the two adjacent conversion images coincide;
searching a first weight value corresponding to each first sampling point in an overlapping area of one of the two adjacent converted images in the fusion weight map corresponding to the other one of the two adjacent converted images to calculate a second weight value of each second sampling point with the same content as the target image corresponding to the first sampling point in the overlapping area of the other one of the two adjacent converted images; wherein the sum of the first weight value and the second weight value is 1;
and fusing adjacent overlapping areas according to the first weight value and the second weight value to generate a spliced image.
Optionally, the method for obtaining the fusion weight map corresponding to each converted image includes:
and acquiring an initial fusion weight map according to the splicing seam corresponding to each initial image, and updating the initial fusion weight map to acquire the fusion weight map.
Optionally, the method for obtaining the initial fusion weight map according to the splice seam corresponding to each initial image includes:
acquiring the position of a splicing seam corresponding to each initial image;
obtaining an image area corresponding to the initial fusion weight map according to the splice joint position corresponding to the initial image;
and resetting the weight of each mapping point in the image area corresponding to the initial fusion weight map to be 1 so as to obtain the initial fusion weight map.
Optionally, the method for obtaining the image area corresponding to the initial fusion weight map according to the splice seam position corresponding to the initial image includes:
mapping the stitching seam corresponding to the initial image into the three-dimensional mathematical model, and forming an initial image area together with the model boundary and the model origin of the three-dimensional mathematical model;
and mapping each initial point in the initial image area into an image acquisition equipment model to obtain a plurality of mapping points, wherein the plurality of mapping points form an image area corresponding to the initial fusion weight map.
Optionally, the method for updating the initial fusion weight map to obtain the fusion weight map includes:
calculating, according to a weight calculation formula, the weights of the region of the initial fusion weight map corresponding to the overlapping area of the initial image, and updating the weight values of that region according to the computed overlap-region weights, so as to obtain the fusion weight map.
Optionally, the weight calculation formula is: W = 0.25θ + 0.5 − 0.25θs;
wherein θ is the included angle between the X-axis and the line connecting the origin of the three-dimensional mathematical model to a point in the region of the initial fusion weight map corresponding to the overlapping area of the initial image, and θs is the included angle between the stitching seam and the X-axis.
Optionally, the method for fusing adjacent overlapping areas according to the first weight value and the second weight value includes:
calculating the pixel values after fusing the adjacent overlapping areas according to the formula RGBresult = (1 − Adst) × RGBsrc + Adst × RGBdst, so as to fuse the adjacent overlapping areas;
wherein RGBresult represents the pixel value after fusing the adjacent overlapping areas, RGBsrc represents the pixel value of one of the two adjacent converted images, RGBdst represents the corresponding pixel value of the other of the two adjacent converted images, and Adst represents the second weight value.
Optionally, the vehicle-mounted image stitching method further includes:
rendering the converted images in a predetermined order prior to generating the stitched image;
or, while generating the stitched image, setting the rendering weight value of the stitched image to 1.
Optionally, the method for setting the rendering weight value of the stitched image to 1 includes:
calculating the rendering weight value according to the formula Aresult = 1 × Asrc + 0 × Adst, so that the rendering weight value is 1;
wherein Aresult represents the rendering weight value, Adst represents the first weight value, and Asrc represents the second weight value; the value of Asrc is 1 and the value of Adst is not 1.
Optionally, the order of rendering the converted images in the predetermined order is:
first rendering the part of the converted image whose corresponding first weight value is not 1, and then rendering the part of the converted image whose corresponding second weight value is 1.
In order to solve the above problems, the present invention also provides a vehicle-mounted image stitching system, comprising:
the image acquisition module is used for acquiring at least two initial images;
the three-dimensional mathematical model construction module is used for constructing a three-dimensional mathematical model with a world coordinate system, at least two initial images are sequentially mapped into the three-dimensional mathematical model to form at least two converted images, and overlapping areas of two adjacent converted images are overlapped and have the same image content;
the data processing module is used for obtaining a fusion weight graph corresponding to each conversion image, stretching one of the two adjacent conversion images towards the other until the image contents of the overlapping areas of the two adjacent conversion images coincide, and searching a first weight value corresponding to each first sampling point in the overlapping area of one of the two adjacent conversion images in the fusion weight graph corresponding to one of the two adjacent conversion images so as to calculate a second weight value of each second sampling point which is positioned in the overlapping area of the other of the two adjacent conversion images and has the same target image content corresponding to the first sampling point; wherein the sum of the first weight value and the second weight value is 1;
and the image splicing module is used for fusing the adjacent overlapping areas according to the first weight value and the second weight value so as to generate a spliced image.
In order to solve the problems, the invention also provides a vehicle-mounted image splicing device which comprises a central control host and the vehicle-mounted image splicing system;
the image acquisition module comprises image acquisition equipment which is connected with a central control host, and the acquired initial image is transmitted to the central control host for image processing so as to finish image stitching;
the three-dimensional mathematical model building module, the data processing module and the image splicing module are arranged in the central control host.
According to the image stitching method, at least two initial images are mapped into a three-dimensional mathematical model to form at least two converted images, and the overlapping areas of two adjacent converted images overlap and their image contents are the same. One side of one of the two adjacent converted images is stretched toward the other until the image contents of the overlapping areas of the two adjacent converted images coincide. Then, according to the fusion weight map, the first weight value corresponding to each first sampling point in the overlapping area of one of the two adjacent converted images is looked up, and the first weight value is subtracted from 1 to obtain the second weight value corresponding to the second sampling point with the same target image content as the first sampling point in the overlapping area of the other of the two adjacent converted images. Therefore, the weight sum of the same target image in the two adjacent converted images is 1 when they are fused, which prevents the problem of abnormal fusion brightness.
Drawings
FIG. 1 is a flow chart of a method of stitching in-vehicle images in an embodiment of the invention;
FIG. 2 is a schematic diagram of a construction equation of a three-dimensional mathematical model established in a vehicle-mounted image stitching method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional mathematical model established in a vehicle-mounted image stitching method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a fusion weight chart in a vehicle-mounted image stitching method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an in-vehicle image stitching system in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an in-vehicle image stitching device in an embodiment of the present invention;
reference numerals
A1-bowl edge; a2-bowl bottom;
1-an image acquisition module; 2-a three-dimensional mathematical model building module;
3-a data processing module; 4-an image stitching module;
100-a central control host; 200-on-board ethernet network.
Detailed Description
The vehicle-mounted image stitching method, system and device provided by the invention are described in further detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a greatly simplified form and not to precise scale, and are merely intended to aid in describing the embodiments of the invention conveniently and clearly. Furthermore, the structures shown in the drawings are often only part of the actual structures, and different drawings place their emphasis on illustrating different aspects of the embodiments.
Fig. 1 is a flowchart of a vehicle-mounted image stitching method according to an embodiment of the present invention. As shown in Fig. 1, the vehicle-mounted image stitching method of this embodiment includes the following steps S10 to S50.
In step S10, at least two initial images are acquired. In this step, the at least two initial images are acquired by at least two image acquisition devices and capture the area around the vehicle.
The at least two image capturing devices may be fisheye cameras, for example, in a specific embodiment, four fisheye cameras may be provided, and the four fisheye cameras are respectively disposed at front, rear, left and right positions of the vehicle body, for example, at a head, a tail, a left rearview mirror and a right rearview mirror of the vehicle body, so as to capture images of an area around the vehicle in real time. The image content of at least two initial images around the vehicle acquired by at least two image acquisition devices can comprise a ground part and an air part, the image of the ground part can comprise a pavement zebra crossing, a road edge and the like of the ground, and the image of the air part can comprise pedestrians, surrounding vehicles, traffic lights and the like.
In step S20, a three-dimensional mathematical model with a world coordinate system is constructed, at least two initial images are mapped into the three-dimensional mathematical model in sequence, so as to form at least two converted images, and overlapping areas of two adjacent converted images overlap and the image content is the same.
Fig. 2 is a schematic diagram of the construction equation of the three-dimensional mathematical model established in the vehicle-mounted image stitching method according to an embodiment of the present invention. Fig. 3 is a schematic diagram of the three-dimensional mathematical model established in the vehicle-mounted image stitching method according to an embodiment of the present invention. In this embodiment, as shown in Figs. 2 and 3, the three-dimensional mathematical model is a three-dimensional bowl-shaped mathematical model, whose construction equation is shown in Fig. 2. X, Y, Z form the world coordinate system, wherein the X0Y plane represents the ground surface, O represents the geometric center of the projection of the vehicle on the ground, OY represents the forward direction of the vehicle, OZ represents the rotation axis, and OR0P represents the generatrix. The bowl-shaped curved surface is formed by rotating the generatrix around the rotation axis, and the generatrix equation for constructing the three-dimensional bowl-shaped model is shown as formula (1).
Wherein R0 represents the radius of the bowl bottom A2. The radius R0 is related to the vehicle size and is typically about 100 cm greater than one half of the vehicle size. In this embodiment, R0 is 250 cm to 350 cm, and preferably R0 is 300 cm. The units of the camera coordinate system and the world coordinate system are cm.
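The generatrix equation (formula (1)) is referenced but not reproduced in this text, so the following sketch assumes a simple quadratic generatrix, z = K·(r − R0)² for r > R0, purely to illustrate how points on a flat-bottomed bowl surface with radius R0 and edge coefficient K could be sampled:

```python
import math

# Values taken from the embodiment; the generatrix itself is an assumption.
R0 = 300.0  # cm, preferred bowl-bottom radius
K = 0.15    # preferred bowl-edge adjustment coefficient

def bowl_point(r, phi):
    """Point (x, y, z) on the bowl surface at ground radius r and azimuth phi."""
    z = 0.0 if r <= R0 else K * (r - R0) ** 2  # flat bottom, rising bowl edge
    return (r * math.cos(phi), r * math.sin(phi), z)
```

Points with r ≤ R0 lie on the ground plane (the bowl bottom A2); beyond R0 the surface rises, and a larger K makes the bowl edge A1 steeper, matching the role of the adjustment coefficient described below.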
K is the adjustment coefficient of the bowl edge A1. In this embodiment, the relative size between the bowl edge A1 and the bowl bottom A2 is adjusted by the coefficient K: the larger the K value, the larger the area corresponding to the bowl edge A1. If the bowl edge A1 is too large and the bowl bottom A2 too small, or the bowl bottom A2 too large and the bowl edge A1 too small, the stitching effect deteriorates, so the adjustment coefficient K needs to be given a suitable range of values. In this embodiment, the K value ranges from 0.1 to 0.2; preferably, the K value is 0.15.
Further, after the three-dimensional mathematical model with the world coordinate system is constructed, the at least two initial images acquired around the vehicle are sequentially mapped into the three-dimensional mathematical model to form at least two converted images, wherein the overlapping areas of adjacent converted images overlap and their image contents are the same. For example, the image acquired by the front-view camera and the image acquired by the right-view camera share an overlapping region with the same image content, such as a traffic light.
In step S30, a fusion weight map corresponding to each of the converted images is obtained, and one side of one of two adjacent converted images is stretched toward the other until the image contents of the overlapping areas of the two adjacent converted images coincide.
In this embodiment, the method for obtaining the fusion weight map of each converted image includes: and acquiring an initial fusion weight map according to the splicing seam corresponding to each initial image, and updating the initial fusion weight map to acquire the fusion weight map.
Further, in this embodiment, the method for obtaining the initial fusion weight map according to the stitching seam corresponding to each initial image includes the following steps one to three.
In the first step, the position of the stitching seam L corresponding to each initial image is obtained. In this embodiment, with the position of the vehicle as the center, one of the four corners (front, rear, left, right) of the vehicle is taken as the starting point A1 of the stitching seam, and the line connecting the starting point A1 and the ending point B1 located at the edge of the initial image far from the vehicle is taken as the stitching seam L. The position of the stitching seam L corresponding to each initial image is determined by the included angle θs between the stitching seam L and the X-axis of the three-dimensional bowl-shaped mathematical model, wherein θs is in the range of 40° to 50°.
In the second step, an image area corresponding to the initial fusion weight map is obtained according to the position of the stitching seam L corresponding to the initial image.
In this embodiment, the method for obtaining the image area corresponding to the initial fusion weight map may include the following steps.
Firstly, mapping a stitching seam L corresponding to the initial image into the three-dimensional mathematical model, and forming an initial image area together with a model boundary and a model origin of the three-dimensional mathematical model.
Specifically, in this embodiment, the splice seam L is divided into a first segment L1 and a second segment L2, where the first segment L1 is a splice seam of a portion located on the ground, and the second segment L2 is a splice seam of a portion located in the air. And mapping the first section L1 to the bowl bottom of the three-dimensional bowl-shaped mathematical model, and mapping the second section L2 to the bowl edge part of the three-dimensional bowl-shaped mathematical model according to a bus equation of the three-dimensional bowl-shaped mathematical model. According to the mapping method, the left and right splice lines L corresponding to the initial image are mapped into the three-dimensional bowl-shaped mathematical model. And then, forming an initial image area by the left and right mapped splicing seams L, the model boundaries of the three-dimensional bowl-shaped mathematical model and a model origin, wherein in the embodiment, the model origin is a connecting line of two adjacent corners in four corners of the vehicle.
And then, mapping each initial point in the initial image area into an image acquisition equipment model to obtain a plurality of mapping points, wherein the plurality of mapping points form an image area corresponding to the initial fusion weight map.
In the third step, the weight of each mapping point in the image area is reset to 1, so that the initial fusion weight map is obtained.
Fig. 4 is a schematic diagram of the fusion weight map in the vehicle-mounted image stitching method according to an embodiment of the invention. Further, in this embodiment, the method for updating the initial fusion weight map to obtain the fusion weight map includes: calculating, according to weight calculation formula (2), the weights of the region of the initial fusion weight map corresponding to the overlapping area of the initial image, and updating the weight values of that region according to the computed overlap-region weights, so as to obtain the fusion weight map.
In this embodiment, the weight calculation formula is: W = 0.25θ + 0.5 − 0.25θs — formula (2);
wherein θ is the included angle between the X-axis and the line connecting the origin of the three-dimensional mathematical model to a point in the region of the initial fusion weight map corresponding to the overlapping area of the initial image, and θs is the included angle between the stitching seam and the X-axis. The weight values of the region of the initial fusion weight map corresponding to the overlapping area of the initial image are updated by this formula to obtain the fusion weight map shown in Fig. 4. In this embodiment, the pure white part of Fig. 4 represents the region with weight 1, the pure black part represents the region with weight 0, and the gray part between them represents the overlap-region weights obtained by formula (2).
Further, one of the two adjacent converted images is stretched toward the other until the image contents of the overlapping areas of the two adjacent converted images coincide. In this embodiment, after stretching, the image contents of the points fused in the overlapping areas during the subsequent fusion are the same, which prevents poor stitching. The method of stretching the converted image is not described in detail here. In addition, in this embodiment, the order of obtaining the fusion weight map corresponding to each converted image and stretching the two adjacent converted images is not particularly limited: the fusion weight map corresponding to each converted image may be obtained first, or the two adjacent converted images may be stretched first.
In step S40, in the fusion weight map corresponding to one of the two adjacent converted images, the first weight value D1 corresponding to each first sampling point P1 located in the overlapping area of that converted image is looked up, so as to calculate the second weight value D2 of each second sampling point P2 located in the overlapping area of the other converted image and having the same target image content as the first sampling point P1, where the sum of the first weight value D1 and the second weight value D2 is 1. In this embodiment, since D1 + D2 = 1, the second weight value D2 = 1 − D1.
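Step S40 can be sketched as a simple lookup-and-complement; the dict-based weight map and its coordinates here are purely illustrative, standing in for the fusion weight map of one converted image:

```python
# Hypothetical fusion weight map: first weight values D1 keyed by sample coords.
fusion_weight_map = {(10, 20): 0.8, (11, 20): 0.6}

def second_weight(sample_xy):
    """Look up D1 for a first sampling point and return D2 = 1 - D1."""
    d1 = fusion_weight_map[sample_xy]
    return 1.0 - d1  # D1 + D2 = 1 keeps the fused brightness constant
```

Because D2 is derived from the same (stretched) lookup that produced D1, the pair always sums to 1 even after the texture coordinates have moved.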
According to the vehicle-mounted image stitching method, the three-dimensional mathematical model is built, and the at least two acquired initial images are mapped into it to form at least two converted images, where the overlapping areas of two adjacent converted images overlap and their image contents are the same. One side of one of the two adjacent converted images is stretched toward the other until the image contents of the overlapping areas coincide. Then, according to the fusion weight map, the first weight value D1 corresponding to each first sampling point P1 in the overlapping area of one of the two adjacent converted images is looked up, and D1 is subtracted from 1 to obtain the second weight value D2 corresponding to the second sampling point P2 with the same target image content as the first sampling point P1 in the overlapping area of the other converted image. The weight sum of the same target image in the two adjacent converted images is therefore 1 when they are fused, which prevents the problem of abnormal fusion brightness.
Furthermore, the image stitching method of this embodiment is performed on the OpenGL platform, in which texture images adopt the RGBA format and the A channel stores the fusion weight. During fusion, the other of the two adjacent converted images is stored in a buffer and used as the target image, while one of the two adjacent converted images is used as the source image and fused with it; the fused image is shown on the display device. After all the converted images are fused in this way, the stitched image is shown on the display device.
In this embodiment, the number of the obtained initial images is 2N, and the initial images are mapped to the three-dimensional mathematical model in sequence to form 2N sequentially arranged converted images, where N is greater than or equal to 1. Specifically, for example, when the number of image acquisition apparatuses is 4, the number of initial images acquired is 4, and the number of converted images formed after mapping is 4. The front-view converted image corresponding to the front-view camera and the rear-view converted image corresponding to the rear-view camera can be taken as target images stored in a buffer area, and then the right-view converted image corresponding to the right-view camera and the left-view converted image corresponding to the left-view camera are taken as source images so as to be fused with the front-view converted image and the rear-view converted image.
In this embodiment, the method for fusing the adjacent overlapping areas according to the first weight value and the second weight value includes: calculating the pixel value after fusing the adjacent overlapping areas according to Equation (3), so as to fuse the adjacent overlapping areas.
RGBresult = (1 - Adst) × RGBsrc + Adst × RGBdst    (Equation 3)
Where RGBresult represents the pixel value after fusing the adjacent overlapping areas, RGBsrc represents the pixel value of the source image (i.e., one of the two adjacent converted images), RGBdst represents the pixel value of the target image (i.e., the other of the two adjacent converted images), and Adst represents the fusion weight value of the target image (i.e., the second weight value).
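A minimal Python/NumPy sketch of Equation (3) follows; the function and variable names are illustrative assumptions, not from the patent.

```python
import numpy as np

def fuse_overlap(rgb_src, rgb_dst, a_dst):
    """Blend the overlapping region per Equation (3):
    RGBresult = (1 - Adst) * RGBsrc + Adst * RGBdst

    rgb_src, rgb_dst : (H, W, 3) float arrays -- source / target pixel values
    a_dst            : (H, W)    float array  -- target fusion weight (second weight)
    """
    a = a_dst[..., np.newaxis]  # broadcast the weight over the RGB channels
    return (1.0 - a) * rgb_src + a * rgb_dst
```

Since the source weight is exactly 1 - Adst, the two weights sum to 1 at every sample point, so a region that has the same brightness in both converted images keeps that brightness after fusion, which is the brightness-anomaly prevention the text describes.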
In step S50, the adjacent overlapping areas are fused according to the first weight value D1 and the second weight value D2 to generate a stitched image. The fusion method has been described above and is not repeated here.
Further, before generating the stitched image, the method further includes rendering the converted images in a predetermined order. In this embodiment, rendering is performed by the OpenGL platform and the result is displayed on a display device; the display device may be a computer, a mobile phone, a tablet, or the like. The predetermined rendering order is: first render the part of a converted image whose corresponding first weight value is not 1, and then render the part whose corresponding second weight value is 1.
In addition, when generating the stitched image, the rendering weight value of the stitched image is 1. In this embodiment, the rendering weight value of the stitched image is 1 while the background fusion weight is 0 when the converted images are fused, which avoids abnormal platform rendering when the OpenGL platform is applied to image stitching.
In this embodiment, the method for rendering the stitched image with a rendering weight value of 1 is: calculate according to Equation (4) so that the rendering weight value equals 1.
Aresult = 1 × Asrc + 0 × Adst    (Equation 4)
Where Aresult represents the rendering weight value, Adst represents the first weight value, and Asrc represents the second weight value; the value of Asrc is 1 and Adst is not 1.
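Taken together, Equations (3) and (4) appear to correspond to a standard OpenGL separate blend configuration, glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ZERO), though the patent does not name that call; this is an interpretation. The Python/NumPy sketch below emulates the combined RGBA blend under that assumption, with illustrative names.

```python
import numpy as np

def blend_rgba(src, dst):
    """Emulate the blend implied by Equations (3) and (4):
      RGBresult = (1 - Adst) * RGBsrc + Adst * RGBdst   (Equation 3)
      Aresult   = 1 * Asrc   + 0 * Adst                 (Equation 4)

    src, dst : (H, W, 4) float RGBA arrays; the A channel holds fusion weights.
    """
    a_dst = dst[..., 3:4]  # target fusion weight read from the destination alpha
    out = np.empty_like(src)
    out[..., :3] = (1.0 - a_dst) * src[..., :3] + a_dst * dst[..., :3]
    out[..., 3] = src[..., 3]  # result alpha equals the source alpha (1 in the text)
    return out
```

With the source alpha fixed at 1, every blended pixel leaves the pass with alpha 1, matching the requirement that the stitched image's rendering weight value be 1.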
Fig. 5 is a schematic diagram of an in-vehicle image stitching system in an embodiment of the present invention. As shown in fig. 5, the present embodiment further discloses a vehicle-mounted image stitching system, which includes:
an image acquisition module 1 for acquiring at least two initial images;
the three-dimensional mathematical model construction module 2 is used for constructing a three-dimensional mathematical model with a world coordinate system, and sequentially mapping at least two initial images into the three-dimensional mathematical model to form at least two converted images, wherein overlapping areas of two adjacent converted images are overlapped and the image contents are the same;
the data processing module 3 is configured to obtain the fusion weight map corresponding to each converted image, and to stretch one of two adjacent converted images toward the other until the image contents of their overlapping areas coincide. It then looks up, in the fusion weight map corresponding to one of the two adjacent converted images, the first weight value of each first sampling point located in that image's overlapping area, so as to calculate the second weight value of each second sampling point that is located in the overlapping area of the other of the two adjacent converted images and has the same target image content as the corresponding first sampling point; the sum of the first weight value and the second weight value is 1.
And the image stitching module 4 is used for fusing the adjacent overlapping areas according to the first weight value and the second weight value so as to generate a stitched image.
Fig. 6 is a schematic diagram of an in-vehicle image stitching device in an embodiment of the present invention. As shown in fig. 6, this embodiment further discloses a vehicle-mounted image stitching device, which includes a central control host 100 and the above-mentioned vehicle-mounted image stitching system. The image acquisition module 1 includes image acquisition devices connected to the central control host; the devices transmit the acquired initial images to the central control host for image processing so as to complete image stitching. The three-dimensional mathematical model construction module 2, the data processing module 3, and the image stitching module 4 are located in the central control host.
In this embodiment, the image acquisition devices 1 are fisheye cameras, four in number, and the 4 image acquisition devices 1 are mounted at the front, rear, left, and right of the vehicle body, respectively.
The above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the appended claims.

Claims (10)

1. The vehicle-mounted image stitching method is characterized by comprising the following steps of:
acquiring at least two initial images;
constructing a three-dimensional mathematical model with a world coordinate system, and sequentially mapping at least two initial images into the three-dimensional mathematical model to form at least two converted images, wherein overlapping areas of two adjacent converted images are overlapped and the image contents are the same;
obtaining a fusion weight map corresponding to each conversion image, and stretching one of two adjacent conversion images towards the other conversion image until the image contents of the overlapping areas of the two adjacent conversion images coincide;
searching a first weight value corresponding to each first sampling point in an overlapping area of one of the two adjacent converted images in the fusion weight map corresponding to the one of the two adjacent converted images to calculate a second weight value corresponding to each second sampling point with the same content as the target image corresponding to the first sampling point in the overlapping area of the other of the two adjacent converted images; wherein the sum of the first weight value and the second weight value is 1;
fusing adjacent overlapping areas according to the first weight value and the second weight value to generate a spliced image;
the method for obtaining the fusion weight map corresponding to each converted image comprises the following steps:
acquiring an initial fusion weight map according to the splicing seam corresponding to each initial image, and updating the initial fusion weight map to acquire the fusion weight map;
the method for updating the initial fusion weight map to acquire the fusion weight map comprises the following steps:
and calculating the weight of the overlapping region of the initial fusion weight map and the corresponding region of the overlapping region of the initial image according to a weight calculation formula, and updating the weight value of the initial fusion weight map and the corresponding region of the overlapping region of the initial image according to the weight of the overlapping region so as to acquire the fusion weight map.
2. The method for stitching vehicle-mounted images according to claim 1, wherein the method for obtaining an initial fusion weight map according to the stitching seam corresponding to each initial image comprises the following steps:
acquiring the position of a splicing seam corresponding to each initial image;
obtaining an image area corresponding to the initial fusion weight map according to the splice joint position corresponding to the initial image;
and resetting the weight of each mapping point in the image area corresponding to the initial fusion weight map to be 1 so as to obtain the initial fusion weight map.
3. The vehicle-mounted image stitching method according to claim 2, wherein the method for obtaining the image area corresponding to the initial fusion weight map according to the stitching seam position corresponding to the initial image comprises the following steps:
mapping the stitching seam corresponding to the initial image into the three-dimensional mathematical model, and forming an initial image area together with the model boundary and the model origin of the three-dimensional mathematical model;
and mapping each initial point in the initial image area into an image acquisition equipment model to obtain a plurality of mapping points, wherein the plurality of mapping points form an image area corresponding to the initial fusion weight map.
4. The vehicle-mounted image stitching method according to claim 1, wherein the weight calculation formula is: W = 0.25θ + 0.5 - 0.25θs;
wherein θ is the angle between the X axis and the line connecting the origin of the three-dimensional mathematical model to a point in the region of the initial fusion weight map corresponding to the overlapping region of the initial image, and θs is the angle between the stitching seam and the X axis.
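A small numeric sketch of this weight formula follows. The claim does not state the angle units or a clamping range; radians and clamping to [0, 1] are assumptions made here for illustration.

```python
def overlap_weight(theta, theta_s):
    """W = 0.25*theta + 0.5 - 0.25*theta_s, clamped to [0, 1] (clamp assumed).

    theta   : angle (assumed radians) of the sample point about the model origin
    theta_s : angle of the stitching seam
    At the seam itself (theta == theta_s) the weight is exactly 0.5, so the two
    images contribute equally there and the weight ramps linearly away from it.
    """
    w = 0.25 * theta + 0.5 - 0.25 * theta_s
    return max(0.0, min(1.0, w))
```

For example, a point 2 radians past the seam reaches weight 1, while a point 2 radians before it reaches weight 0, giving a smooth transition band across the overlap.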
5. The vehicle-mounted image stitching method according to claim 1, wherein the method of fusing adjacent overlapping areas according to the first weight value and the second weight value includes:
calculating the pixel value after fusing the adjacent overlapping areas according to the formula RGBresult = (1 - Adst) × RGBsrc + Adst × RGBdst, so as to fuse the adjacent overlapping areas;
wherein RGBresult represents the pixel value after fusing the adjacent overlapping areas, RGBsrc represents the pixel value of one of the two adjacent converted images, RGBdst represents the pixel value of the other of the two adjacent converted images, and Adst represents the second weight value.
6. The in-vehicle image stitching method according to claim 1, characterized in that the in-vehicle image stitching method further comprises:
rendering the converted images in a predetermined order prior to generating the stitched image;
and generating the spliced image, wherein the rendering weight value of the spliced image is 1.
7. The method for stitching on-vehicle images according to claim 6, wherein the method for stitching the images with a rendering weight value of 1 comprises:
calculating the rendering weight value according to the formula Aresult = 1 × Asrc + 0 × Adst, so that the rendering weight value is 1;
wherein Aresult represents the rendering weight value, Adst represents the first weight value, and Asrc represents the second weight value; the value of Asrc is 1 and Adst is not 1.
8. The in-vehicle image stitching method according to claim 6, wherein the order in which the converted images are rendered in a predetermined order is:
and firstly rendering a part of the converted image, corresponding to the first weight value, which is not 1, and then rendering a part of the converted image, corresponding to the second weight value, which is 1.
9. A vehicle-mounted image stitching system, characterized by comprising:
the image acquisition module is used for acquiring at least two initial images;
the three-dimensional mathematical model construction module is used for constructing a three-dimensional mathematical model with a world coordinate system, at least two initial images are sequentially mapped into the three-dimensional mathematical model to form at least two converted images, and overlapping areas of two adjacent converted images are overlapped and have the same image content;
the data processing module is used for obtaining a fusion weight graph corresponding to each conversion image, stretching one of the two adjacent conversion images towards the other until the image contents of the overlapping areas of the two adjacent conversion images coincide, and searching a first weight value corresponding to each first sampling point in the overlapping area of one of the two adjacent conversion images in the fusion weight graph corresponding to one of the two adjacent conversion images so as to calculate a second weight value corresponding to each second sampling point which is positioned in the overlapping area of the other of the two adjacent conversion images and has the same target image content corresponding to the first sampling point; wherein the sum of the first weight value and the second weight value is 1;
the image stitching module is used for fusing the adjacent overlapping areas according to the first weight value and the second weight value so as to generate a stitched image;
the method for obtaining the fusion weight map corresponding to each converted image by the data processing module comprises the following steps:
acquiring an initial fusion weight map according to the splicing seam corresponding to each initial image, and updating the initial fusion weight map to acquire the fusion weight map;
the method for updating the initial fusion weight map to acquire the fusion weight map comprises the following steps:
and calculating the weight of the overlapping region of the initial fusion weight map and the corresponding region of the overlapping region of the initial image according to a weight calculation formula, and updating the weight value of the initial fusion weight map and the corresponding region of the overlapping region of the initial image according to the weight of the overlapping region so as to acquire the fusion weight map.
10. A vehicle-mounted image stitching device, characterized by comprising a central control host and the vehicle-mounted image stitching system of claim 9;
the image acquisition module comprises image acquisition equipment which is connected with the central control host and transmits the acquired initial image to the central control host for image processing so as to finish image splicing;
the three-dimensional mathematical model building module, the data processing module and the image splicing module are arranged in the central control host.
CN202011211542.5A 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device Active CN112308985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011211542.5A CN112308985B (en) 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device

Publications (2)

Publication Number Publication Date
CN112308985A CN112308985A (en) 2021-02-02
CN112308985B true CN112308985B (en) 2024-02-02

Family

ID=74332715


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732485A (en) * 2015-04-21 2015-06-24 深圳市深图医学影像设备有限公司 Method and system for splicing digital X-ray images
CN107784632A (en) * 2016-08-26 2018-03-09 南京理工大学 A kind of infrared panorama map generalization method based on infra-red thermal imaging system
CN108510445A (en) * 2018-03-30 2018-09-07 长沙全度影像科技有限公司 A kind of Panorama Mosaic method
CN111028190A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111369486A (en) * 2020-04-01 2020-07-03 浙江大华技术股份有限公司 Image fusion processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680501B (en) * 2013-12-03 2018-12-07 华为技术有限公司 The method and device of image mosaic

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Towards the automatic selection of optimal seam line locations when merging optical remote-sensing images; Le Yu; Journal of Remote Sensing; Vol. 33, No. 4; 1000-1014 *
Research on improved weight design in image stitching; Xie Jingmei; Journal of Guangdong University of Technology; Vol. 34, No. 6; 49-53, 67 *
Super-resolution reconstruction of vehicle-mounted images based on weight quantization and information compression; Xu Dezhi; Journal of Computer Applications; Vol. 39, No. 12; 3644-3649 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant