CN115880142A - Image generation method and device of trailer, storage medium and terminal

Info

Publication number
CN115880142A
CN115880142A
Authority
CN
China
Prior art keywords
camera
image
parameters
compartment
scatter diagram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211685736.8A
Other languages
Chinese (zh)
Inventor
刘锋
康逸儒
李林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Haikang Auto Software Co ltd
Original Assignee
Hangzhou Haikang Auto Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Haikang Auto Software Co ltd filed Critical Hangzhou Haikang Auto Software Co ltd
Priority to CN202211685736.8A priority Critical patent/CN115880142A/en
Publication of CN115880142A publication Critical patent/CN115880142A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image generation method and device for a trailer, a storage medium and a terminal, wherein a compartment of the trailer is replaceable and a preset number of cameras are deployed on the compartment. The method comprises the following steps: reading compartment camera parameters stored by each camera on the compartment, wherein the compartment camera parameters comprise the external parameters and internal parameters of each camera; acquiring a first image shot by each camera on the compartment; and generating a panoramic image of the trailer based on the compartment camera parameters and the first image. Because the compartment camera parameters are stored in the cameras, the external parameters do not need to be re-calibrated after the compartment is replaced, and a misalignment-free stitched panoramic image of the trailer can be generated directly from the read compartment camera parameters stored by each camera on the compartment, thereby improving the image stitching efficiency of the panoramic image of the trailer.

Description

Image generation method and device of trailer, storage medium and terminal
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an image generation method and apparatus for a trailer, a storage medium, and a terminal.
Background
The panoramic system obtains images collected by a plurality of cameras arranged around the truck and processes them through correction, rotation, stitching and the like to obtain a complete 360-degree 3D panoramic image. However, the existing panoramic system only supports vehicles with fixed compartments, where seamless panoramic stitching can be achieved after calibration performed before the vehicle leaves the factory.
A trailer is formed by connecting a tractor head and a compartment, and one tractor head can be freely combined with different compartments. After the tractor head replaces its compartment, the position of a camera calibrated in advance changes, so that the generated 3D panoramic image suffers from local image misalignment.
Disclosure of Invention
The embodiment of the application provides an image generation method and device of a trailer, a storage medium and a terminal. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides an image generation method for a towed vehicle, where a compartment of the towed vehicle is replaceable and a preset number of cameras are deployed on the compartment, the method including:
reading compartment camera parameters stored by each camera on a compartment, wherein the compartment camera parameters comprise external parameters of each camera and internal parameters of each camera;
acquiring a first image shot by each camera on the compartment;
generating a panoramic image of the trailer based on the compartment camera parameters and the first image.
Optionally, generating a panoramic image of the trailer based on the compartment camera parameters and the first image includes:
under the condition that the read compartment camera parameters are inconsistent with the pre-stored historical compartment camera parameters, generating an image splicing lookup table according to the read compartment camera parameters, wherein the image splicing lookup table comprises a mapping relation between world coordinates in a scatter diagram corresponding to each camera on the compartment and image coordinates corresponding to each camera, coordinate points in the scatter diagram corresponding to each camera are located in the respective shooting visual angle range of each camera, the coordinate points in the scatter diagram have determined world coordinate values, and the image coordinates are coordinate points represented based on an image coordinate system;
generating a second image corresponding to each first image according to the mapping relation in the image splicing lookup table, wherein the second image is an image represented by a world coordinate system;
and at least splicing the second images to generate a panoramic image of the trailer.
Optionally, generating an image stitching lookup table according to the read car camera parameters, including:
establishing a world coordinate system according to a preset center;
constructing a scatter diagram corresponding to each camera in a world coordinate system according to the shooting visual angle range of each camera on the carriage;
determining camera coordinates corresponding to coordinate points in a scatter diagram corresponding to each camera according to external parameters of each camera in the car camera parameters, wherein the camera coordinates are coordinate points expressed based on a camera coordinate system;
determining image coordinates corresponding to coordinate points in the scatter diagram corresponding to each camera according to internal parameters of each camera in the carriage camera parameters and camera coordinates corresponding to the coordinate points in the scatter diagram corresponding to each camera;
and establishing a corresponding relation between the world coordinates in the scatter diagram corresponding to each camera and the image coordinates corresponding to the coordinate points in the scatter diagram to generate an image stitching lookup table.
Optionally, constructing a scatter diagram corresponding to each camera in a world coordinate system according to a shooting angle range of each camera on the carriage, including:
determining the number of key points in a scatter diagram corresponding to each camera according to the shooting visual angle range of each camera on the carriage and the resolution of a preset trailer panoramic image;
and respectively constructing a scatter diagram corresponding to each camera in a world coordinate system according to the number of the key points, so as to determine the camera coordinates and the image coordinates respectively corresponding to the key points meeting the number of the key points in the scatter diagram corresponding to each camera.
Optionally, determining the number of key points included in the scatter diagram corresponding to each camera according to the shooting angle range of each camera on the vehicle and the resolution of the preset panoramic image of the trailer, including:
calculating the ratio of the view angle width of the shooting view angle range of the first camera to the width of a preset panoramic range of the trailer to obtain a transverse view angle ratio; calculating the ratio of the angle of view length of the shooting angle of view range of the first camera to the length of a preset panoramic range to obtain a longitudinal angle of view ratio; the first camera is any one of cameras on the carriage;
calculating the product of the transverse visual angle ratio and the transverse resolution of the preset panoramic image of the trailer to obtain the transverse resolution of the scatter diagram of the first camera; determining the ratio of the transverse resolution of the scatter diagram of the first camera to a preset sampling interval as the number of transverse sampling key points included in the scatter diagram of the first camera;
calculating the product of the longitudinal visual angle ratio and the longitudinal resolution of the preset panoramic image resolution to obtain the longitudinal resolution of the scatter diagram of the first camera, and determining the ratio of the longitudinal resolution of the scatter diagram of the first camera to the preset sampling interval as the number of longitudinal sampling key points in the scatter diagram of the first camera;
and calculating the product of the number of the transverse sampling key points and the number of the longitudinal sampling key points to obtain the number of the key points included in the scatter diagram of the first camera.
Optionally, generating a second image corresponding to each first image according to the mapping relationship in the image stitching lookup table includes:
reading a mapping relation between world coordinates in a scatter diagram corresponding to each camera in an image splicing lookup table and image coordinates corresponding to each camera, and calculating a first check value of the read mapping relation;
for any camera, if the first check value is the same as the read second check value of the mapping relation, generating a second image corresponding to the first image shot by the current camera according to the mapping relation in the image stitching lookup table; wherein the second check value of the mapping relation is a check value stored during generation of the image stitching lookup table;
alternatively,
for any camera, if the first check value is different from the read second check value of the mapping relationship, reading a pre-stored mapping relationship from a preset storage partition, and generating a second image corresponding to a first image shot by the current camera by using the pre-stored mapping relationship, wherein the pre-stored mapping relationship is pre-stored and is used for representing the mapping relationship between world coordinates and image coordinates within a camera shooting visual angle range, and the attribute of the preset storage partition is a read-only storage partition.
Optionally, generating a second image corresponding to each first image according to the mapping relationship in the image stitching lookup table includes:
interpolating the mapping relation pairs in the image splicing lookup table to obtain an extended image splicing lookup table, wherein the number of the mapping relation pairs in the extended image splicing lookup table is greater than that of the mapping relation pairs in the image splicing lookup table, and a world coordinate and a corresponding image coordinate form a mapping relation pair;
and determining first world coordinates corresponding to the first image coordinates in the first image by utilizing the mapping relation in the extended image splicing lookup table, and generating a second image corresponding to the first image based on the first world coordinates.
In a second aspect, an embodiment of the present application provides an image generation apparatus for a towed vehicle, including:
the system comprises a carriage camera parameter acquisition module, a camera parameter acquisition module and a camera parameter acquisition module, wherein the carriage camera parameter acquisition module is used for reading carriage camera parameters stored by each camera on a carriage, and the carriage camera parameters comprise external parameters of each camera and internal parameters of each camera;
the image acquisition module is used for acquiring a first image shot by each camera on the carriage;
and the panoramic image generation module is used for generating the panoramic image of the trailer based on the compartment camera parameters and the first image.
In a third aspect, embodiments of the present application provide a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiment of the application, an image generation device of a trailer first reads the compartment camera parameters stored by each camera on the compartment, wherein the compartment camera parameters comprise the external parameters and internal parameters of each camera; acquires a first image shot by each camera on the compartment; and generates a panoramic image of the trailer based on the compartment camera parameters and the first image. Because the compartment camera parameters are stored in the cameras, the external parameters do not need to be re-calibrated after the compartment is replaced, and a misalignment-free stitched panoramic image of the trailer can be generated directly from the read compartment camera parameters stored by each camera on the compartment, thereby improving the image stitching efficiency of the panoramic image of the trailer.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of an image generation method for a towed vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of the camera and calibration plate mounting locations on a trailer according to an embodiment of the present application;
FIG. 3 is a schematic illustration of the positioning of calibration plates distributed around a trailer according to an embodiment of the present application;
fig. 4 is a schematic view of a checkerboard corner recognition scene during internal reference calibration according to an embodiment of the present application;
FIG. 5 is a scatter diagram constructed by each camera within a preset range according to an embodiment of the present disclosure;
FIG. 6 is a schematic view of a visual range and a panoramic image resolution of a trailer according to an embodiment of the present application;
fig. 7 is a schematic structural component diagram of a panorama stitching lookup table provided in an embodiment of the present application;
FIG. 8 is a schematic block diagram of a process of generating an image of a towed vehicle according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an image generation apparatus of a trailer according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the application to enable those skilled in the art to practice them.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application provides an image generation method and device for a trailer, a storage medium and a terminal, which are used to solve the problems in the related art. In the technical scheme provided by the application, because the compartment camera parameters are stored in the cameras, the external parameters do not need to be re-calibrated after the compartment is replaced, and a misalignment-free stitched panoramic image of the trailer can be generated directly from the read compartment camera parameters stored by each camera on the compartment, thereby improving the image stitching efficiency of the panoramic image of the trailer. A detailed description is given below by way of exemplary embodiments.
The following describes in detail an image generation method of a trailer according to an embodiment of the present application with reference to fig. 1 to 8. The method may be implemented in dependence on a computer program, executable on an image generation device of a towed vehicle based on the von neumann architecture. The computer program may be integrated into the application or may run as a separate tool-like application.
Referring to fig. 1, a schematic flow chart of an image generation method of a trailer according to an embodiment of the present application is provided. As shown in fig. 1, the method of the embodiment of the present application may include the following steps:
s101, reading compartment camera parameters stored by each camera on a compartment, wherein the compartment camera parameters comprise external parameters of each camera and internal parameters of each camera;
the parameters of the car camera comprise external parameters of each camera and internal parameters of each camera, and the external parameters of each camera and the internal parameters of each camera are stored in a Flash memory of the car camera and also stored in a controller of the trailer. The extrinsic parameters of the camera are parameters of the camera in a world coordinate system, such as the world coordinate position of the camera. The intrinsic parameters of the camera are related parameters used by the camera when acquiring images, and may be, for example, a focal length, an optical center position, a distortion coefficient, and the like of the camera.
Generally, in order to realize panoramic image stitching for the trailer, a sufficient overlapping area is required between adjacent cameras and the image definition should be high. In the application, fisheye cameras may be used, with the horizontal field of view of each fisheye camera set to exceed 180 degrees.
In one possible implementation, before reading the stored compartment camera parameters of each camera on the compartment, it is first necessary to calibrate the compartment camera parameters for each camera on the compartment of the trailer, and then store the calibrated compartment camera parameters in the memory of each camera as well as in the controller of the trailer. The trailer may have a plurality of compartments of different sizes.
Furthermore, in addition to calibrating the compartment camera parameters of each camera on the compartment, the tractor-head camera parameters of each camera on the tractor head of the trailer need to be calibrated; the calibrated tractor-head camera parameters are stored in the memory of each respective camera and also in the controller of the trailer.
For example, as shown in fig. 2, which is an arrangement diagram of the cameras on the trailer, 3 cameras are arranged on the tractor head, distributed at the front end and the two sides of the tractor head, and 3 cameras are arranged on the compartment, distributed at the rear end and the two sides of the compartment. When calibrating the internal and external parameters of each camera on the trailer, the original internal parameters of each camera are first adjusted so that the image acquisition parameters of each camera fall within a preset range, obtaining the internal parameters of each camera. Then the calibration image shot by each camera on the trailer is obtained, the world coordinates of the calibration boards arranged in advance on the ground around the body of the trailer are determined, and the image coordinates of the calibration boards in the calibration images are determined. Finally, the parameters of each camera in the world coordinate system, namely the world coordinate position of the camera, are calculated according to the world coordinates of the calibration boards and their image coordinates in the calibration images, obtaining the external parameters of each camera. The calibrated internal and external parameters of each camera are stored into the corresponding camera memory and also into the trailer controller. A checkerboard pre-installed on the ground and distributed around the body of the trailer is one type of calibration board, as shown in fig. 3, for example.
Specifically, when the parameters of each camera in the world coordinate system are calculated according to the world coordinates of the calibration board and the image coordinates of the calibration board in the calibration image to obtain the external parameters of each camera, a covariance matrix is first constructed from the world coordinates of the calibration board. The eigenvector corresponding to the minimum eigenvalue of the covariance matrix and the coordinate mean are then calculated, an initial rotation matrix of the calibration board is obtained by transforming the eigenvector and the coordinate mean, and an initial translation vector of the calibration board is calculated from the initial rotation matrix and the coordinate mean. Next, the reprojection error between the coordinates of the calibration board in the calibration image and the world coordinates of the calibration board is calculated, and the initial rotation matrix and the initial translation vector are iteratively optimized based on the reprojection error to obtain the mapping relation between the coordinates of the calibration board in the calibration image and its world coordinates. Finally, the parameters of each camera in the world coordinate system are determined according to this mapping relation, obtaining the external parameters of each camera.
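The following is a rough Python sketch of the extrinsic-estimation idea described above: an initial pose is derived from the eigenvector of the smallest eigenvalue of the world-point covariance and the coordinate mean, then refined by minimizing the reprojection error. The use of SciPy's least_squares, the pinhole projection, and the way the initial rotation is built from the eigenvector are assumptions for illustration; the patent does not specify these details.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def estimate_extrinsics(world_pts, image_pts, K):
    """Estimate a camera's rotation/translation from calibration-board points.

    world_pts: (N, 3) board corner positions in the world frame.
    image_pts: (N, 2) the same corners detected in the calibration image.
    K:         (3, 3) camera intrinsic matrix (assumed already calibrated).
    """
    # Initial guess: plane normal from the eigenvector of the smallest eigenvalue
    # of the covariance of the world points, plus their coordinate mean.
    mean = world_pts.mean(axis=0)
    cov = np.cov((world_pts - mean).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]                       # smallest-eigenvalue eigenvector
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z, normal)
    angle = np.arccos(np.clip(z @ normal, -1.0, 1.0))
    r0 = axis / (np.linalg.norm(axis) + 1e-12) * angle     # initial rotation vector
    t0 = -Rotation.from_rotvec(r0).apply(mean)             # initial translation from the mean

    def reprojection_residual(params):
        rvec, t = params[:3], params[3:]
        cam = Rotation.from_rotvec(rvec).apply(world_pts) + t   # world -> camera coordinates
        proj = (K @ cam.T).T
        proj = proj[:, :2] / proj[:, 2:3]                       # perspective divide
        return (proj - image_pts).ravel()

    # Iteratively refine rotation and translation by minimizing the reprojection error.
    result = least_squares(reprojection_residual, np.hstack([r0, t0]))
    return Rotation.from_rotvec(result.x[:3]).as_matrix(), result.x[3:]
```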
Specifically, in one implementation, due to process errors of the cameras, there may be deviations in the internal parameters of each camera. In order to improve the accuracy of the camera internal parameters and ensure subsequent stitching integrity, each camera needs to be calibrated and the result stored in the camera memory. The internal-parameter calibration of a fisheye camera adopts a one-image, multi-board scheme: the checkerboard corner points collected in the image are identified as shown in fig. 4, the camera internal parameters are calculated by iterative solution, and the internal parameters are written into the camera memory through i2c communication. The internal parameters of the fisheye camera mainly comprise the optical center cx, cy, the focal lengths fx, fy, and the distortion coefficients p0, p1, p2 and p3.
It should be noted that, since the one-image, multi-board scheme for camera internal-parameter calibration is prior art, it is not described herein.
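For illustration, the sketch below calibrates fisheye intrinsics from several detected checkerboards in a single image using OpenCV's fisheye model. This is a stand-in for the one-image, multi-board scheme mentioned above; mapping OpenCV's four distortion coefficients onto p0..p3, as well as omitting the i2c write step, are assumptions.

```python
import cv2
import numpy as np


def calibrate_fisheye_intrinsics(corner_sets, board_size, square_size, image_size):
    """One-image, multi-board style intrinsic calibration sketch.

    corner_sets: list of (N, 2) float32 arrays, one per detected checkerboard.
    board_size:  (cols, rows) of inner corners on each board.
    square_size: checkerboard square edge length in meters.
    image_size:  (width, height) of the fisheye image.
    """
    # Ideal board corner coordinates (the same pattern for every board).
    objp = np.zeros((1, board_size[0] * board_size[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_size

    object_points = [objp] * len(corner_sets)
    image_points = [c.reshape(1, -1, 2).astype(np.float32) for c in corner_sets]

    K = np.zeros((3, 3))      # will hold fx, fy, cx, cy
    D = np.zeros((4, 1))      # four distortion coefficients (p0..p3 in the text)
    rms, K, D, _, _ = cv2.fisheye.calibrate(
        object_points, image_points, image_size, K, D,
        flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC,
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6),
    )
    return K, D, rms
```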
In a practical application scenario of the embodiment of the present application, when the controller of the trailer is powered on, the controller first reads the compartment camera parameters stored by each camera on the compartment, where the compartment camera parameters include the external parameters and internal parameters of each camera.
S102, acquiring a first image shot by each camera on a carriage;
the first image taken by each camera may be, for example, an image captured by each fisheye camera in the vehicle cabin.
In an actual application scenario of the embodiment of the present application, after car camera parameters stored by each camera on a car are read, a first image captured by each camera on the car can be acquired.
And S103, generating a panoramic image of the trailer based on the compartment camera parameters and the acquired first image.
In the embodiment of the application, after the compartment camera parameters and the acquired first images are obtained, the compartment panoramic image can be generated according to the compartment camera parameters and the acquired first images. Meanwhile, according to the tractor-head camera parameters stored by each camera on the tractor head, a third image shot by each camera on the tractor head is acquired, and the tractor-head panoramic image is generated according to the tractor-head camera parameters and the acquired third images. Finally, the compartment panoramic image and the tractor-head panoramic image can be fused to generate the panoramic image of the trailer.
In one embodiment, when the panoramic image of the trailer is generated based on the compartment camera parameters and the acquired first images, first, under the condition that the read compartment camera parameters are inconsistent with the pre-stored historical compartment camera parameters, an image stitching lookup table is generated according to the read compartment camera parameters. The image stitching lookup table comprises the mapping relation between the world coordinates in the scatter diagram corresponding to each camera on the compartment and the image coordinates corresponding to each camera; the coordinate points in the scatter diagram corresponding to each camera lie within that camera's shooting view angle range, the coordinate points in the scatter diagram have determined world coordinate values, and the image coordinates are coordinate points represented in the image coordinate system. Then, a second image corresponding to each first image is generated according to the mapping relation in the image stitching lookup table, wherein the second image is an image represented in the world coordinate system. Finally, at least the second images are stitched to generate the panoramic image of the trailer. Inconsistency between the read compartment camera parameters and the pre-stored historical compartment camera parameters means that the compartment of the trailer has been replaced; the image stitching lookup table is regenerated from the read compartment camera parameters, and the compartment panoramic image can be quickly generated based on the newly generated image stitching lookup table, avoiding parameter calibration for the newly attached compartment and thereby improving the panoramic image generation efficiency.
In an embodiment, under the condition that the read compartment camera parameters are consistent with the pre-stored historical compartment camera parameters, a pre-stored image stitching lookup table is obtained, second images corresponding to the first images are generated according to the mapping relation in the pre-stored image stitching lookup table, and at least the second images are stitched to generate the panoramic image of the trailer. Consistency between the read compartment camera parameters and the pre-stored historical compartment camera parameters means that the compartment of the trailer has not been replaced; in this case the image stitching lookup table does not need to be regenerated, and the compartment panoramic image can be quickly generated based on the historically pre-stored image stitching lookup table.
Illustratively, when the image stitching lookup table is generated according to the read compartment camera parameters, a world coordinate system is first established according to a preset center. A scatter diagram corresponding to each camera is then constructed in the world coordinate system according to the shooting view angle range of each camera on the compartment; the scatter diagram corresponding to each camera is shown in fig. 5. Next, the camera coordinates corresponding to the coordinate points in the scatter diagram of each camera are determined according to the external parameters of each camera in the compartment camera parameters, where the camera coordinates are coordinate points represented in the camera coordinate system. The image coordinates corresponding to the coordinate points in the scatter diagram of each camera are then determined according to the internal parameters of each camera in the compartment camera parameters and the camera coordinates corresponding to those coordinate points. Finally, the correspondence between the world coordinates in the scatter diagram of each camera and the image coordinates corresponding to the coordinate points in the scatter diagram is established to generate the image stitching lookup table. The preset center can be the center of the trailer body, the center of the compartment, the center of the rear wheel base of the compartment, or a center set at another position.
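A minimal sketch of the world-to-camera-to-image mapping step described above is given below for one camera; the pluggable distortion function and the matrix layout are assumptions, since the patent only states that the external and internal parameters are applied in turn.

```python
import numpy as np


def build_stitch_lut(world_points, R, t, K, dist_fn=None):
    """Build one camera's entry of the image stitching lookup table.

    world_points: (N, 3) scatter-diagram key points inside this camera's view range.
    R, t:         extrinsic rotation (3x3) and translation (3,) read from the camera.
    K:            intrinsic matrix (3x3) read from the camera.
    dist_fn:      optional function applying the lens distortion model to normalized
                  coordinates (the fisheye model is not spelled out here, so it is
                  left pluggable).
    Returns a list of (world_xyz, image_uv) mapping pairs.
    """
    cam = world_points @ R.T + t              # world coordinates -> camera coordinates
    norm = cam[:, :2] / cam[:, 2:3]           # normalized image plane
    if dist_fn is not None:
        norm = dist_fn(norm)                  # apply distortion before the intrinsics
    uv = norm @ K[:2, :2].T + K[:2, 2]        # camera coordinates -> pixel coordinates
    return list(zip(world_points, uv))
```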
According to the above, the correspondence between the world coordinates in the scatter diagram of each camera and the image coordinates corresponding to the coordinate points in the scatter diagram is established to generate the image stitching lookup table. When the panoramic image of the trailer is stitched, only the panorama stitching lookup table is needed: the second image corresponding to the first image shot by the current camera can be generated according to the mapping relation in the image stitching lookup table, so that the panoramic image of the trailer can be generated quickly, the calculation efficiency is greatly improved, and time is saved.
For example, when constructing a scatter diagram corresponding to each camera in a world coordinate system according to a shooting view range of each camera on a carriage, the number of key points included in the scatter diagram corresponding to each camera may be first determined according to the shooting view range of each camera on the carriage and a preset trailer panoramic image resolution, and then the scatter diagram corresponding to each camera may be constructed in the world coordinate system according to the number of key points, respectively, so as to determine camera coordinates and image coordinates corresponding to key points satisfying the number of key points in the scatter diagram corresponding to each camera, respectively. By determining the key points, the size of the panoramic stitching lookup table can be greatly reduced, and the table lookup efficiency is improved.
For example, when the number of key points included in the scatter diagram corresponding to each camera is determined according to the shooting view angle range of each camera on the vehicle compartment and the resolution of the preset panoramic image of the trailer, the ratio between the view angle width of the shooting view angle range of the first camera and the width of the preset panoramic range of the trailer can be calculated to obtain the transverse view angle ratio; calculating the ratio of the angle of view length of the shooting angle of view range of the first camera to the length of a preset panoramic range to obtain a longitudinal angle of view ratio; the first camera is any one of cameras on the carriage; calculating the product of the transverse visual angle ratio and the transverse resolution of the preset panoramic image of the trailer to obtain the transverse resolution of the scatter diagram of the first camera; determining the ratio of the transverse resolution of the scatter diagram of the first camera to a preset sampling interval as the number of transverse sampling key points included in the scatter diagram of the first camera; calculating the product of the longitudinal visual angle ratio and the longitudinal resolution of the preset panoramic image resolution to obtain the longitudinal resolution of the scatter diagram of the first camera, and determining the ratio of the longitudinal resolution of the scatter diagram of the first camera to the preset sampling interval as the number of longitudinal sampling key points in the scatter diagram of the first camera; and calculating the product of the number of the transverse sampling key points and the number of the longitudinal sampling key points to obtain the number of the key points included in the scatter diagram of the first camera.
For example, as shown in fig. 6, the trailer has a length of 5 meters and a width of 2 meters, the range of the shooting angles of the cameras in the front-rear direction of the trailer is 2.5 meters, the range of the shooting angles of the cameras in the left-right direction of the trailer is 2 meters, the preset panoramic image resolution is 720 × 1280, the horizontal resolution of the preset panoramic image resolution is 720, and the vertical resolution of the preset panoramic image resolution is 1280.
When the first camera is a camera in the front-rear direction of the trailer, the view angle width of the shooting view angle range of the first camera is 2 + 2 + 2 = 6, the width of the preset panoramic range is 6, and the transverse view angle ratio is 6 ÷ 6 = 1; the view angle length of the shooting view angle range of the first camera is 2.5, the length of the preset panoramic range is 10, and the longitudinal view angle ratio is 2.5 ÷ 10 = 0.25. The transverse resolution of the scatter diagram of the first camera is therefore 1 × 720 = 720, and the longitudinal resolution of the scatter diagram of the first camera is 0.25 × 1280 = 320.
When the preset sampling interval is 8, the number of transverse sampling key points included in the scatter diagram of the first camera is 720/8=90, the number of longitudinal sampling key points included in the scatter diagram of the first camera is 320/8=40, and finally the number of key points included in the scatter diagram of the first camera is 90 × 40= 3600.
When the first camera is a camera in the left-right direction of the trailer, the view angle length of the shooting view angle range of the first camera is 2.5 + 5 + 2.5 = 10, the length of the preset panoramic range is 10, and the longitudinal view angle ratio is 10 ÷ 10 = 1; the view angle width of the shooting view angle range of the first camera is 2, the width of the preset panoramic range is 6, and the transverse view angle ratio is 2 ÷ 6 = 1/3. The longitudinal resolution of the scatter diagram of the first camera is therefore 1 × 1280 = 1280, and the transverse resolution of the scatter diagram of the first camera is 1/3 × 720 = 240.
When the preset sampling interval is 8, the number of longitudinal sampling key points included in the scatter diagram of the first camera is 1280/8=160, the number of lateral sampling key points included in the scatter diagram of the first camera is 240/8=30, and finally the number of key points included in the scatter diagram of the first camera is 160 × 30= 4800.
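The short sketch below reproduces the key-point counting arithmetic of the worked example above; the function and parameter names are illustrative.

```python
def scatter_keypoint_count(view_w, view_l, pano_w, pano_l,
                           pano_res_x, pano_res_y, sample_step=8):
    """Number of scatter-diagram key points for one camera.

    view_w, view_l:        width/length (meters) of the camera's shooting view range.
    pano_w, pano_l:        width/length of the preset panoramic range of the trailer.
    pano_res_x/pano_res_y: lateral/longitudinal resolution of the preset panoramic image.
    sample_step:           preset sampling interval (8 in the worked example).
    """
    lateral_ratio = view_w / pano_w                    # transverse view-angle ratio
    longitudinal_ratio = view_l / pano_l               # longitudinal view-angle ratio
    scatter_res_x = lateral_ratio * pano_res_x         # scatter-diagram lateral resolution
    scatter_res_y = longitudinal_ratio * pano_res_y    # scatter-diagram longitudinal resolution
    nx = round(scatter_res_x / sample_step)            # lateral sampling key points
    ny = round(scatter_res_y / sample_step)            # longitudinal sampling key points
    return nx * ny


# Front/rear camera from the example: ratios 6/6 and 2.5/10 -> 90 * 40 = 3600 key points.
print(scatter_keypoint_count(6, 2.5, 6, 10, 720, 1280))   # 3600
# Side camera from the example: ratios 2/6 and 10/10 -> 30 * 160 = 4800 key points.
print(scatter_keypoint_count(2, 10, 6, 10, 720, 1280))    # 4800
```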
Further, view list information can be generated according to the mapping relation between the world coordinates in the scatter diagram corresponding to each camera and the image coordinates corresponding to each camera, a panorama file header is then generated, the panorama file header comprising identification information of each camera on the trailer, and the panorama file header and the view list information are packaged into the panorama stitching lookup table. Specifically, the panorama file header is constructed from information such as the identification of each camera, the lookup table file size, a crc check code and a version; private information is constructed from a preset image pixel calculation formula and calculation-process debugging information; and the panorama file header, the private information and the view list information are packaged into the panorama stitching lookup table, the final form of which is shown in fig. 7. The view list information comprises a plurality of pieces of view information in one-to-one correspondence with the cameras, and each piece of view information comprises the mapping relation between the world coordinates in the scatter diagram corresponding to the camera and the image coordinates corresponding to the camera.
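As an illustration of packaging the lookup table, the sketch below serializes a header (camera identifications, file size, crc check code, version), private information and the view list into one file. The patent names these fields but not their byte layout, so the binary format here is an assumption.

```python
import struct
import zlib


def pack_stitch_lut(camera_ids, version, view_tables, private_blob=b""):
    """Serialize the panorama stitching lookup table: header + private info + view list.

    camera_ids:   list of integer camera identifiers on the trailer.
    version:      integer format version.
    view_tables:  list of (world_xyz, image_uv) pair lists, one per camera.
    private_blob: opaque debugging / pixel-formula information.
    """
    # View list: for each camera, the count of mapping pairs followed by the pairs.
    body = bytearray()
    for pairs in view_tables:
        body += struct.pack("<I", len(pairs))
        for (wx, wy, wz), (u, v) in pairs:
            body += struct.pack("<5f", wx, wy, wz, u, v)
    body += private_blob

    crc = zlib.crc32(bytes(body))
    # Header: magic, version, camera count, camera ids, total file size, crc check code.
    header = struct.pack("<4sIB", b"PLUT", version, len(camera_ids))
    header += struct.pack("<%dI" % len(camera_ids), *camera_ids)
    header += struct.pack("<II", len(header) + 8 + len(body), crc)
    return header + bytes(body)
```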
In an optional implementation, when the second image corresponding to each first image is generated according to the mapping relation in the image stitching lookup table, the mapping relation between the world coordinates in the scatter diagram corresponding to each camera and the image coordinates corresponding to each camera is first read from the image stitching lookup table, and a first check value of the read mapping relation is calculated. Then, for any camera, if the first check value is the same as the read second check value of the mapping relation, the second image corresponding to the first image shot by the current camera is generated according to the mapping relation in the image stitching lookup table; the second check value of the mapping relation is a check value stored during generation of the image stitching lookup table. By calculating the check value of the read mapping relation and comparing it with the stored check value, the application ensures the correctness of the read mapping relation between the world coordinates in the scatter diagram and the image coordinates corresponding to each camera, thereby improving the accuracy of the image. The image coordinates may be coordinates of a fisheye image captured by a fisheye camera.
Alternatively, for any camera, if the first check value is different from the read second check value of the mapping relation, a pre-stored mapping relation is read from a preset storage partition, and the second image corresponding to the first image shot by the current camera is generated using the pre-stored mapping relation, wherein the pre-stored mapping relation is stored in advance and is used to represent the mapping relation between world coordinates and image coordinates within the camera's shooting view angle range, and the attribute of the preset storage partition is read-only. When the check values differ, the currently read mapping relation is inaccurate, possibly because it has been tampered with or an error occurred while reading the data. In this case, reading the pre-stored mapping relation from the preset storage partition ensures that image stitching is not interrupted when the mapping relation of one camera is read incorrectly; storing the pre-stored mapping relation in the preset storage partition also improves data security and ensures that the data cannot be tampered with.
The check value of the mapping relation is calculated in the same way during reading as during generation of the image stitching lookup table; the specific calculation method is not limited in the embodiment of the application and can be selected flexibly according to the calculation requirement.
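A minimal sketch of the check-value comparison described above, assuming CRC-32 as the check algorithm (the embodiment deliberately leaves the calculation method open) and a hypothetical fallback mapping held in the read-only partition:

```python
import zlib


def load_view_mapping(lut_mapping, stored_check, fallback_mapping):
    """Pick the mapping relation to use for one camera, guarding against corruption.

    lut_mapping:      mapping pairs read from the image stitching lookup table.
    stored_check:     second check value saved while the lookup table was generated.
    fallback_mapping: pre-stored mapping kept in a read-only storage partition.
    """
    first_check = zlib.crc32(repr(lut_mapping).encode())   # first check value
    if first_check == stored_check:
        # Check values match: the read mapping is intact, use it directly.
        return lut_mapping
    # Check values differ: the read mapping may be corrupted or tampered with,
    # so fall back to the pre-stored mapping so that stitching is not interrupted.
    return fallback_mapping
```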
In another optional implementation, when the second image corresponding to each first image is generated according to the mapping relation in the image stitching lookup table, interpolation may be performed on the mapping-relation pairs contained in the image stitching lookup table to obtain an extended image stitching lookup table. The number of mapping-relation pairs in the extended image stitching lookup table is greater than that in the image stitching lookup table; one world coordinate and its corresponding image coordinate form one mapping-relation pair. The mapping relation in the extended image stitching lookup table is then used to determine the first world coordinates corresponding to the first image coordinates in the first image, and the second image corresponding to the first image is generated based on the first world coordinates. Because the storage space required for storing all mapping relations would be large, the image stitching lookup table can be generated based only on the mapping relations of the key points; the number of mapping-relation pairs in the image stitching lookup table is then small, which saves storage space.
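The sketch below illustrates expanding a sparse key-point lookup table by interpolation and then warping one fisheye image with it; bilinear interpolation and the use of OpenCV's resize/remap are assumptions, as the patent does not name a specific interpolation method.

```python
import cv2
import numpy as np


def remap_with_sparse_lut(first_image, lut_uv, pano_h, pano_w):
    """Expand a sparse stitching lookup table and warp one first image with it.

    first_image: image shot by one compartment camera (e.g. a fisheye image).
    lut_uv:      (ny, nx, 2) array of image coordinates for the sampled key points,
                 i.e. the mapping pairs stored in the image stitching lookup table.
    pano_h/w:    pixel size of this camera's region in the panoramic (world) image.
    """
    # Interpolate the sparse key-point table up to one mapping pair per output pixel,
    # producing the extended image stitching lookup table.
    dense = cv2.resize(lut_uv.astype(np.float32), (pano_w, pano_h),
                       interpolation=cv2.INTER_LINEAR)
    map_x = np.ascontiguousarray(dense[..., 0])
    map_y = np.ascontiguousarray(dense[..., 1])
    # Each output (world-coordinate) pixel looks up its source pixel in the first image.
    return cv2.remap(first_image, map_x, map_y, cv2.INTER_LINEAR)
```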
In the embodiment of the application, when the panoramic image of the trailer is generated by fusing the compartment panoramic image and the tractor-head panoramic image, the corner points of the calibration boards on the tractor head and the compartment are first identified, the rotation angle between the tractor head and the compartment is calculated based on the identified corner points, and the compartment panoramic image and the tractor-head panoramic image are then stitched according to the rotation angle to generate the panoramic image of the trailer.
For example, when the compartment panoramic image and the tractor-head panoramic image are stitched according to the rotation angle, the tractor-head panoramic image is rotated, using a plane rotation formula and the rotation angle, until its included angle with the compartment panoramic image is 0 degrees. After the rotation is finished, pixel fusion of the tractor-head panoramic image and the compartment panoramic image is carried out, and the panoramic image of the trailer is obtained after fusion.
It should be noted that, when the calibration boards are installed on the trailer, they are arranged on both sides of the central axis at the front and rear of the trailer, at least two calibration boards are installed on each side of the trailer, and of the two calibration boards installed on each side, one is located on the tractor head and the other on the compartment. The installation positions of the calibration boards on the trailer are shown in fig. 2: for example, the No. 1 calibration board and the No. 3 calibration board can be seen in the image collected by the camera on one side of the trailer, and the rotation angle between the tractor head and the compartment can be calculated by identifying the corner points of these calibration boards. The No. 2 calibration board and the No. 4 calibration board can be seen in the image collected by the camera on the other side of the trailer, and the rotation angle between the tractor head and the compartment can likewise be calculated from them. Because the angle between the tractor head and the compartment must be calculated under different turning conditions of the trailer, calibration boards are installed on both the left and right sides; different calibration boards are identified according to the turning condition of the vehicle, so that in any case at least one of the two sides can always complete the calibration, the included angle between the front and rear vehicle bodies can be calculated, and the stitching accuracy is guaranteed.
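As a rough illustration of the rotation-and-fusion step, the sketch below rotates the tractor-head panorama by the measured angle and blends it with the compartment panorama; the rotation pivot and the equal-weight blend are assumptions, since the patent only states that a plane rotation formula is applied and the pixels are then fused.

```python
import cv2


def fuse_head_and_compartment(head_pano, compartment_pano, angle_deg, pivot, alpha=0.5):
    """Rotate the tractor-head panorama by the measured hitch angle and blend it.

    head_pano/compartment_pano: panoramas of equal size for the two vehicle parts.
    angle_deg: rotation angle between tractor head and compartment, obtained from
               the calibration-board corner points.
    pivot:     (x, y) rotation center in panorama pixels (e.g. the hitch point).
    alpha:     blend weight in the overlap; simple averaging is assumed here.
    """
    h, w = compartment_pano.shape[:2]
    # Plane rotation that brings the head panorama to a 0-degree included angle.
    M = cv2.getRotationMatrix2D(pivot, angle_deg, 1.0)
    head_aligned = cv2.warpAffine(head_pano, M, (w, h))
    # Pixel fusion of the two panoramas.
    return cv2.addWeighted(head_aligned, alpha, compartment_pano, 1.0 - alpha, 0)
```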
For example, as shown in fig. 8, which is a schematic block diagram of the image generation process of the trailer in the application, after the vehicle computer of the trailer is powered on, the compartment camera parameters of each camera on the compartment are first read, the compartment camera parameters comprising the camera internal parameters and external parameters; the pre-stored historical compartment camera parameters in the controller are then read, and the read compartment camera parameters are compared with the pre-stored historical compartment camera parameters. If they are inconsistent, a panorama stitching lookup table is generated according to the read compartment camera parameters, and the panoramic image of the trailer is finally obtained according to the regenerated panorama stitching lookup table and the images shot by each camera on the trailer. If they are consistent, the pre-stored panorama stitching lookup table is obtained, and the panoramic image of the trailer is generated according to the pre-stored panorama stitching lookup table and the images shot by each camera on the trailer.
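The control flow of fig. 8 can be summarized by the sketch below; the camera/controller interfaces and the two callables passed in (the lookup-table builder and the stitcher) are hypothetical placeholders for the routines described in the preceding embodiments.

```python
def generate_trailer_panorama(cameras, controller, build_lookup_table, stitch_panorama):
    """Power-on flow of fig. 8: reuse or regenerate the panorama stitching lookup table.

    cameras:            objects exposing read_parameters() and capture() (hypothetical API).
    controller:         holds historical_params and stored_lut (hypothetical API).
    build_lookup_table: routine that builds the stitching lookup table from parameters.
    stitch_panorama:    routine that warps and stitches the first images with the table.
    """
    read_params = [cam.read_parameters() for cam in cameras]

    if read_params != controller.historical_params:
        # Parameters differ from the stored history: the compartment was replaced,
        # so regenerate the lookup table from the parameters read out of the cameras.
        lut = build_lookup_table(read_params)
        controller.historical_params = read_params
        controller.stored_lut = lut
    else:
        # Same compartment as before: reuse the pre-stored lookup table.
        lut = controller.stored_lut

    first_images = [cam.capture() for cam in cameras]
    return stitch_panorama(lut, first_images)
```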
In the embodiment of the application, an image generation device of a trailer first reads the compartment camera parameters stored by each camera on the compartment, wherein the compartment camera parameters comprise the external parameters and internal parameters of each camera; acquires a first image shot by each camera on the compartment; and generates a panoramic image of the trailer based on the compartment camera parameters and the first image. Because the compartment camera parameters are stored in the cameras, the external parameters do not need to be re-calibrated after the compartment is replaced, and a misalignment-free stitched panoramic image of the trailer can be generated directly from the read compartment camera parameters stored by each camera on the compartment, thereby improving the image stitching efficiency of the panoramic image of the trailer.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 9, a schematic structural diagram of an image generating apparatus of a trailer according to an exemplary embodiment of the present application is shown, wherein a compartment of the trailer is replaceable, and a preset number of cameras are disposed on the compartment. The image generation device of the trailer may be implemented as all or part of the terminal by software, hardware or a combination of both. The device 1 comprises a compartment camera parameter acquisition module 10, an image acquisition module 20 and a panoramic image generation module 30.
The car camera parameter acquiring module 10 is used for reading car camera parameters stored by each camera on a car, wherein the car camera parameters comprise external parameters of each camera and internal parameters of each camera;
the image acquisition module 20 is used for acquiring a first image shot by each camera on the carriage;
and a panoramic image generation module 30, configured to generate a panoramic image of the trailer based on the compartment camera parameters and the first image.
It should be noted that, when the image generation device of the trailer provided in the above embodiment executes the image generation method of the trailer, the division into the above functional modules is only used as an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image generation device of the trailer and the image generation method embodiments of the trailer provided by the above embodiments belong to the same concept; for details of the implementation process, refer to the method embodiments, which are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiment of the application, an image generation device of a trailer first reads the compartment camera parameters stored by each camera on the compartment, wherein the compartment camera parameters comprise the external parameters and internal parameters of each camera; acquires a first image shot by each camera on the compartment; and generates a panoramic image of the trailer based on the compartment camera parameters and the first image. Because the compartment camera parameters are stored in the cameras, the external parameters do not need to be re-calibrated after the compartment is replaced, and a misalignment-free stitched panoramic image of the trailer can be generated directly from the read compartment camera parameters stored by each camera on the compartment, thereby improving the image stitching efficiency of the panoramic image of the trailer.
Optionally, the panoramic image generation module 30 includes:
the image stitching lookup table generating unit is used for generating an image stitching lookup table according to the read compartment camera parameters under the condition that the read compartment camera parameters are inconsistent with the pre-stored historical compartment camera parameters, wherein the image stitching lookup table comprises a mapping relation between world coordinates in a scatter diagram corresponding to each camera on the compartment and image coordinates corresponding to each camera, coordinate points in the scatter diagram corresponding to each camera are located in the respective shooting visual angle range of each camera, the coordinate points in the scatter diagram have determined world coordinate values, and the image coordinates are coordinate points expressed based on an image coordinate system;
the second image generation unit is used for generating a second image corresponding to each first image according to the mapping relation in the image splicing lookup table, wherein the second image is an image represented by a world coordinate system;
and the image splicing unit is used for splicing at least the second image to generate a panoramic image of the trailer.
Optionally, the image stitching lookup table generating unit includes:
a world coordinate system establishing subunit, configured to establish a world coordinate system according to a preset center;
the system comprises a scatter diagram constructing subunit, a scatter diagram constructing unit and a scatter diagram constructing unit, wherein the scatter diagram constructing subunit is used for constructing a scatter diagram corresponding to each camera in a world coordinate system according to the shooting visual angle range of each camera on a carriage;
the camera coordinate determining subunit is used for determining camera coordinates corresponding to the coordinate points in the scatter diagram corresponding to each camera according to the external parameters of each camera in the compartment camera parameters, wherein the camera coordinates are the coordinate points expressed based on the camera coordinate system;
the image coordinate determining subunit is used for determining image coordinates corresponding to the coordinate points in the scatter diagram corresponding to each camera according to the internal parameters of each camera in the carriage camera parameters and the camera coordinates corresponding to the coordinate points in the scatter diagram corresponding to each camera;
and the corresponding relation establishing subunit is used for establishing the corresponding relation between the world coordinates in the scatter diagram corresponding to each camera and the image coordinates corresponding to the coordinate points in the scatter diagram so as to generate an image splicing lookup table.
Optionally, the scattergram constructing subunit includes:
the first determining subunit is used for determining the number of key points in the scatter diagram corresponding to each camera according to the shooting visual angle range of each camera on the carriage and the resolution of the panoramic image of the trailer;
the first constructing subunit is configured to construct, according to the number of the key points, a scatter diagram corresponding to each camera in a world coordinate system, so as to determine camera coordinates and image coordinates corresponding to key points satisfying the number of the key points in the scatter diagram corresponding to each camera.
Optionally, the first determining subunit is specifically configured to:
calculating the ratio of the view angle width of the shooting view angle range of the first camera to the width of a preset panoramic range of the trailer to obtain a transverse view angle ratio; calculating the ratio of the angle of view length of the shooting angle of view range of the first camera to the length of a preset panoramic range to obtain a longitudinal angle of view ratio; the first camera is any one of cameras on the carriage;
calculating the product of the transverse visual angle ratio and the transverse resolution of the preset panoramic image of the trailer to obtain the transverse resolution of the scatter diagram of the first camera; determining the ratio of the transverse resolution of the scatter diagram of the first camera to a preset sampling interval as the number of transverse sampling key points in the scatter diagram of the first camera;
calculating the product of the longitudinal visual angle ratio and the longitudinal resolution of the preset panoramic image resolution to obtain the longitudinal resolution of the scatter diagram of the first camera, and determining the ratio of the longitudinal resolution of the scatter diagram of the first camera to the preset sampling interval as the number of longitudinal sampling key points in the scatter diagram of the first camera;
and calculating the product of the number of the transverse sampling key points and the number of the longitudinal sampling key points to obtain the number of the key points in the scatter diagram of the first camera.
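The key point calculation above is plain arithmetic and can be checked with a short sketch; the concrete numbers below (a 1920x1080 panoramic image, a camera whose shooting visual angle range covers half the panoramic width and one third of its length, and a sampling interval of 8) are hypothetical and chosen only to make the ratios tangible.

    def keypoint_count(view_width, view_length, pano_width, pano_length,
                       pano_res_x, pano_res_y, sampling_interval):
        ratio_x = view_width / pano_width             # transverse visual angle ratio
        ratio_y = view_length / pano_length           # longitudinal visual angle ratio
        scatter_res_x = ratio_x * pano_res_x          # transverse resolution of the scatter diagram
        scatter_res_y = ratio_y * pano_res_y          # longitudinal resolution of the scatter diagram
        n_x = int(scatter_res_x / sampling_interval)  # number of transverse sampling key points
        n_y = int(scatter_res_y / sampling_interval)  # number of longitudinal sampling key points
        return n_x * n_y

    # 0.5 * 1920 / 8 = 120 transverse and (1/3) * 1080 / 8 = 45 longitudinal
    # sampling key points, i.e. 5400 key points in this camera's scatter diagram
    print(keypoint_count(6.0, 4.0, 12.0, 12.0, 1920, 1080, 8))  # prints 5400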
Optionally, the second image generation unit comprises:
the check value reading subunit is used for reading the mapping relation between the world coordinates in the scatter diagram corresponding to each camera in the image splicing lookup table and the image coordinates corresponding to each camera, and calculating a first check value of the read mapping relation;
the second image generation subunit is used for, for any camera, generating a second image corresponding to the first image shot by the current camera according to the mapping relation in the image splicing lookup table if the first check value is the same as the read second check value of the mapping relation; the second check value of the mapping relation is a check value stored in the process of generating the image splicing lookup table;
or, alternatively,
and the second image generation subunit is used for reading the pre-stored mapping relation from the preset storage partition if the first check value is different from the read second check value of the mapping relation for any camera, and generating a second image corresponding to the first image shot by the current camera by using the pre-stored mapping relation, wherein the pre-stored mapping relation is pre-stored and is used for representing the mapping relation between the world coordinate and the image coordinate within the camera shooting visual angle range, and the attribute of the preset storage partition is a read-only storage partition.
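A sketch of this check-value comparison is given below. The application does not name a particular checksum algorithm or storage layout, so the CRC32 choice, the JSON encoding and the file paths are assumptions made purely for illustration.

    import json
    import zlib

    def load_stitching_lut(lut_path, stored_check_path, readonly_fallback_path):
        with open(lut_path, "rb") as f:
            raw = f.read()
        first_check = zlib.crc32(raw)             # first check value, computed on the read mapping relation
        with open(stored_check_path) as f:
            second_check = int(f.read().strip())  # second check value, stored when the lookup table was generated
        if first_check == second_check:
            return json.loads(raw)                # mapping relation is intact, use it directly
        # check values differ: fall back to the pre-stored mapping relation kept
        # in the read-only storage partition
        with open(readonly_fallback_path) as f:
            return json.load(f)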
Optionally, the second image generation unit comprises:
the interpolation processing subunit is used for carrying out interpolation processing on the mapping relation pairs included in the image splicing lookup table to obtain an extended image splicing lookup table, the number of the mapping relation pairs included in the extended image splicing lookup table is greater than that of the mapping relation pairs included in the image splicing lookup table, and a world coordinate and a corresponding image coordinate form a mapping relation pair;
and the coordinate conversion subunit is used for determining a first world coordinate corresponding to the first image coordinate in the first image by utilizing the mapping relation in the extended image stitching lookup table, and generating a second image corresponding to the first image based on the first world coordinate.
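The interpolation step can be illustrated on one row of mapping pairs with simple linear interpolation, as in the sketch below; this is an assumption for illustration only, since a practical implementation would typically interpolate bilinearly over the full two-dimensional scatter grid.

    import numpy as np

    def densify_row(world_xy, image_uv, factor=4):
        """Insert interpolated mapping pairs between neighbouring key points.

        world_xy, image_uv: (N, 2) arrays holding one row of world coordinates
        and their corresponding image coordinates from the lookup table.
        Returns arrays containing (N - 1) * factor + 1 mapping pairs.
        """
        n = len(world_xy)
        t_sparse = np.arange(n)
        t_dense = np.linspace(0, n - 1, (n - 1) * factor + 1)
        world_dense = np.stack(
            [np.interp(t_dense, t_sparse, world_xy[:, k]) for k in range(2)], axis=1)
        image_dense = np.stack(
            [np.interp(t_dense, t_sparse, image_uv[:, k]) for k in range(2)], axis=1)
        return world_dense, image_dense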
The present application further provides a computer readable medium, on which program instructions are stored, and when the program instructions are executed by a processor, the method for generating an image of a towed vehicle provided by the above method embodiments is implemented.
The present application also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of image generation of a towed vehicle of the various method embodiments described above.
Please refer to fig. 10, which provides a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 10, terminal 1000 can include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
The communication bus 1002 is used to implement connection communication among these components.
The user interface 1003 may include a display screen and a camera; optionally, the user interface 1003 may also include a standard wired interface and a standard wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 is connected to various parts of the terminal 1000 by using various interfaces and lines, and performs various functions of the terminal 1000 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1005 and by calling the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; and the modem is used to handle wireless communications. It is to be understood that the modem may also not be integrated into the processor 1001 and may instead be implemented by a separate processor or processing unit.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets or instruction sets. The memory 1005 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the various method embodiments described above, and the like; and the data storage area may store the data and the like referred to in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 10, the memory 1005, which is a type of computer storage medium, may include an operating system, a network communication module, a user interface module and an image generation application program for a trailer.
In the terminal 1000 shown in fig. 10, the user interface 1003 is mainly used to provide an input interface for a user and to acquire the data input by the user, while the processor 1001 may be configured to invoke the image generation application of the towed vehicle stored in the memory 1005 and specifically perform the following operations:
the method comprises the steps of reading compartment camera parameters stored by each camera on a compartment, wherein the compartment camera parameters comprise external parameters of each camera and internal parameters of each camera;
acquiring a first image shot by each camera on a carriage;
participating in generating a panoramic image of the towed vehicle based on the car camera parameters and the first image.
It should be appreciated that the processor 1001 may also perform the image generation method of the towed vehicle as described in any of the preceding method embodiments.
In the embodiment of the application, an image generation device of a trailer first reads the compartment camera parameters stored by each camera on the compartment, wherein the compartment camera parameters comprise the external parameters of each camera and the internal parameters of each camera; then acquires a first image shot by each camera on the compartment; and participates in generating a panoramic image of the towed vehicle based on the compartment camera parameters and the first images. Because the compartment camera parameters are stored in the cameras themselves, the first images can be spliced directly and without misalignment according to the compartment camera parameters read from each camera on the compartment, with no need to recalibrate the external parameters after the compartment is replaced, thereby improving the image splicing efficiency of the panoramic image of the trailer.
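The overall flow can be summarised in a short sketch. The camera objects with read_stored_parameters() and capture_frame() methods and the build_lut, warp_to_world and stitch callables are hypothetical placeholders for the steps described above; only the control flow (reuse the lookup table while the parameters read from the cameras are unchanged, and rebuild it when they change) reflects the described behaviour.

    def generate_trailer_panorama(cameras, build_lut, warp_to_world, stitch,
                                  cached_params=None, cached_lut=None):
        # read the compartment camera parameters stored in each camera itself
        params = [cam.read_stored_parameters() for cam in cameras]
        # acquire the first image shot by each camera on the compartment
        frames = [cam.capture_frame() for cam in cameras]
        if cached_lut is None or params != cached_params:
            # the compartment was replaced (or this is the first run): regenerate
            # the splicing lookup table from the parameters read off the cameras,
            # with no manual recalibration of the external parameters
            cached_lut = [build_lut(p) for p in params]
        # second images: the first images re-expressed in the world coordinate system
        second_images = [warp_to_world(f, lut) for f, lut in zip(frames, cached_lut)]
        return stitch(second_images), params, cached_lut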
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program to instruct related hardware, and the image generation program of the towed vehicle may be stored in a computer readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not to be construed as limiting the scope of the present application; the present application is not limited to these embodiments, and all equivalent variations and modifications may be made to the present application.

Claims (10)

1. An image generation method for a towed vehicle, wherein a compartment of the towed vehicle is replaceable, and a predetermined number of cameras are disposed on the compartment, the method comprising:
reading compartment camera parameters stored by each camera on a compartment, wherein the compartment camera parameters comprise external parameters of each camera and internal parameters of each camera;
acquiring a first image shot by each camera on the carriage;
and participating in generating a panoramic image of the towed vehicle based on the car camera parameters and the first image.
2. The method of claim 1, wherein participating in generating a panoramic image of the towed vehicle based on the car camera parameters and the first image comprises:
under the condition that the read compartment camera parameters are inconsistent with prestored historical compartment camera parameters, generating an image splicing lookup table according to the read compartment camera parameters, wherein the image splicing lookup table comprises a mapping relation between world coordinates in a scatter diagram corresponding to each camera on the compartment and image coordinates corresponding to each camera, coordinate points in the scatter diagram corresponding to each camera are located in the respective shooting visual angle range of each camera, the coordinate points in the scatter diagram have determined world coordinate values, and the image coordinates are coordinate points represented based on an image coordinate system;
generating a second image corresponding to each first image according to the mapping relation in the image splicing lookup table, wherein the second image is an image represented by a world coordinate system;
and at least splicing the second images to generate a panoramic image of the trailer.
3. The method of claim 2, wherein generating an image stitching look-up table from the read car camera parameters comprises:
establishing a world coordinate system according to a preset center;
constructing a scatter diagram corresponding to each camera in the world coordinate system according to the shooting visual angle range of each camera on the carriage;
determining camera coordinates corresponding to coordinate points in a scatter diagram corresponding to each camera according to external parameters of each camera in the compartment camera parameters, wherein the camera coordinates are coordinate points expressed based on a camera coordinate system;
determining image coordinates corresponding to the coordinate points in the scatter diagram corresponding to each camera according to the internal parameters of each camera in the carriage camera parameters and the camera coordinates corresponding to the coordinate points in the scatter diagram corresponding to each camera;
and establishing a corresponding relation between the world coordinates in the scatter diagram corresponding to each camera and the image coordinates corresponding to the coordinate points in the scatter diagram to generate the image splicing lookup table.
4. The method according to claim 3, wherein constructing a scatter plot corresponding to each camera in the world coordinate system according to the shooting view angle range of each camera on the vehicle cabin comprises:
determining the number of key points in a scatter diagram corresponding to each camera according to the shooting visual angle range of each camera on the carriage and the resolution of a preset trailer panoramic image;
and respectively constructing a scatter diagram corresponding to each camera in the world coordinate system according to the number of the key points, so as to determine the camera coordinates and the image coordinates respectively corresponding to the key points which meet the number of the key points in the scatter diagram corresponding to each camera.
5. The method according to claim 4, wherein determining the number of key points included in the scatter diagram corresponding to each camera according to the shooting view range of each camera on the compartment and the preset trailer panoramic image resolution comprises:
calculating the ratio of the view angle width of the shooting view angle range of the first camera to the width of the preset panoramic range of the trailer to obtain a transverse view angle ratio; calculating the ratio of the length of the visual angle of the shooting visual angle range of the first camera to the length of the preset panoramic range to obtain the ratio of the longitudinal visual angles; wherein the first camera is any one of the cameras on the carriage;
calculating the product of the transverse visual angle ratio and the transverse resolution of the preset panoramic image resolution of the trailer to obtain the transverse resolution of the scatter diagram of the first camera; determining the ratio of the transverse resolution of the scatter diagram of the first camera to a preset sampling interval as the number of transverse sampling key points included in the scatter diagram of the first camera;
calculating the product of the longitudinal visual angle ratio and the longitudinal resolution of the preset panoramic image resolution to obtain the longitudinal resolution of the scatter diagram of the first camera, and determining the ratio of the longitudinal resolution of the scatter diagram of the first camera to the preset sampling interval as the number of longitudinal sampling key points in the scatter diagram of the first camera;
and calculating the product of the number of the transverse sampling key points and the number of the longitudinal sampling key points to obtain the number of the key points in the scatter diagram of the first camera.
6. The method according to claim 2, wherein generating the second image corresponding to each first image according to the mapping relationship in the image stitching lookup table comprises:
reading a mapping relation between world coordinates in a scatter diagram corresponding to each camera in the image splicing lookup table and image coordinates corresponding to each camera, and calculating a first check value of the read mapping relation;
for any camera, if the first check value is the same as the read second check value of the mapping relation, generating a second image corresponding to the first image shot by the current camera according to the mapping relation in the image stitching lookup table; the second check value of the mapping relation is a check value stored in the process of generating the image splicing lookup table;
or, alternatively,
for any camera, if the first check value is different from the read second check value of the mapping relationship, reading a pre-stored mapping relationship from a preset storage partition, and generating a second image corresponding to a first image shot by the current camera by using the pre-stored mapping relationship, wherein the pre-stored mapping relationship is pre-stored and is used for representing the mapping relationship between world coordinates and image coordinates in a camera shooting visual angle range, and the attribute of the preset storage partition is a read-only storage partition.
7. The method according to claim 2, wherein generating the second image corresponding to each first image according to the mapping relationship in the image stitching lookup table comprises:
interpolating the mapping relation pairs in the image stitching lookup table to obtain an extended image stitching lookup table, wherein the number of the mapping relation pairs in the extended image stitching lookup table is greater than the number of the mapping relation pairs in the image stitching lookup table, and a world coordinate and a corresponding image coordinate form a mapping relation pair;
and determining a first world coordinate corresponding to a first image coordinate in the first image by utilizing the mapping relation in the extended image splicing lookup table, and generating a second image corresponding to the first image based on the first world coordinate.
8. An image generating apparatus of a towed vehicle, wherein a compartment of the towed vehicle is replaceable, and a predetermined number of cameras are disposed on the compartment, the apparatus comprising:
a carriage camera parameter acquisition module, which is used for reading the carriage camera parameters stored by each camera on a carriage, wherein the carriage camera parameters comprise external parameters of each camera and internal parameters of each camera;
the image acquisition module is used for acquiring a first image shot by each camera on the carriage;
and the panoramic image generation module is used for participating in generating the panoramic image of the trailer based on the compartment camera parameters and the first image.
9. A computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the method according to any of claims 1-7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method according to any of claims 1-7.
CN202211685736.8A 2022-12-27 2022-12-27 Image generation method and device of trailer, storage medium and terminal Pending CN115880142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211685736.8A CN115880142A (en) 2022-12-27 2022-12-27 Image generation method and device of trailer, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211685736.8A CN115880142A (en) 2022-12-27 2022-12-27 Image generation method and device of trailer, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN115880142A true CN115880142A (en) 2023-03-31

Family

ID=85754760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211685736.8A Pending CN115880142A (en) 2022-12-27 2022-12-27 Image generation method and device of trailer, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN115880142A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116202424A (en) * 2023-04-28 2023-06-02 深圳一清创新科技有限公司 Vehicle body area detection method, tractor and tractor obstacle avoidance system
CN116202424B (en) * 2023-04-28 2023-08-04 深圳一清创新科技有限公司 Vehicle body area detection method, tractor and tractor obstacle avoidance system

Similar Documents

Publication Publication Date Title
CN110264520B (en) Vehicle-mounted sensor and vehicle pose relation calibration method, device, equipment and medium
CN111223038B (en) Automatic splicing method of vehicle-mounted looking-around images and display device
CN110288527B (en) Panoramic aerial view generation method of vehicle-mounted panoramic camera
JP5739584B2 (en) 3D image synthesizing apparatus and method for visualizing vehicle periphery
US8817079B2 (en) Image processing apparatus and computer-readable recording medium
JP5455124B2 (en) Camera posture parameter estimation device
CN109948398B (en) Image processing method for panoramic parking and panoramic parking device
CN108052910A (en) A kind of automatic adjusting method, device and the storage medium of vehicle panoramic imaging system
CN110728638A (en) Image distortion correction method, vehicle machine and vehicle
CN109754363B (en) Around-the-eye image synthesis method and device based on fish eye camera
KR20090078463A (en) Distorted image correction apparatus and method
TWI536313B (en) Method for adjusting vehicle panorama system
CN112399158A (en) Projection image calibration method and device and projection equipment
CN109635639B (en) Method, device, equipment and storage medium for detecting position of traffic sign
CN110400255B (en) Vehicle panoramic image generation method and system and vehicle
CN113658262B (en) Camera external parameter calibration method, device, system and storage medium
JP6151535B2 (en) Parameter acquisition apparatus, parameter acquisition method and program
CN115880142A (en) Image generation method and device of trailer, storage medium and terminal
CN111815752B (en) Image processing method and device and electronic equipment
Buljeta et al. Surround view algorithm for parking assist system
CN110610523A (en) Automobile look-around calibration method and device and computer readable storage medium
US20220222947A1 (en) Method for generating an image of vehicle surroundings, and apparatus for generating an image of vehicle surroundings
CN113496527A (en) Vehicle environment image calibration method, device, system and storage medium
CN118279414A (en) External parameter calibration method, device and equipment
CN112967173B (en) Image generation method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination