CN115883754A - Image processing method, image processing apparatus, vehicle, and computer-readable storage medium - Google Patents

Image processing method, image processing apparatus, vehicle, and computer-readable storage medium

Info

Publication number
CN115883754A
CN115883754A (application CN202211486575.XA)
Authority
CN
China
Prior art keywords
image
images
vehicle
bowl
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211486575.XA
Other languages
Chinese (zh)
Inventor
韦添元
陈光辉
宋灵杰
郭昌坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd filed Critical Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202211486575.XA priority Critical patent/CN115883754A/en
Publication of CN115883754A publication Critical patent/CN115883754A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, a vehicle, and a computer-readable storage medium. The method comprises the following steps: acquiring a first image currently captured by each camera device, and obtaining an RGB-format image to be processed corresponding to each first image; determining the images to be stitched corresponding to the images to be processed based on the grid regions of the bowl-shaped model corresponding to the vehicle; and performing an image stitching operation on the images to be stitched in the bowl-shaped model based on the grid regions to obtain a target image. The wide-view images captured by the camera devices are converted into planar RGB images, and the images to be stitched extracted from those planar images are added to the bowl-shaped model to obtain a three-dimensional target image. This reduces the image distortion that makes users feel dizzy when viewing, thereby realizing three-dimensional display of the parking image and improving the field of view and user experience of the panoramic parking system.

Description

Image processing method, image processing apparatus, vehicle, and computer-readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, a vehicle, and a computer-readable storage medium.
Background
With the improvement of economic and living standards in China, car ownership has gradually increased, and the parking problem draws growing attention. The growing number of vehicles also makes parking more difficult; for inexperienced drivers reversing in crowded and narrow places such as streets, garages and parking lots, the blind areas of the conventional optical rearview mirrors easily lead to scrapes.
In order to present the situation around the vehicle to the driver more intuitively, the panoramic parking system came into being. In a panoramic parking system, cameras mounted at the front, rear, left and right of the vehicle body capture the surrounding road surface, and a set of image processing techniques synthesizes the captured images into a complete top-down view displayed on a liquid crystal screen. The driver can thus see the whole area around the vehicle body, front, rear, left and right, without any dead angle or blind area, which brings great convenience to the parking process and to driving safety.
Existing panoramic parking systems take a two-dimensional form and display a two-dimensional image on the screen. Although this realizes a panoramic mode and is a great advance, the viewing range of the two-dimensional display mode is narrow, and enlarging it distorts the image and makes users feel dizzy when viewing, so it cannot meet users' requirements.
The above is only for the purpose of assisting understanding of the technical solutions of the present application, and does not represent an admission that the above is prior art.
Disclosure of Invention
The present application mainly aims to provide an image processing method, an image processing apparatus, a vehicle, and a computer-readable storage medium, and aims to solve the technical problem that the two-dimensional display mode in existing parking systems has a narrow viewing range.
In order to achieve the above object, the present application provides an image processing method applied to a vehicle provided with a plurality of image pickup devices for picking up surrounding environment images, comprising:
acquiring a first image currently shot by each camera device, and acquiring an image to be processed in an RGB format corresponding to each first image;
determining images to be spliced corresponding to the images to be processed based on the grid area of the bowl-shaped model corresponding to the vehicle;
and performing image splicing operation on the images to be spliced in the bowl-shaped model based on the grid area to obtain a target image.
Further, the step of determining the images to be stitched corresponding to the images to be processed based on the mesh region of the bowl-shaped model corresponding to the vehicle includes:
acquiring area coordinates corresponding to each grid area and external parameters corresponding to the camera device;
and determining the images to be spliced corresponding to the images to be processed based on the area coordinates and the external parameters.
Further, the step of determining the images to be stitched corresponding to the images to be processed based on the region coordinates and the external parameters includes:
respectively determining mapping areas corresponding to the grid areas in the images to be processed based on the area coordinates and the external parameters;
and taking the image in the mapping area in the image to be processed as the image to be spliced corresponding to each image to be processed.
Further, the step of determining the images to be stitched corresponding to the images to be processed based on the mesh region of the bowl-shaped model corresponding to the vehicle includes:
acquiring a preset mapping area corresponding to each grid area;
and taking the image in the preset mapping area in the image to be processed as the image to be spliced corresponding to each image to be processed.
Further, the step of performing an image stitching operation on each image to be stitched in the bowl-shaped model based on the mesh region to obtain a target image includes:
acquiring an edge fusion area and transparency of each image to be spliced;
splicing the images to be spliced to the corresponding grid areas in the bowl-shaped model to obtain a first image;
adjusting the transparency of the edge blending region in the first image based on the transparency to obtain a second image;
filling a vehicle image of the vehicle at the bottom of the second image to obtain the target image.
Further, the step of acquiring the to-be-processed image in RGB format corresponding to each of the first images includes:
and carrying out format conversion operation on each first image to obtain the to-be-processed image in the RGB format corresponding to each first image.
Further, before the step of acquiring the first image currently captured by each of the image capturing devices, the method further includes:
vehicle information of the vehicle is obtained, and a bowl-shaped model corresponding to the vehicle is determined based on the vehicle information.
Further, after the step of determining the bowl model corresponding to the vehicle based on the vehicle information, the method further includes:
and carrying out grid segmentation on the bowl-shaped model to obtain a grid area corresponding to each camera device.
Further, before the step of acquiring the first image currently captured by each of the image capturing devices, the method further includes:
acquiring a second image shot by each camera device in the current environment;
determining image coordinates of a plurality of preset points in the current environment in the second image, and acquiring space coordinates corresponding to each preset point;
and respectively determining external parameters corresponding to the camera devices based on the image coordinates and the space coordinates.
Further, the step of determining the external parameters corresponding to the respective image capturing devices based on the image coordinates and the space coordinates includes:
acquiring internal parameters corresponding to the camera device;
and respectively determining external parameters corresponding to the camera devices based on the internal parameters, the image coordinates and the space coordinates.
Further, the step of determining the external parameters corresponding to the respective image capturing devices based on the internal parameters, the image coordinates, and the spatial coordinates includes:
determining projection coordinates of projection points corresponding to the preset points based on the image coordinates and the internal parameters;
and respectively determining external parameters corresponding to the camera devices based on the projection coordinates and the space coordinates.
Further, the camera devices are fisheye cameras, and the fisheye cameras include at least a front-view fisheye camera, a rear-view fisheye camera, a left-view fisheye camera and a right-view fisheye camera.
Further, to achieve the above object, the present application also provides a vehicle provided with a plurality of image pickup devices for picking up an image of a surrounding environment, the vehicle including:
the acquisition module is used for acquiring a first image currently shot by each camera device and acquiring an image to be processed in an RGB format corresponding to each first image;
the determining module is used for determining the images to be spliced corresponding to the images to be processed based on the grid areas of the bowl-shaped models corresponding to the vehicles;
and the image splicing module is used for carrying out image splicing operation on the images to be spliced in the bowl-shaped model based on the grid area so as to obtain a target image.
In order to achieve the above object, the present application also provides an image processing apparatus comprising: a memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program, when executed by the processor, implementing the steps of the image processing method as described above.
Further, to achieve the above object, the present application also provides a computer-readable storage medium having stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method as described above.
The method acquires the first images currently captured by the camera devices and the RGB-format images to be processed corresponding to them; determines the images to be stitched corresponding to the images to be processed based on the grid regions of the bowl-shaped model corresponding to the vehicle; and then performs the image stitching operation on the images to be stitched in the bowl-shaped model based on the grid regions to obtain the target image. The wide-view images captured by the camera devices are converted into planar RGB images, and the images to be stitched extracted from those planar images are added to the bowl-shaped model to obtain a three-dimensional target image. This reduces the image distortion that makes users feel dizzy when viewing, thereby realizing three-dimensional display of the parking image and improving the field of view and user experience of the panoramic parking system.
Drawings
Fig. 1 is a schematic structural diagram of an image processing apparatus in a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of an image processing method according to the present application;
FIG. 3 is a schematic view of a bowl-shaped model in the image processing method of the present application;
FIG. 4 is a schematic scene diagram of a grid region of a bowl-shaped model in the image processing method of the present application;
FIG. 5 is another schematic scene diagram of a grid region of a bowl-shaped model in the image processing method of the present application;
FIG. 6 is a functional block diagram of an embodiment of a vehicle according to the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1, fig. 1 is a schematic structural diagram of an image processing apparatus in a hardware operating environment according to an embodiment of the present application.
The image processing device in the embodiment of the application can be a vehicle. As shown in fig. 1, the image processing apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Alternatively, the image processing apparatus may further include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. Such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display according to the brightness of ambient light, and a proximity sensor that turns off the display and/or the backlight when the mobile terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when the mobile terminal is stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration), vibration recognition related functions (such as pedometer and tapping) and the like for recognizing the attitude of the mobile terminal; of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 does not constitute a limitation of the image processing apparatus, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an image processing program.
In the image processing apparatus shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be used to call up an image processing program stored in the memory 1005.
In the present embodiment, an image processing apparatus includes: the image processing system comprises a memory 1005, a processor 1001 and an image processing program which is stored on the memory 1005 and can run on the processor 1001, wherein when the processor 1001 calls the image processing program stored in the memory 1005, the steps of the image processing method in each embodiment are executed.
The present application further provides an image processing method, and referring to fig. 2, fig. 2 is a schematic flowchart of a first embodiment of the image processing method of the present application.
The image processing method is applied to a vehicle provided with a plurality of camera devices for capturing surrounding environment images. The camera devices may be fisheye cameras, including at least a front-view fisheye camera, a rear-view fisheye camera, a left-view fisheye camera and a right-view fisheye camera, which respectively capture the front image, rear image, left image and right image of the vehicle.
The image processing method comprises the following steps:
step S101, acquiring a first image currently shot by each camera device, and acquiring an image to be processed in an RGB format corresponding to each first image;
when the vehicle is parked, the shooting operation of the camera device can be triggered through manual operation or voice operation of a user (driver), or when the vehicle arrives, the shooting operation of the camera device is automatically triggered, when the camera device shoots, the vehicle acquires a first image currently shot by each camera device, the camera device can be a fisheye camera, the fisheye camera comprises a front fisheye camera, a rear fisheye camera, a left fisheye camera and a right fisheye camera, the first image comprises a current front image, a rear image, a left image and a right image of the vehicle, and the current front image, the rear image, the left image and the right image of the vehicle can be images shot by each fisheye camera at the same time. The camera device is fixedly installed on the vehicle, and the shooting angle of each camera device is a fixed angle.
When the first images are acquired, format conversion is performed on each first image to obtain the RGB-format image to be processed corresponding to it, thereby converting each first image into a two-dimensional planar image.
Step S102, determining the images to be stitched corresponding to the images to be processed based on the grid regions of the bowl-shaped model corresponding to the vehicle;
before a vehicle is parked, a bowl-shaped model corresponding to the vehicle and a grid area of the bowl-shaped model are stored in the vehicle in advance, the grid area is each divided area obtained by dividing the bowl-shaped model, for example, when the camera device comprises a front-view fisheye camera, a rear-view fisheye camera, a left-view fisheye camera and a right-view fisheye camera, the grid area is 4 areas corresponding to each camera device in the bowl-shaped model, the bowl-shaped model is formed by the 4 areas in a surrounding mode, and furthermore, the boundary areas of the 4 grid areas can have overlapped parts. Referring to fig. 3, the bottom of the bowl-shaped model is a plane, and the size of the plane or the size of the bowl-shaped model is matched with vehicle information (e.g., length information of the vehicle) of the vehicle to be finished in the planeThe image of the vehicle is completely displayed, and the area above the bottom is an annular curved surface. When a bowl-shaped model is established, a series of points are established, the points are basic points of the bowl-shaped model, theta is sliced from 0-360 degrees, in each slice, every three coordinate points are needed to be used for drawing a plane in OpenGL, therefore, the points need to be coded and classified, every three points are input into OpenGL as a group of points for rendering, wherein x = sin theta, y = sos theta, z = alphax 2 And alpha is a constant, so that a bowl-shaped model is obtained.
After obtaining the images to be processed, the vehicle obtains its corresponding bowl-shaped model, i.e., its preset bowl-shaped model, and obtains the grid regions of the bowl-shaped model, referring to fig. 4 and 5. The images to be stitched corresponding to the images to be processed are determined based on the grid regions. Specifically, the mapping area of each grid region in the corresponding image to be processed is determined, and the image within that mapping area is taken as the image to be stitched; alternatively, each grid region may have a corresponding preset mapping area stored in the vehicle in advance, and after the image to be processed is acquired, the image within the preset mapping area is taken directly as the image to be stitched.
Step S103, performing the image stitching operation on the images to be stitched in the bowl-shaped model based on the grid regions to obtain the target image.
After the images to be stitched are obtained, the image stitching operation is performed on them in the bowl-shaped model according to the grid regions. Specifically, each image to be stitched can be filled into its corresponding grid region in the bowl-shaped model, the grid region is rendered according to the pixels of that image, and an image fusion operation is performed between the boundary area of each image to be stitched and the boundary area of the adjacent one, thereby obtaining the target image. In one implementation, after each image to be stitched is filled into its grid region and the boundary areas of adjacent images are fused, a vehicle image of the vehicle is added to the central area of the bottom of the bowl-shaped model, and an image fusion operation is performed between the boundary area of the vehicle image and the boundary areas of the adjacent images to be stitched to obtain the target image.
In some embodiments, the transparency of each image to be stitched can be obtained first and its boundary area determined. The part of each image outside the boundary area is filled into the corresponding grid region of the bowl-shaped model, and the boundary area is filled according to the transparencies of the two adjacent images to be stitched that share it, so as to obtain the target image. This reduces abrupt transparency changes in the target image and improves the user's visual experience.
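A minimal sketch of such transparency-based edge fusion, assuming a per-pixel transparency map that ramps linearly across the overlap strip (the linear ramp and the array shapes are illustrative choices, not specified in the description):

```python
import numpy as np

def blend_seam(img_a, img_b, alpha_a):
    """Fuse the overlapping boundary area of two adjacent stitched images.

    alpha_a is a per-pixel transparency map: 1.0 keeps img_a, 0.0 keeps
    img_b, and in-between values mix the two, so a gradual ramp across
    the seam avoids an abrupt transition.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    # Broadcast a 2D transparency map over the color channels if needed.
    w = alpha_a[..., None] if alpha_a.ndim == img_a.ndim - 1 else alpha_a
    return (w * a + (1.0 - w) * b).astype(img_a.dtype)

# An illustrative horizontal ramp across an overlap strip of width W:
W, H = 32, 8
ramp = np.tile(np.linspace(1.0, 0.0, W), (H, 1))   # shape (H, W)
```

At the left edge of the strip the result equals the first image, at the right edge the second, with a smooth mix between.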
In a possible implementation manner, step S101 includes:
step a, performing format conversion operation on each first image to obtain an image to be processed in an RGB format corresponding to each first image.
The first images captured by the camera devices are in YUV format. To facilitate image processing, a format conversion operation needs to be performed on the first images to obtain the RGB-format images to be processed corresponding to them.
If the Orin hardware platform carries a hardware codec, transcoding is performed directly by the hardware codec to obtain the RGB-format image to be processed. If it does not, each pixel of each first image is converted based on the format conversion formulas R = Y + 1.14V, G = Y - 0.39U - 0.58V, B = Y + 2.03U. In practical application, a thread is opened to cyclically read the data of the four camera devices, and the conversion is implemented by invoking GPU resources via NVIDIA CUDA with an algorithm corresponding to these formulas.
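A CPU fallback for this conversion might look as follows. The description gives only the matrix coefficients; centering U and V on 128 for 8-bit data is a common convention assumed here, and the real pipeline would run the same arithmetic per pixel on the GPU via CUDA.

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Convert per-plane YUV arrays to an RGB image using the formulas
    R = Y + 1.14V, G = Y - 0.39U - 0.58V, B = Y + 2.03U.

    U and V are assumed to be 8-bit values centered on 128 (an assumption
    not stated in the description, which gives only the coefficients).
    """
    y = y.astype(np.float64)
    u = u.astype(np.float64) - 128.0
    v = v.astype(np.float64) - 128.0
    r = y + 1.14 * v
    g = y - 0.39 * u - 0.58 * v
    b = y + 2.03 * u
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(np.rint(rgb), 0, 255).astype(np.uint8)
```

A neutral pixel (U = V = 128) maps to an equal-valued gray RGB pixel, which is a quick sanity check for the coefficients.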
In yet another possible implementation manner, before step S101, the image processing method further includes:
and b, acquiring vehicle information of the vehicle, and determining a bowl-shaped model corresponding to the vehicle based on the vehicle information.
Before the vehicle parks, the bowl-shaped model corresponding to the vehicle is stored in the vehicle in advance: the vehicle information of the vehicle may be acquired first, the corresponding bowl-shaped model is determined among a plurality of preset bowl-shaped models based on the vehicle information, and that model is stored in the vehicle. Specifically, the vehicle information may include the length information of the vehicle, and the corresponding bowl-shaped model is determined according to the vehicle length and the ratio of the vehicle image to the bottom of the bowl-shaped model, as shown in fig. 3.
In another possible implementation manner, after the step of determining the corresponding bowl model of the vehicle based on the vehicle information, the method further includes:
and c, carrying out grid segmentation on the bowl-shaped model to obtain grid areas corresponding to the camera devices.
Referring to fig. 4 and 5, when the bowl-shaped model is obtained, it is grid-segmented in advance to obtain the grid region corresponding to each camera device; that is, the grid regions are the divided areas obtained by segmenting the bowl-shaped model. For example, when the camera devices comprise a front-view fisheye camera, a rear-view fisheye camera, a left-view fisheye camera and a right-view fisheye camera, the grid regions are the 4 areas of the bowl-shaped model corresponding to the camera devices, the 4 areas together enclose the bowl-shaped model, and the boundary areas of the 4 grid regions may overlap.
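One way to realize such a segmentation is to assign each slice angle θ of the bowl to the camera or cameras whose viewing sector covers it. The 90° sectors and the overlap width below are illustrative assumptions, since the description states only that there are 4 regions whose boundary areas may overlap.

```python
def cameras_for_theta(theta_deg, overlap_deg=10.0):
    """Return the camera(s) owning a bowl slice at angle theta_deg.

    Illustrative layout (an assumption): front centered at 0 deg, right
    at 90, rear at 180, left at 270, each nominally covering a 90 deg
    sector widened by overlap_deg so adjacent grid regions share an
    overlapping border for later edge fusion.
    """
    centers = {"front": 0.0, "right": 90.0, "rear": 180.0, "left": 270.0}
    half = 45.0 + overlap_deg / 2.0
    t = theta_deg % 360.0
    owners = []
    for name, c in centers.items():
        d = abs((t - c + 180.0) % 360.0 - 180.0)   # circular angular distance
        if d <= half:
            owners.append(name)
    return owners
```

Slices near a 45° boundary then belong to two regions at once, producing exactly the overlapping boundary areas the description mentions.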
By acquiring the first images currently captured by the camera devices and the RGB-format images to be processed corresponding to them, determining the images to be stitched based on the grid regions of the bowl-shaped model corresponding to the vehicle, and then performing the image stitching operation on the images to be stitched in the bowl-shaped model based on the grid regions to obtain the target image, the wide-view images captured by the camera devices are converted into planar RGB images and the images to be stitched extracted from them are added to the bowl-shaped model to obtain a three-dimensional target image. This reduces the image distortion that makes users feel dizzy when viewing, thereby realizing three-dimensional display of the parking image and improving the field of view and user experience of the panoramic parking system.
Based on the first embodiment, a second embodiment of the image processing method of the present application is proposed, which includes all contents of the first embodiment, wherein step S102 includes:
step S201, acquiring area coordinates corresponding to each grid area and external parameters corresponding to the camera device;
step S202, determining the images to be spliced corresponding to the images to be processed based on the area coordinates and the external parameters.
After each image to be processed is obtained, the vehicle obtains a corresponding bowl-shaped model, namely a preset bowl-shaped model thereof, and obtains area coordinates corresponding to each grid area and external parameters corresponding to the camera device, wherein the external parameters corresponding to each camera device can be obtained through pre-calculation and stored in the vehicle, and the area coordinates can be coordinates of an edge area of the grid area in the bowl-shaped model.
After the external reference is acquired, the vehicle determines the images to be stitched corresponding to the images to be processed based on the region coordinates and the external reference, specifically, determines the mapping region of each grid region in the corresponding image to be processed according to the external reference of the camera device, and takes the image in the mapping region in the images to be processed as the images to be stitched, thereby accurately obtaining the images to be stitched of each image to be processed.
By acquiring the area coordinates corresponding to each grid region and the external parameters corresponding to the camera devices, and then determining the images to be stitched based on the area coordinates and the external parameters, the images to be stitched are obtained accurately, the accuracy of stitching them in the bowl-shaped model is further improved, and a three-dimensional target image matching the current environment is obtained, thereby realizing three-dimensional display of the parking image and improving the field of view and user experience of the panoramic parking system.
Based on the second embodiment, a third embodiment of the image processing method of the present application is proposed, which includes all contents of the second embodiment, wherein step S202 includes:
step S301, respectively determining the mapping areas of the grid areas in the images to be processed based on the area coordinates and the external parameters;
step S302, the image in the mapping area in the image to be processed is used as the image to be spliced corresponding to each image to be processed.
After the area coordinates and the external parameters of the camera devices are obtained, the vehicle determines the mapping areas of the grid regions in the images to be processed based on the area coordinates and the external parameters. Specifically, for each grid region, the area coordinates of the grid region are mapped one by one through the external parameters into the coordinates of pixels in the image to be processed, thereby obtaining the pixel coordinates of the mapping area and hence the mapping area itself. The image within the mapping area of each image to be processed is then taken as the image to be stitched corresponding to that image.
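The per-point mapping described above amounts to projecting each 3D region coordinate of the bowl-shaped model into the camera's image. A pinhole sketch under assumed extrinsics R, t and intrinsics K is shown below; a real fisheye camera would additionally apply its distortion model after the transform into camera coordinates.

```python
import numpy as np

def bowl_point_to_pixel(x_world, R, t, K):
    """Map a 3D bowl-model region coordinate (world frame) to a pixel
    coordinate in one camera's to-be-processed image.

    R (3x3 rotation) and t (3-vector) are that camera's external
    parameters; K is its 3x3 internal-parameter matrix. The simple
    pinhole projection here is a sketch; it omits fisheye distortion.
    """
    Xc = R @ np.asarray(x_world, float) + np.asarray(t, float)  # world -> camera
    uvw = K @ Xc                                                # camera -> image plane
    return uvw[:2] / uvw[2]                                     # perspective divide

# Mapping every area coordinate of a grid region this way yields the
# mapping area whose pixels form that region's image to be stitched.
```

A point on the camera's optical axis projects to the principal point, which is a quick check that R, t and K are applied in the right order.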
By determining the mapping areas of the grid regions in the images to be processed based on the area coordinates and the external parameters, and then taking the images within those mapping areas as the images to be stitched, the images to be stitched can be obtained accurately from the mapping areas corresponding to the area coordinates and the external parameters. This further improves the accuracy of stitching them in the bowl-shaped model and yields a three-dimensional target image matching the current environment, thereby realizing three-dimensional display of the parking image and improving the field of view and user experience of the panoramic parking system.
Based on the first embodiment, a fourth embodiment of the image processing method of the present application is proposed, which includes all the contents of the first embodiment, wherein step S102 includes:
step S401, acquiring a preset mapping area corresponding to each grid area;
step S402, using the image in the preset mapping area in the image to be processed as the image to be spliced corresponding to each image to be processed.
After the camera devices of the vehicle are fixed, when the bowl-shaped model of the vehicle and the external parameters of the camera devices are acquired, the area coordinates of the grid areas are acquired; the area coordinates may be the coordinates of the edge areas of the grid areas in the bowl-shaped model. For each grid area, the area coordinates of the grid area are mapped one by one, through the external parameters, to coordinates of pixel points in the image to be processed, yielding the pixel coordinates of the mapping area and hence the corresponding preset mapping area. The preset mapping area corresponding to each grid area is obtained in this way and stored in the vehicle in advance. The preset mapping area may be a planar coordinate range.
After obtaining each image to be processed, the vehicle obtains the corresponding bowl-shaped model, namely the preset bowl-shaped model, acquires the preset mapping area corresponding to each grid area, and takes the image within the preset mapping area of each image to be processed as the image to be stitched corresponding to that image to be processed.
The preset mapping area corresponding to each grid area is acquired, and the image within the preset mapping area of each image to be processed is taken as the image to be stitched corresponding to that image to be processed. The images to be stitched can thus be obtained accurately from the preset mapping areas, which further improves the accuracy of stitching the images in the bowl-shaped model and yields a three-dimensional target image matching the current environment. Three-dimensional display of the parking image is thereby achieved, improving the field of view and the user experience of the panoramic parking system.
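The benefit of the preset mapping area is that the projection math runs once offline, and the per-frame work reduces to table lookups. A minimal sketch, with illustrative names and an identity mapping standing in for the real offline computation:

```python
import numpy as np

# Offline: the per-grid-area pixel mapping would be computed once from the
# area coordinates and extrinsics, then stored in the vehicle as a lookup
# table. Online: extracting the image to be stitched is pure indexing.
def build_lut(height, width):
    """Stand-in for the offline step: an identity mapping over a sub-window.
    A real LUT would hold the projected bowl-model coordinates instead."""
    ys, xs = np.mgrid[0:height, 0:width]
    return ys, xs

def extract_region(image, lut):
    """Online step: cut the image to be stitched out of the image to be
    processed using the preset mapping area (the precomputed LUT)."""
    ys, xs = lut
    return image[ys, xs]

img = np.arange(12).reshape(3, 4)   # toy "image to be processed"
lut = build_lut(2, 2)               # preset mapping covering the top-left 2x2
region = extract_region(img, lut)
```

Per-frame cost is thus independent of the projection model's complexity, which matters on embedded automotive hardware.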
Based on the first embodiment, a fifth embodiment of the image processing method of the present application is proposed, which includes all the contents of the first embodiment, wherein step S103 includes:
step S501, obtaining an edge fusion area and transparency of each image to be spliced;
step S502, splicing the images to be spliced to the corresponding grid areas in the bowl-shaped model to obtain a first image;
step S503, adjusting the transparency of the edge blending region in the first image based on the transparency to obtain a second image;
step S504, filling the vehicle image of the vehicle at the bottom of the second image to obtain the target image.
After the images to be stitched are obtained, the edge fusion area of each image to be stitched is acquired. In general, two adjacent images to be stitched may share an overlapping area. If they do, the edge fusion area is that overlapping area; if they do not, an area within the image to be stitched, extending a preset width from its boundary, is taken as the edge fusion area, where the preset width can be set reasonably according to the bowl-shaped model and the boundary in question is the one adjacent to the other image to be stitched. At the same time, the transparency of each image to be stitched is acquired; the transparency is calculated from the pixel points of the image to be stitched according to an existing transparency algorithm.
Meanwhile, in the bowl-shaped model, the images to be stitched are stitched to their corresponding grid regions to obtain a first image; that is, each grid region of the bowl-shaped model is filled with its corresponding image to be stitched. Specifically, for each image to be stitched, the pixel points of the image are mapped to the corresponding grid region through the external parameters of the camera device corresponding to that image, so that the image to be stitched is stitched (filled) into the corresponding grid region.
Then, the transparency of the edge fusion region in the first image is adjusted based on the transparencies to obtain a second image, implementing the image fusion operation for the edge fusion region. Specifically, consider the edge fusion region of two adjacent images to be stitched, say a first image to be stitched with transparency a and a second image to be stitched with transparency b. The edge fusion region comprises a center line and a plurality of edge lines parallel to the center line, and the transparency of the pixel points on each line is xa + yb. The transparency of the pixel points on the center line is set to 0.5a + 0.5b; for edge lines closer to the first image to be stitched, x gradually increases and y gradually decreases, while for edge lines closer to the second image to be stitched, x gradually decreases and y gradually increases. For example, counting from the side of the first image to be stitched toward the second, the first edge line may have x = 0.9 and y = 0.1, with x decreasing and y increasing line by line until the weights are reversed at the far side.
Finally, a vehicle image of the vehicle is filled in the bottom of the second image to obtain the target image.
The edge fusion area and the transparency of each image to be stitched are acquired; the images to be stitched are stitched to the corresponding grid areas in the bowl-shaped model to obtain a first image; the transparency of the edge fusion area in the first image is then adjusted based on the transparencies to obtain a second image; and a vehicle image of the vehicle is filled into the bottom of the second image to obtain the target image. The images to be stitched are thus fused accurately into a three-dimensional target image, reducing image distortion and the visual discomfort it can cause the user when viewing, thereby achieving three-dimensional display of the parking image and improving the field of view and the user experience of the panoramic parking system.
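The seam handling above amounts to a linear cross-fade across the edge fusion region: the weight of one image falls from 1 to 0 while the other rises, so the center line gets equal weights 0.5 and 0.5. A minimal sketch, assuming the two overlap strips are already resampled onto the same grid (names are illustrative):

```python
import numpy as np

def blend_seam(strip_a, strip_b):
    """Cross-fade two H x W strips covering the same edge fusion region.
    Column 0 is fully strip_a, the last column fully strip_b, and the
    center column is the 0.5/0.5 mix described for the center line."""
    h, w = strip_a.shape[:2]
    wa = np.linspace(1.0, 0.0, w).reshape(1, w)  # weight of strip_a per column
    return wa * strip_a + (1.0 - wa) * strip_b

a = np.full((2, 5), 100.0)   # toy strip from the first image to be stitched
b = np.full((2, 5), 200.0)   # toy strip from the second image to be stitched
out = blend_seam(a, b)
# center column: 0.5 * 100 + 0.5 * 200 = 150
```

The gradual weight change is what hides the brightness step that a hard cut between two cameras would otherwise produce at the seam.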
Based on the above embodiments, a sixth embodiment of the image processing method of the present application is proposed, and before step S101, the image processing method further includes:
step S601, acquiring second images shot by the camera devices in the current environment;
step S602, determining image coordinates of a plurality of preset points in the current environment in the second image, and obtaining space coordinates corresponding to each preset point;
step S603 is to determine, based on the image coordinates and the space coordinates, external parameters corresponding to the respective imaging devices.
Before the vehicle generates a three-dimensional parking image, the external parameters of each camera device installed in the vehicle need to be calibrated. Because each camera device is a fisheye camera, the images it captures are distorted, and the distortion becomes more obvious the greater the deflection angle from the optical center of the camera, so a dedicated correction algorithm is needed to correct the images to a normal viewing angle. Specifically, second images captured by the camera devices in the current environment are acquired in a preset area containing preset points; that is, the second images include an image captured by each camera device, and may include multiple groups of images captured by the camera devices at multiple common moments.
Then, the image coordinates, in the second images, of a plurality of preset points in the current environment are determined, where each second image corresponds to a plurality of preset points, and the space coordinates corresponding to each preset point are obtained. The external parameters corresponding to each camera device are then determined based on the image coordinates and the space coordinates. Specifically, in a possible implementation manner, step S603 includes:
step d, acquiring internal parameters corresponding to the camera device;
and e, respectively determining external parameters corresponding to the camera devices based on the internal parameters, the image coordinates and the space coordinates.
The internal parameters of each camera device are fixed parameters of that camera device. After the internal parameters of a camera device are acquired, the external parameters corresponding to each camera device are determined based on the internal parameters, the image coordinates, and the space coordinates, so that the external parameters are obtained accurately and the accuracy of the target image is improved.
Further, in another possible implementation manner, step e includes:
f, determining projection coordinates of projection points corresponding to the preset points based on the image coordinates and the internal parameters;
and g, respectively determining external parameters corresponding to the camera devices based on the projection coordinates and the space coordinates.
After obtaining the internal parameters of a camera device, the vehicle determines the projection coordinates of the projection point corresponding to each preset point based on the image coordinates and the internal parameters, and then determines the external parameters corresponding to each camera device based on the projection coordinates and the space coordinates. For example, for each camera device, let the internal parameter matrix be M_I, the external parameter matrix be M_E, the space coordinate of a preset point be P_W(X_W, Y_W, Z_W), the projection coordinate corresponding to that preset point be P_C(X_C, Y_C, Z_C), and the image coordinate corresponding to that preset point be P_CC(U_C, V_C). Then, from the formula P_CC(U_C, V_C) = M_I · P_C(X_C, Y_C, Z_C), the projection coordinate P_C(X_C, Y_C, Z_C) corresponding to the preset point is obtained; and from the formula P_C(X_C, Y_C, Z_C) = M_E · P_W(X_W, Y_W, Z_W), the external parameters M_E of the camera device are obtained. For example, eight sets of external parameters corresponding to eight groups of preset points may first be obtained, and the average of the eight sets taken as the external parameters M_E of the camera device, so that the external parameters of each camera device can be obtained accurately.
The second images captured by the camera devices in the current environment are acquired; the image coordinates, in the second images, of a plurality of preset points in the current environment are then determined, and the space coordinates corresponding to each preset point are obtained; the external parameters corresponding to each camera device are then determined based on the image coordinates and the space coordinates. The external parameters of the camera devices are thus obtained accurately from the second images and the preset points, so that the grid areas of the bowl-shaped model can be divided accurately and the images to be stitched obtained accurately according to the external parameters, improving the accuracy of the target image.
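The relation P_C = M_E · P_W above can be solved for the extrinsics once several preset-point correspondences are known. The sketch below recovers a 3x4 extrinsic matrix from eight synthetic correspondences by least squares; the data and the direct linear solve are illustrative assumptions (a production system would typically use a robust PnP solver rather than averaging or plain least squares).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth extrinsics [R|t] used only to fabricate test data.
M_E_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [0.3]])])

# Eight preset points with homogeneous world coordinates P_W (8 x 4),
# and their camera-frame projection coordinates P_C = M_E · P_W (8 x 3).
P_W = np.hstack([rng.random((8, 3)), np.ones((8, 1))])
P_C = (M_E_true @ P_W.T).T

# Solve P_W @ M_E.T = P_C in the least-squares sense; with noise-free,
# non-degenerate points this recovers the extrinsics exactly.
M_E_est = np.linalg.lstsq(P_W, P_C, rcond=None)[0].T
```

In practice P_C itself is first recovered from the measured image coordinates via the internal parameters (P_CC = M_I · P_C), and noise in the correspondences is what motivates combining several groups of preset points.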
Further, the present application also proposes a vehicle provided with a plurality of image pickup devices for picking up an image of a surrounding environment, the vehicle including, with reference to fig. 6:
the acquiring module 10 is configured to acquire a first image currently captured by each of the image capturing devices, and acquire a to-be-processed image in an RGB format corresponding to each of the first images;
a determining module 20, configured to determine to-be-stitched images corresponding to the to-be-processed images based on a mesh region of a bowl-shaped model corresponding to the vehicle;
and the image stitching module 30 is configured to perform an image stitching operation on each image to be stitched in the bowl-shaped model based on the grid region, so as to obtain a target image.
For the methods executed by the above modules, reference may be made to the embodiments of the image processing method of the present application, which are not repeated here.
Furthermore, the present application also proposes a computer-readable storage medium having stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method as described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system comprising the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (15)

1. An image processing method applied to a vehicle provided with a plurality of image pickup devices for picking up images of a surrounding environment, the method comprising:
acquiring a first image currently shot by each camera device, and acquiring an image to be processed in an RGB format corresponding to each first image;
determining images to be spliced corresponding to the images to be processed based on the grid area of the bowl-shaped model corresponding to the vehicle;
and performing image splicing operation on the images to be spliced in the bowl-shaped model on the basis of the grid region to obtain a target image.
2. The image processing method of claim 1, wherein the step of determining the images to be stitched corresponding to the respective images to be processed based on the mesh region of the bowl model corresponding to the vehicle comprises:
acquiring area coordinates corresponding to each grid area and external parameters corresponding to the camera device;
and determining the images to be spliced corresponding to the images to be processed based on the area coordinates and the external parameters.
3. The image processing method according to claim 2, wherein the step of determining the images to be stitched corresponding to the respective images to be processed based on the region coordinates and the external parameters comprises:
respectively determining mapping areas corresponding to the grid areas in the images to be processed based on the area coordinates and the external parameters;
and taking the image in the mapping area in the image to be processed as the image to be spliced corresponding to each image to be processed.
4. The image processing method of claim 1, wherein the step of determining the images to be stitched corresponding to the respective images to be processed based on the mesh region of the bowl model corresponding to the vehicle comprises:
acquiring a preset mapping area corresponding to each grid area;
and taking the image in the preset mapping area in the image to be processed as the image to be spliced corresponding to each image to be processed.
5. The image processing method of claim 1, wherein the step of performing an image stitching operation on each of the images to be stitched in the bowl-shaped model based on the mesh region to obtain a target image comprises:
acquiring an edge fusion area and transparency of each image to be spliced;
splicing the images to be spliced to the corresponding grid areas in the bowl-shaped model to obtain a first image;
adjusting the transparency of the edge blending region in the first image based on the transparency to obtain a second image;
filling a vehicle image of the vehicle at the bottom of the second image to obtain the target image.
6. The image processing method according to claim 1, wherein the step of acquiring the to-be-processed image in RGB format corresponding to each of the first images comprises:
and carrying out format conversion operation on each first image to obtain the to-be-processed image in the RGB format corresponding to each first image.
7. The image processing method according to claim 1, wherein before the step of acquiring the first image currently captured by each of the image capturing devices, the method further comprises:
vehicle information of the vehicle is obtained, and a bowl-shaped model corresponding to the vehicle is determined based on the vehicle information.
8. The image processing method of claim 7, wherein the step of determining the corresponding bowl model of the vehicle based on the vehicle information is followed by further comprising:
and carrying out grid segmentation on the bowl-shaped model to obtain a grid area corresponding to each camera device.
9. The image processing method according to claim 1, wherein before the step of acquiring the first image currently captured by each of the image capturing devices, the method further comprises:
acquiring a second image shot by each camera device in the current environment;
determining image coordinates of a plurality of preset points in the current environment in the second image, and acquiring space coordinates corresponding to the preset points;
and respectively determining external parameters corresponding to the camera devices based on the image coordinates and the space coordinates.
10. The image processing method according to claim 9, wherein the step of determining the external parameter corresponding to each of the image capturing devices based on the image coordinates and the spatial coordinates, respectively, comprises:
acquiring internal parameters corresponding to the camera device;
and respectively determining external parameters corresponding to the camera devices based on the internal parameters, the image coordinates and the space coordinates.
11. The image processing method according to claim 10, wherein the step of determining the external parameter corresponding to each of the image capturing devices based on the internal parameter, the image coordinates, and the spatial coordinates, respectively, comprises:
determining projection coordinates of projection points corresponding to the preset points based on the image coordinates and the internal reference;
and respectively determining external parameters corresponding to the camera devices based on the projection coordinates and the space coordinates.
12. The image processing method according to any one of claims 1 to 11, wherein the camera device is a fisheye camera, and the fisheye camera comprises at least a front-view fisheye camera, a rear-view fisheye camera, a left-view fisheye camera, and a right-view fisheye camera.
13. A vehicle provided with a plurality of image pickup devices for picking up an image of a surrounding environment, comprising:
the acquisition module is used for acquiring a first image currently shot by each camera device and acquiring an image to be processed in an RGB format corresponding to each first image;
the determining module is used for determining the images to be spliced corresponding to the images to be processed based on the grid areas of the bowl-shaped models corresponding to the vehicles;
and the image splicing module is used for carrying out image splicing operation on the images to be spliced in the bowl-shaped model based on the grid area so as to obtain a target image.
14. An image processing apparatus characterized by comprising: memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 12.
15. A computer-readable storage medium, characterized in that an image processing program is stored thereon, which when executed by a processor implements the steps of the image processing method according to any one of claims 1 to 12.
CN202211486575.XA 2022-11-23 2022-11-23 Image processing method, image processing apparatus, vehicle, and computer-readable storage medium Pending CN115883754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211486575.XA CN115883754A (en) 2022-11-23 2022-11-23 Image processing method, image processing apparatus, vehicle, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115883754A true CN115883754A (en) 2023-03-31

Family

ID=85763907



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination