CN117177076A - Channel value calculation method, and method, apparatus, device and medium for generating a surround view

Info

Publication number: CN117177076A
Application number: CN202210567975.7A
Authority: CN (China)
Legal status: Pending
Applicant / Assignee: Glenfly Tech Co Ltd
Inventors: 王晓飞, 江源, 宋丹丹, 丁礼健
Original language: Chinese (zh)
Classification: Image Processing (AREA)
Abstract

The application relates to a channel value calculation method, a surround view generation method, an apparatus, a device and a medium. The channel value calculation method comprises the following steps: acquiring the images captured by each camera, extracting the target common-view areas in the images, and obtaining initial images from the target common-view areas; performing color space conversion on the initial images to obtain channel-separated images to be processed; calculating the channel parameter adjustment value of each camera according to the images to be processed; and obtaining the target channel values based on the channel parameter adjustment values and the original channel values of the images captured by the cameras. The surround view generation method comprises the following steps: acquiring the images captured by each camera; and processing those images according to the target channel values and the surround-view model to generate the surround view. With this method, the target channel values of each camera's captured image can be adjusted so that the surround view shows no visual difference.

Description

Channel value calculation method, and method, apparatus, device and medium for generating a surround view
Technical Field
The present application relates to the field of vehicle technology, and in particular to a channel value calculation method, a surround view generation method, an apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of the automobile industry, vehicle-mounted surround-view systems have emerged. Such a system is generally equipped with 4 or more cameras, and it stitches and composes the images they capture into a panoramic image of the vehicle's surroundings. Since each camera captures images independently, its exposure parameters are determined by the properties of the area it photographs; for example, the ISO sensitivity is adjusted automatically according to the average brightness of the photographed area. As a result, the exposure parameters of the cameras differ at any given moment, and the properties, such as brightness, of the same scene photographed by adjacent cameras also differ. Stitching the captured raw images directly therefore produces obvious seams in the stitching areas and degrades the display effect, so a method of coordinating the cameras' exposure parameters at the same moment is urgently needed.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a channel value calculation method, a surround view generation method, an apparatus, a computer device, a storage medium and a computer program product capable of adjusting the exposure parameters of the cameras and reducing the visual differences in the surround view.
In a first aspect, the present application provides a channel value calculation method, including:
acquiring the images captured by each camera, extracting the target common-view areas in the images, and obtaining initial images from the target common-view areas;
performing color space conversion on the initial images to obtain channel-separated images to be processed;
calculating the channel parameter adjustment value of each camera according to the images to be processed;
and obtaining the target channel values based on the channel parameter adjustment values and the original channel values of the images captured by the cameras.
In one embodiment, calculating the channel parameter adjustment value of each camera according to the image to be processed includes:
determining the target channel value according to the current channel value of each target common-view area in the image to be processed and the channel parameter adjustment value to be calculated;
obtaining the channel parameter equations from the condition that the target channel values of adjacent cameras are equal;
obtaining a constraint equation from the invariance of the sum of the target channel values of the cameras;
obtaining a target equation set from the channel parameter equations and the constraint equation;
and solving the target equation set to obtain the channel parameter adjustment value of each camera.
In one embodiment, obtaining the target channel value based on the channel parameter adjustment value and the original channel value of the image captured by each camera includes:
calculating an intermediate channel value based on the channel parameter adjustment value and the original channel value of the image captured by each camera;
and mapping the intermediate channel value into a valid value interval to obtain the target channel value.
In one embodiment, the method further comprises:
calibrating each camera to obtain camera parameters;
extracting the target common-view area in the image and obtaining an initial image from the target common-view area then includes:
extracting the target common-view area in each image, and processing the target common-view area according to the camera parameters to obtain an initial image.
In one embodiment, calibrating each camera to obtain the camera parameters includes:
calculating the camera position of each camera from its extrinsic parameters;
calculating the vehicle body length, the vehicle width and the camera height based on the camera positions;
calculating the model parameters of the surround-view model according to the vehicle width and the vehicle body length;
determining an initial observation height value and an initial observation radius value of the observation position according to the lowest bowl height in the model parameters, and adjusting the initial observation height value and the initial observation radius value until the observation viewing angle meets the requirement, so as to obtain the target observation position;
and calculating the camera parameters of each camera according to the target observation position.
In one embodiment, performing color space conversion on the initial image to obtain the channel-separated image to be processed includes:
performing color space conversion on each pixel in the initial image in parallel to obtain the channel-separated image to be processed.
In a second aspect, the present application further provides a method for generating a surround view, the method comprising:
acquiring the images captured by each camera;
and processing the images captured by each camera according to the target channel values and the surround-view model to generate a surround view, the target channel values being calculated with the channel value calculation method above.
In one embodiment, processing the images captured by each camera according to the target channel values and the surround-view model to generate a surround view includes:
processing, in a parallel manner, each pixel in the images captured by the cameras according to the target channel values and the surround-view model to generate the surround view.
In one embodiment, processing, in a parallel manner, each pixel in the images captured by each camera according to the target channel values and the surround-view model to generate the surround view includes:
processing the images captured by each camera according to the camera parameters to obtain the target pixel points of the surround view;
converting each target pixel point from RGB space to YUV space in parallel;
adjusting the value of the target channel in YUV space according to the target channel value in parallel;
and converting each pixel point in YUV space back to RGB space according to the adjusted channel value in parallel, and generating the surround view with the surround-view model.
In one embodiment, the method further comprises:
receiving a channel value adjustment instruction, the instruction carrying an adjustment coefficient;
and adjusting the target channel value according to the adjustment coefficient.
In a third aspect, the present application also provides a channel value calculation apparatus, including:
a first drawing module, used for acquiring the images captured by each camera, extracting the target common-view areas in the images, and obtaining initial images from the target common-view areas;
a first conversion module, used for performing color space conversion on the initial images to obtain channel-separated images to be processed;
a channel parameter adjustment value calculation module, used for calculating the channel parameter adjustment value of each camera according to the images to be processed;
and a target channel value calculation module, used for obtaining the target channel values based on the channel parameter adjustment values and the original channel values of the images captured by the cameras.
In a fourth aspect, the present application also provides a surround view generation apparatus, including:
an image acquisition module, used for acquiring the images captured by each camera;
and a second drawing module, used for processing the images captured by the cameras according to the target channel values and the surround-view model to generate a surround view, the target channel values being calculated by the channel value calculation apparatus above.
In a fifth aspect, the present application also provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described in any one of the embodiments above when the computer program is executed by the processor.
In a sixth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method described in any of the embodiments above.
In a seventh aspect, the application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method described in any of the embodiments above.
With the channel value calculation method, surround view generation method, apparatus, device and medium above, selecting the target common-view areas guarantees that the same scenery is extracted, and the channel parameter adjustment values are calculated from the images to be processed: the adjustment value of each camera is obtained by requiring that, after adjustment, the target channel values of the same scenery in the target common-view areas corresponding to the images captured by adjacent cameras be equal. The target channel values are thereby calculated, so that the surround view stitched from the images captured by adjacent cameras shows no visual difference.
Drawings
FIG. 1 is a diagram of an application environment of the channel value calculation method in one embodiment;
FIG. 2 is a flow chart of the channel value calculation method in one embodiment;
FIG. 3 is a schematic view of the positions of the cameras and reference objects in one embodiment;
FIG. 4 is a schematic illustration of the calibration cloth in one embodiment;
FIG. 5 is a schematic illustration of an initial image in one embodiment;
FIG. 6 is a flowchart of step S206 of the embodiment shown in FIG. 2;
FIG. 7 is a flow chart of the camera calibration process in one embodiment;
FIG. 8 is a surround-view model based on a circular or elliptical bottom in one embodiment;
FIG. 9 is a schematic diagram of perspective projection in one embodiment;
FIG. 10 is a schematic diagram of the actual bowl model in one embodiment;
FIG. 11 is a schematic diagram of the simplified model in one embodiment;
FIG. 12 is a flow diagram of the surround view generation method in one embodiment;
FIG. 13 is a block diagram of the channel value calculation apparatus in one embodiment;
FIG. 14 is a block diagram of the surround view generation apparatus in one embodiment;
fig. 15 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The channel value calculation method and the surround view generation method provided by the embodiments of the application can be applied in the application environment shown in fig. 1, where the vehicle terminal 102 communicates with the cameras 104. A data storage system may store the data that the vehicle terminal 102 needs to process; it may be integrated on the vehicle terminal 102 or located on a cloud or other network server.
A vehicle must be calibrated before its surround-view system is used, because the camera mounting angles, the camera positions and the cameras' own parameters cannot be made perfectly consistent; these parameters must be determined through a calibration process, and the surround-view model is generated from them. Even cars produced on the same assembly line of a car factory must have their surround-view systems calibrated, and for after-market installations the calibration process likewise cannot be omitted. The present application improves the channel value calculation within this calibration process.
The cameras 104 capture images around the vehicle and send them to the vehicle terminal 102. The vehicle terminal 102 extracts the target common-view areas in each image and obtains initial images from them; performs color space conversion on the initial images to obtain channel-separated images to be processed; calculates the channel parameter adjustment value of each camera from the images to be processed; and obtains the target channel values based on the channel parameter adjustment values and the original channel values of the images captured by the cameras. Selecting the target common-view areas guarantees that the same scenery is compared: the channel parameter adjustment values are calculated by requiring that, after adjustment, the target channel values of the same scenery in the target common-view areas corresponding to the images captured by adjacent cameras be equal. The target channel values thus obtained leave the stitched view without visual differences; moreover, since only the target common-view areas are extracted and channel-converted during the calculation, the amount of data to compute is reduced and the calculation efficiency improved.
The vehicle terminal 102 may be, but is not limited to, any of various intelligent vehicle-mounted devices; preferably it is a vehicle-mounted control device, specifically the hardware processor of the vehicle-mounted surround-view system, which includes a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit, also called display core, vision processor, display chip or graphics chip), so that the processing efficiency of the whole algorithm is improved through cooperative CPU-GPU processing. The cameras 104 may be vehicle-mounted cameras.
In one embodiment, as shown in fig. 2, a channel numerical calculation method is provided, and the method is applied to the vehicle terminal 102 in fig. 1 for illustration, and includes the following steps:
s202: and acquiring images acquired by each camera, extracting a target common-view area in the images, and obtaining an initial image according to the target common-view area.
Specifically, the camera is a vehicle-mounted camera mounted around the vehicle, and an image is acquired through the vehicle-mounted camera. The target common view area is a specific area which is set manually for calibrating the channel value, wherein the target common view area is provided with a corresponding reference object. Optionally, the reference object has a single color, so as to facilitate adjustment of channel parameters, for example, preferably, white or black, and in other embodiments, other pure-colored reference objects may be used, which is not specifically limited herein.
Specifically, fig. 3 shows specific positions of the vehicle-mounted cameras and the target common-view areas. In other embodiments, the number and positions of the cameras and of the target common-view areas may be adjusted to the practical application; fig. 3 merely shows the configuration of one embodiment without limitation.
As shown in fig. 3, the images captured by two adjacent cameras have a certain overlapping area, and a target common-view area is selected inside the overlap so that a corresponding reference object can be deployed. Preferably, a large black block of a certain size, preferably one square meter, is placed at each of four spots around the vehicle body as a reference for calibrating the cameras. Each of the four large black blocks must be completely visible to two adjacent cameras; after calibration these are also the areas where the stitching effect is best, and during normal driving obstacles rising above the ground rarely appear there. The large black areas of the calibration cloth are therefore used as the target common-view areas for adjusting camera brightness (the whole picture inside a target common-view area serves as the reference object for comparing channel values).
In practical applications, the large black blocks can be deployed with calibration cloth: the calibration cloth actually consists of two white cloths printed with black squares, one placed in front of the vehicle body and one behind it. For an ordinary car the size is fixed (the calibration cloth for a large bus has a different size). During calibration, one piece of calibration cloth is placed in front of the vehicle and one behind it, with the center line of the cloth aligned to the vehicle's central axis. The distance between the edge of the calibration cloth and the vehicle body depends on the mounting height and angle of the cameras, and is generally 30-60 cm; see fig. 4, which shows a preferred calibration cloth that users may change as needed in other embodiments. Note that if the angle between the front camera's central axis and the horizontal plane is small, the calibration cloth must be placed farther from the vehicle because the nearby ground cannot be photographed; likewise, if an anti-collision beam blocks the camera's view, the calibration cloth must be moved away from the vehicle.
With reference to fig. 3, this embodiment takes 4 cameras as the example, denoted C1 to C4 in fig. 3.
The initial images are generated from the target common-view areas in the images captured by the cameras. The target common-view areas may be arranged according to a certain rule, as shown in fig. 5, where each target common-view area is denoted Aij, i indexing the camera and j the target common-view area. Note that in other embodiments the target common-view areas need not be stitched and may each serve as a separate initial image; fig. 5 then contains 8 initial images.
In one embodiment, generating the initial images includes: the vehicle terminal obtains the images captured by each camera, extracts the target common-view area from each image, for example its pixel value information, and adjusts the viewing angle of the target common-view area according to the camera calibration parameters, for example converting it to a corresponding viewing angle such as the top-down view, thereby obtaining a plurality of initial images.
S204: and performing color space conversion on the initial image to obtain a channel-separated image to be processed.
Specifically, the initial image captured by the camera is generally an RGB image, while visual perception is generally of brightness, saturation and contrast; the RGB image therefore needs color space conversion to obtain the image to be processed in the corresponding channel. Preferably, the vehicle terminal converts the RGB image into the image to be processed in at least one channel. For example, when the brightness difference between adjacent cameras needs to be adjusted, the RGB image can be converted into a grayscale image of the luminance channel, and that grayscale image is the image to be processed; when the saturation or contrast difference between adjacent cameras needs to be adjusted, the saturation or contrast channel is adjusted correspondingly, which is not specifically limited here.
For convenience of explanation, this embodiment takes the luminance channel as the example: the vehicle terminal converts the RGB image into a grayscale image, i.e., converts RGB format to YUV format, and to reduce the processing load converts only the corresponding channel, namely the luminance channel, with the following formula:
Y = R×0.299 + G×0.587 + B×0.114    (1)
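As a rough illustration of formula (1), a minimal CPU sketch follows (this is not code from the application; the buffer layout and the function name are assumptions for illustration):

#include <cstdint>
#include <vector>

// rgb: interleaved 8-bit RGB pixels, width*height*3 bytes.
std::vector<uint8_t> rgbToLuma(const std::vector<uint8_t>& rgb, int width, int height) {
    std::vector<uint8_t> y(static_cast<size_t>(width) * height);
    for (size_t i = 0; i < y.size(); ++i) {
        const uint8_t r = rgb[3 * i + 0];
        const uint8_t g = rgb[3 * i + 1];
        const uint8_t b = rgb[3 * i + 2];
        // Y = R*0.299 + G*0.587 + B*0.114, formula (1)
        y[i] = static_cast<uint8_t>(0.299f * r + 0.587f * g + 0.114f * b);
    }
    return y;
}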
s206: and calculating to obtain channel parameter adjustment values of each camera according to the image to be processed.
Specifically, the channel parameter adjustment value is calculated on the premise of minimizing the brightness difference of the target common view area corresponding to the image acquired by the adjacent cameras.
The vehicle terminal can obtain the channel parameter equations from the adjusted target channel values of the target common-view areas corresponding to the images captured by adjacent cameras, obtain a constraint equation from the channel parameter constraint, combine them into an equation set, and solve for the unknowns in the equation set to obtain the channel parameter adjustment values.
The channel parameter equations are obtained by equating the adjusted target channel values of the target common-view areas corresponding to the images captured by adjacent cameras: after adjustment, the average brightness of these areas should be substantially equal. In reality, because the cameras' channel values differ, the average brightness of the target common-view areas corresponding to the images captured by adjacent cameras is not equal, so a channel parameter adjustment value to be calculated is assumed for each camera such that the adjusted average brightness of these areas becomes substantially equal.
The constraint equation is obtained from the principle that the total adjusted target channel value of the target common-view areas is unchanged, i.e., the sum of the brightness gains of all cameras is 0: if some cameras are brightened, others must be darkened so that the total brightness stays substantially unchanged.
In practical applications, the equation set may be solved with the least squares method; in other embodiments, it may be solved in other ways.
S208: and obtaining a target channel value based on the channel parameter adjustment value and the original channel value of the image acquired by each camera.
Specifically, the original channel value of the image captured by each camera is the channel value of each pixel of that image, and target channel value = original channel value + channel parameter adjustment value.
The vehicle terminal calculates the target channel value from the original channel value of the image captured by the camera and the channel parameter adjustment value, so that the subsequent surround view can be generated according to the target channel value, guaranteeing that the generated surround view has no visual difference.
With the above channel value calculation method, selecting the target common-view areas guarantees that the same scenery is extracted, and the channel parameter adjustment value of each camera is calculated from the images to be processed by requiring that the adjusted target channel values of the same scenery in the target common-view areas corresponding to the images captured by adjacent cameras be equal. The target channel values are thereby obtained, so the stitched surround view has no visual difference; furthermore, since only the target common-view areas are extracted and channel-converted during the calculation, the amount of data to compute is reduced and the calculation efficiency improved.
In one embodiment, referring to fig. 6, fig. 6 is a schematic flow chart of step S206 in the embodiment shown in fig. 2, in which a luminance channel is taken as an example for illustration, but it should be understood by those skilled in the art that a saturation channel and a contrast channel may also be used, and the processing steps are substantially similar to those of the luminance channel and are not repeated herein. Step S206, namely, calculating channel parameter adjustment values of each camera according to the image to be processed, includes:
s602: and determining the target channel value according to the current channel value of each target common view area in the image to be processed and the channel parameter adjustment value to be calculated.
Specifically, the current channel value is a channel parameter calculated from the target common view regions, such as average brightness, wherein the current channel value of each target common view region is calculated for convenienceAnd c represents a camera, and n represents a corresponding target co-view area, wherein for convenience, the numbers of the cameras are counted clockwise from the front camera. The channel parameter adjustment value to be calculated can be obtained by F c And (3) representing. The target channel parameter of the target common view area 1 obtained by the camera 1 after adjustment is +. >Likewise, the adjusted target channel parameters of the target common view areas shot by the cameras can be calculated.
S604: and obtaining a channel parameter equation according to the fact that the values of the target channels of the adjacent cameras are basically equal.
Specifically, since the current channel values of the cameras differ, the current channel values that adjacent cameras obtain for the same target common-view area are in practice not equal; theoretically, however, the adjusted channel parameters of adjacent cameras photographing the same target common-view area should be equal. The channel parameter equations follow from this condition. Taking the cameras and target common-view areas in fig. 3 as an example:
D_1^2 + F_1 = D_2^2 + F_2 (after adding the corresponding brightness gains, the average brightness of the second target common-view area photographed by the first camera and by the second camera is substantially identical)
D_2^3 + F_2 = D_3^3 + F_3 (likewise for the third target common-view area photographed by the second and third cameras)
D_3^4 + F_3 = D_4^4 + F_4 (likewise for the fourth target common-view area photographed by the third and fourth cameras)
D_4^1 + F_4 = D_1^1 + F_1 (likewise for the first target common-view area photographed by the fourth and first cameras)
S606: and obtaining a constraint equation according to the accumulation and total invariance of the target channel values of each camera.
Specifically, since D' =d+f is required to adjust the brightness, but the overall brightness is not expected to change, the sum of the channel parameter adjustment values F of the respective cameras is equal to 0, that is, if there is a camera that is turned on, there must be a camera that is turned off a little, so that the overall brightness is ensured to be substantially unchanged. Taking still the co-view area of the camera and the target in fig. 3 as an example, the following constraint equation can be obtained:
F 1 +F 2 +F 3 +F 4 =0
s608: and obtaining a target equation set according to the channel parameter equation and the constraint equation.
S610: and solving a target equation set to obtain channel parameter adjustment values of each camera.
Specifically, the target equation set may be:
D_1^2 + F_1 = D_2^2 + F_2
D_2^3 + F_2 = D_3^3 + F_3
D_3^4 + F_3 = D_4^4 + F_4
D_4^1 + F_4 = D_1^1 + F_1
F_1 + F_2 + F_3 + F_4 = 0
The vehicle terminal can obtain each gain F_c by solving this equation set with the least squares method.
In practical applications, for convenience of calculation, the above equation set may be rewritten in matrix form M·F = b, where
M = [  1  -1   0   0
       0   1  -1   0
       0   0   1  -1
      -1   0   0   1
       1   1   1   1 ],   b = [ D_2^2 - D_1^2
                                D_3^3 - D_2^3
                                D_4^4 - D_3^4
                                D_1^1 - D_4^1
                                0 ]
With 4 unknowns and 5 equations the system is overdetermined, so in general no exact solution exists and only the minimum-error solution can be obtained. Substituting the two matrices M and b into formula (2) below yields the matrix [F_1, F_2, F_3, F_4], i.e., the F_1-F_4 we want to calculate; each F_c is the luminance increment applied as I' = I + F.
The vehicle terminal then solves this overdetermined system with the least squares method to obtain the gain F_c of each camera, namely the channel parameter adjustment value to be calculated:
F = (M^T M)^(-1) M^T b    (2)
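As a sketch of this solve step (not the application's code: the use of OpenCV, which the text elsewhere references via cv::Rodrigues, and the array layout of D are assumptions), the gains could be computed as follows, with D[c][n] holding the average brightness of target common-view area n as seen by camera c (1-based indices):

#include <opencv2/core.hpp>

cv::Vec4f solveChannelGains(const float D[5][5]) {
    const float M_data[5][4] = {
        { 1, -1,  0,  0},
        { 0,  1, -1,  0},
        { 0,  0,  1, -1},
        {-1,  0,  0,  1},
        { 1,  1,  1,  1},   // constraint F1 + F2 + F3 + F4 = 0
    };
    const float b_data[5] = {
        D[2][2] - D[1][2],  // cameras 1 and 2 share area 2
        D[3][3] - D[2][3],  // cameras 2 and 3 share area 3
        D[4][4] - D[3][4],  // cameras 3 and 4 share area 4
        D[1][1] - D[4][1],  // cameras 4 and 1 share area 1
        0.0f,
    };
    cv::Mat M(5, 4, CV_32F, (void*)M_data);
    cv::Mat b(5, 1, CV_32F, (void*)b_data);
    cv::Mat F;                           // 4x1 least-squares solution
    cv::solve(M, b, F, cv::DECOMP_SVD);  // minimizes ||M*F - b||
    return cv::Vec4f(F.at<float>(0), F.at<float>(1), F.at<float>(2), F.at<float>(3));
}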
in the above embodiment, a complete calculation manner of the channel parameter adjustment value to be calculated is provided, and the channel parameter adjustment value to be calculated is obtained by solving the value with the minimum error of the equation.
In one embodiment, obtaining the target channel value based on the channel parameter adjustment value and the original channel value of the image captured by each camera includes: calculating an intermediate channel value based on the channel parameter adjustment value and the original channel value of the original image captured by each camera; and mapping the intermediate channel value into a valid value interval to obtain the target channel value.
The intermediate channel value is the calculated intermediate value of the channel parameter of each pixel in each target common-view area, I' = I + F; the presence of F, however, can push the value outside [0,1], so the brightness is remapped back into the [0,1] range.
Specifically, in a well-exposed photograph (each camera computes its channel values against this standard), the darkest brightness is 0 and the brightest is 255, so the values are normalized to [0,1]. After the brightness adjustment I' = I + F, the brightest positions in the picture will exceed 1 and the darkest will fall below 0, beyond the range the system can represent. The overall brightness is therefore re-pressed back into the [0,1] range using the smoothstep function of OpenGL's GLSL, losing little detail. The R, G, B values in formula (1) lie in [0,1], so the computed maximum of Y is 1 and cannot exceed 1.
rl = min(F_1, ..., F_4)
rh = max(F_1, ..., F_4) + 1
I' = (I + F_c - rl) / (rh - rl)    (3)
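A minimal sketch of this remapping (names assumed; the application itself uses the smooth GLSL variant, while the linear form of formula (3) is shown here):

#include <algorithm>

// I: original luma in [0,1]; Fc: this camera's gain; F: the four gains.
float remapLuma(float I, float Fc, const float F[4]) {
    const float rl = std::min(std::min(F[0], F[1]), std::min(F[2], F[3]));
    const float rh = std::max(std::max(F[0], F[1]), std::max(F[2], F[3])) + 1.0f;
    return (I + Fc - rl) / (rh - rl);   // I' lies in [0,1] again
}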
In the above embodiment, the calculated intermediate channel value is mapped to the target interval, so that details are not lost.
In one embodiment, the method further comprises: calibrating each camera to obtain camera parameters. Extracting the target common-view area in the image and obtaining an initial image from it then includes: extracting the target common-view area in each image, and processing the target common-view area according to the camera parameters to obtain an initial image.
The camera parameters need to be calibrated in advance and include, but are not limited to, the camera position, camera height, camera mounting angle, and so on. With the camera parameters, a surround view can be generated from the images captured by the cameras; they are also the basis for obtaining the initial images of the target common-view areas, i.e., for converting the images captured by the cameras to the same viewing angle, for example a top-down view, which guarantees the premise for stitching and makes the areas comparable.
In one embodiment, referring to fig. 7, fig. 7 is a flowchart of a camera calibration process in one embodiment, where calibrating each camera to obtain camera parameters includes: obtaining the camera positions of all cameras according to the external parameters of all cameras; calculating the length of the vehicle body, the width of the vehicle and the height of the camera based on the position of the camera; calculating model parameters of the looking-around model according to the vehicle width and the length of the vehicle body; determining an initial observation height value and an initial observation radius value of an observation position according to the lowest bowl height in the model parameters, and adjusting the initial observation height value and the initial observation radius value until the observation visual angle meets the requirement, so as to obtain a target observation position; and calculating according to the target observation position to obtain camera parameters of each camera.
Wherein, when generating the surround view, a surround-view model is needed: a bowl-like curved surface with a virtual vehicle body at the center of the bowl. The surround-view model generation module generates this bowl-shaped surface from the parameters obtained during the calibration process and its own algorithms. When the surround-view system runs, the pictures captured by the cameras are attached to the bowl-shaped surface through OpenGL, and what the surround-view system displays is the view, looking outward from the virtual vehicle body, of the vehicle's surroundings attached to the bowl-shaped surface.
In this embodiment, a 3D bowl model is first constructed, based on a circular or elliptical bottom; fig. 8 shows such a surround-view model in one embodiment.
The center point of the vehicle bottom plane is located at the origin (0, 0, 0), with the vehicle length parallel to the y-axis, the vehicle width parallel to the x-axis, and the vehicle height parallel to the z-axis. The bottom plane of the bowl model lies in the xoy plane with its center at the origin. When the surround-view model selects the 3D bowl model with a circular bottom, the bowl bottom is the region x^2 + y^2 <= minR^2 at z = 0, where minR is the radius of the circular bowl-bottom surface, and the bowl wall rises from this boundary.
When the surround-view model selects the 3D bowl model with an elliptical bottom, the bowl bottom is the region x^2/a^2 + y^2/b^2 <= 1 at z = 0, where a is the semi-major axis and b the semi-minor axis of the ellipse, and the bowl wall rises from this boundary.
After the 3D bowl model is built, the viewing angle is calculated; the surrounding view is rendered with perspective projection, as shown in fig. 9. The eye position moves on a circle of radius eyeR in the plane z = eyeH (parallel to the xy plane) centered at (0, 0, eyeH); the corresponding target position lies on a circle of radius targetR in the xy plane centered at the origin; and the line from the eye position to the corresponding target position crosses the z-axis.
The terminal obtains the vehicle width from the camera extrinsics and denotes it carW; the vehicle length is denoted carL, the vehicle height carH, and the average camera height cameraH. The height of the lowest point of the bowl model's upper edge is mMinHeight, and the corresponding three-dimensional point is denoted mMinHeightP. The FOV value of the target viewing angle is targetFov. The bowl bottom radius is denoted bowlR. The eye height is denoted eyeH, and the distance of the eye's projection on the xoy plane from the origin is denoted eyeR. The calculation proceeds as shown in fig. 7. The camera's coordinate position is computed from its extrinsics: with rotation vector rcw and translation vector tcw, cv::Rodrigues(-rcw, R) gives the rotation matrix R, and the three-dimensional coordinate vector is P = -R·tcw.
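This step can be written directly with OpenCV, as in the following sketch (function name assumed; the calls mirror the text above):

#include <opencv2/calib3d.hpp>

// rcw: 3x1 rotation vector; tcw: 3x1 translation vector (camera extrinsics).
cv::Mat cameraPosition(const cv::Mat& rcw, const cv::Mat& tcw) {
    cv::Mat R;
    cv::Rodrigues(-rcw, R);   // rotation vector -> 3x3 rotation matrix
    return -R * tcw;          // 3x1 camera position P = -R*tcw
}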
Still taking the four cameras in fig. 3 as an example, denote the front camera position P_1, the left camera P_2, the right camera P_3 and the rear camera P_4. The following relationships hold:
Vehicle length: carL = P_1.y - P_4.y
Vehicle width: carW = P_3.x - P_2.x
Camera height: cameraH = (P_1.z + P_2.z + P_3.z + P_4.z) / 4
Vehicle height: carH = carW × 1459/1901 (the two constants are the aspect ratio of the vehicle model; other values may be selected in other embodiments).
The semi-minor axis of the ellipse or the radius of the circle is then calculated: the circular bottom radius is bowlR = 2 × carW; the elliptical bottom has semi-minor axis b = 1.8 × carW and semi-major axis a = 1.8 × carL, with bowlR = b (these constants are adjusted according to the final display effect).
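These relations can be collected into a small helper, sketched below under the reconstructed constants above (the struct and function names are illustrative, not the application's):

#include <opencv2/core.hpp>

struct BowlParams { float carL, carW, carH, cameraH, bowlR, a, b; };

BowlParams deriveParams(const cv::Point3f& P1, const cv::Point3f& P2,
                        const cv::Point3f& P3, const cv::Point3f& P4) {
    BowlParams p{};
    p.carL    = P1.y - P4.y;                           // vehicle length
    p.carW    = P3.x - P2.x;                           // vehicle width
    p.cameraH = (P1.z + P2.z + P3.z + P4.z) / 4.0f;    // mean camera height
    p.carH    = p.carW * 1459.0f / 1901.0f;            // vehicle-model aspect ratio
    p.b       = 1.8f * p.carW;                         // ellipse semi-minor axis
    p.a       = 1.8f * p.carL;                         // ellipse semi-major axis
    p.bowlR   = p.b;                                   // elliptical-bottom case
    return p;
}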
Specifically, fig. 10 shows the actual bowl model projected by the cameras; the rim of the bowl has height fluctuations (determined by the cameras' viewing ranges), since the three-dimensional coordinates of the bowl rim are mapped from the upper edges of the camera photos. During the mapping, the three-dimensional coordinate values of each mapped point are recorded; the minimum z value is the mMinHeight sought, and the corresponding point is the bowl rim's lowest point mMinHeightP.
The eye and target positions and a 2D simplified model of the vehicle and bowl model are shown in fig. 11. Two cases are distinguished by the relationship between the bowl model height and the vehicle height: the bowl model lower than the vehicle, and the bowl model higher than the vehicle. By reducing the dimension of the three-dimensional model, a simple mathematical model is established, and the optimal eye and target positions are then calculated. The basic principle is as follows: the target point's radius targetR moves leftward from the origin by at most bowlR/3; the eye position radius eyeR moves rightward from its initial value by at most bowlR/2 (i.e., not past the vehicle's left boundary); and the best viewpoint is found when the angle between vector a and vector b is exactly half of targetFov (the angle between vector c and vector b being targetFov). Here vector a is the vector from the eye position to the target position, vector b the vector from the eye position to the lowest point, and vector c the vector from the eye position to the point where the bowl bottom edge meets the negative half of the x-axis.
For ease of understanding, the calculation is described in two cases. In the first case the bowl model is lower than the vehicle: a variable k is introduced (k < hSteps) with minBowlH = mMinHeight + stepH × k; the variable i is then increased sequentially, with tEyeR = eyeR - (bowlR/2/steps) × i, and for each i the variable j is increased sequentially, with tTargetR = (bowlR/3/steps) × j; the best viewing angle is found when the angle between vectors a and b is half of targetFov.
In the second case the bowl model is higher than the vehicle: a variable k is introduced (k < hSteps) with eyeH = eyeHBase × (1.0 + k × 0.05); the variable i is increased sequentially, with tEyeR = eyeR - (bowlR/2/steps) × i, and the variable j is increased sequentially, with tTargetR = (bowlR/3/steps) × j; the optimal viewing angle is found when the angle between vectors a and b is half of targetFov.
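A heavily hedged sketch of this search in the 2D simplified model follows (the variable names are reconstructed from the garbled source; the outer loop over k that raises the bowl or the eye, the step counts and the tolerance are assumptions):

#include <cmath>

struct Vec2 { float x, z; };

static float angleBetween(Vec2 a, Vec2 b) {
    float dot = a.x * b.x + a.z * b.z;
    float na  = std::sqrt(a.x * a.x + a.z * a.z);
    float nb  = std::sqrt(b.x * b.x + b.z * b.z);
    return std::acos(dot / (na * nb));
}

// Eye at (tEyeR, eyeH) on the right, target at (-tTargetR, 0) on the left,
// lowestRim = mMinHeightP reduced to 2D. Stops when the angle between
// vector a (eye->target) and vector b (eye->lowest rim point) is targetFov/2.
bool searchViewpoint(float eyeR, float eyeH, float bowlR, float targetFovRad,
                     Vec2 lowestRim, int steps, float& bestEyeR, float& bestTargetR) {
    for (int i = 0; i < steps; ++i) {
        float tEyeR = eyeR - (bowlR / 2 / steps) * i;          // shrink eye radius
        for (int j = 0; j < steps; ++j) {
            float tTargetR = (bowlR / 3 / steps) * j;          // move target leftward
            Vec2 a{ -tTargetR - tEyeR, -eyeH };                // eye -> target
            Vec2 b{ lowestRim.x - tEyeR, lowestRim.z - eyeH }; // eye -> rim low point
            if (std::fabs(angleBetween(a, b) - targetFovRad / 2) < 1e-3f) {
                bestEyeR = tEyeR; bestTargetR = tTargetR;
                return true;
            }
        }
    }
    return false;   // sketch: the outer k loop would adjust eyeH or bowl height
}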
The top view adopts orthographic (front) projection, with eye position (0, 0, eyeZ), where eyeZ = carH × 2 (the factor 2 is adjusted according to the view effect and takes other values in other embodiments). The target position is the origin; the viewing range's half-width in the x direction is a = carW × 2.5/2 and in the y direction b = a × ratio, where ratio is the aspect ratio of the display window.
The other 9 views use perspective projection; any constant not specifically explained below is a coefficient tuned according to the visual effect.
The left-front view parameters are as follows: double lf_eye_x=carW/2×1.2×(-1) (division by 2 gives the vehicle's left-edge coordinate since the vehicle center is at the origin; the factor 1.2 places the eye to the left of the vehicle's left edge; the factor -1 puts the coordinate on the negative half-axis);
double lf_eye_y=carL/5×(-1);
double lf_eye_z=cameraH×2.5;
double lf_target_x=lf_eye_x;
double lf_target_y=carL×5/7;
double lf_target_z=0;
double fov=45;
The right-front view parameters are calculated as follows:
double rf_eye_x=lf_eye_x×(-1);
double rf_eye_y=lf_eye_y;
double rf_eye_z=lf_eye_z;
double rf_target_x=rf_eye_x;
double rf_target_y=lf_target_y;
double rf_target_z=0;
double fov=45;
The left-rear view parameters are calculated as follows:
double lb_eye_x=lf_eye_x;
double lb_eye_y=lf_eye_y×(-1);
double lb_eye_z=lf_eye_z;
double lb_target_x=lb_eye_x;
double lb_target_y=lf_target_y×(-1);
double lb_target_z=0;
double fov=45;
The right-rear view parameters are calculated as follows:
double rb_eye_x=lb_eye_x×(-1);
double rb_eye_y=lb_eye_y;
double rb_eye_z=lb_eye_z;
double rb_target_x=rb_eye_x;
double rb_target_y=lb_target_y;
double rb_target_z=0;
double fov=45;
The following four views (front, rear, left, right) all look vertically downward from the eye position, parallel to the z-axis, so only the eye position needs to be listed; the target's x and y equal those of the eye, and its z = 0.
the forward looking parameters were calculated as follows:
double f_x=0;
double f_y=(carL/2)×1.2; (division by 2 gives half the vehicle length; the factor 1.2 makes it slightly greater than half the vehicle, adjusted by the final visual effect)
double f_z=carW/tan(targetFov/2);
The rearview parameters were calculated as follows:
double b_x=0;
double b_y=-f_y;
double b_z=f_z;
the left-view parameters were calculated as follows:
double l_x=-carW;
double l_y=0;
double l_z=carW/tan(targetFov/2);
the right-view parameters were calculated as follows:
double r_x=-l_x;
double r_y=0;
double r_z=l_z;
the rearview mirror parameters were calculated as follows:
double rear_eye_x=0;
double rear_eye_y=carL/4×(-1);
double rear_eye_z=cameraH×2.3;
double rear_target_x=0;
double rear_target_y=carL×(-0.9);
double rear_target_z=0;
This embodiment automates the calculation of the viewing angles, which not only improves on the efficiency of manual adjustment but also adapts automatically to the viewing angles of different vehicles as actually installed, giving the driver the widest range of effective viewing angles.
In one embodiment, performing color space conversion on the initial image to obtain a channel separated image to be processed includes: and carrying out color space conversion on each pixel in the initial image in parallel to obtain the channel-separated image to be processed.
Since the color space conversion must be computed pixel by pixel, it greatly affects overall efficiency, especially at high output resolutions; each pixel in the initial image is therefore processed in parallel to improve efficiency. In a preferred embodiment, the computation is hardware-accelerated, for example on a GPU, whose many computing units can evaluate every pixel's value in parallel. GPU 3D computing and rendering interfaces include D3D, OpenGL, OpenGL ES and OpenCL; this embodiment uses OpenGL/OES as the example of GPU hardware acceleration.
Before the vehicle-mounted surround-view system is used, a calibration process is usually carried out to acquire the cameras' positions, angles and lens parameters and to generate the surround-view model from them. When the system is in use, the camera images are acquired and the stitched surround view is drawn according to the surround-view model.
For the color space conversion, the terminal draws model M2 with OpenGL/OES to generate a low-resolution Y grayscale image; the GLSL script converts RGB format to Y format, accelerated by the GPU.
The overall calculation of the channel parameter adjustment values can run on the CPU, for example computing the average brightness of the blocks of the Y grayscale image. (This step could be GPU-accelerated through OpenCL, but since the Y grayscale image has low resolution the speedup is limited while the complexity and overhead increase considerably.) The CPU then solves the equation set to obtain F_1-F_4, rl and rh.
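The CPU side of this step is simple; a minimal sketch follows (assuming a tightly packed 8-bit Y image and one rectangle per target common-view area; names are illustrative):

#include <cstdint>
#include <vector>

struct Block { int x, y, w, h; };

// Mean luma of one target common-view area, normalized to [0,1];
// this is the D_c^n fed into the equation system above.
float blockMeanLuma(const std::vector<uint8_t>& yImg, int stride, Block r) {
    uint64_t sum = 0;
    for (int row = r.y; row < r.y + r.h; ++row)
        for (int col = r.x; col < r.x + r.w; ++col)
            sum += yImg[static_cast<size_t>(row) * stride + col];
    return static_cast<float>(sum) / (255.0f * r.w * r.h);
}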
In the above embodiment, the processing efficiency is improved by parallel computing.
In one embodiment, as shown in fig. 12, a method for generating a surround view is provided; the method is illustrated as applied to the vehicle terminal in fig. 1 and includes the following steps:
S1202: And acquiring the images captured by each camera.
S1204: And processing the images captured by each camera according to the target channel values and the surround-view model to generate a surround view, the target channel values being calculated with the channel value calculation method of any embodiment above.
Specifically, for the calculation of the target channel values see above: the initial images of the target common-view areas correspond to model M2 above, whose visible region is smaller and whose accuracy is lower. In this embodiment, model M1, i.e., the surround view itself, is drawn normally.
In the above embodiment, the target channel values are obtained by calculation, so the stitched surround view has no visual difference; and since only the target common-view areas are extracted and color-space-converted during the calculation, the amount of data to compute is reduced and the calculation efficiency improved.
In one embodiment, processing the images captured by each camera according to the target channel values and the surround-view model to generate the surround view includes: processing, in a parallel manner, each pixel in the images captured by the cameras according to the target channel values and the surround-view model to generate the surround view.
The channel value adjustment in the surround view must be computed pixel by pixel, which greatly affects overall efficiency, especially at high output resolutions; to improve efficiency, each pixel in the images captured by the cameras is therefore processed in parallel according to the target channel values to generate the surround view.
Specifically, the vehicle terminal draws model M1 with OpenGL/OES and passes in the F_1-F_4, rl and rh parameters. The RGB-to-YUV conversion, the channel value adjustment with the GLSL built-in function y = smoothstep(rl, rh, y + F_c), and the YUV-to-RGB conversion are all done in the GLSL script, further exploiting the GPU's hardware acceleration capabilities.
In one embodiment, processing, in a parallel manner, each pixel in the images captured by each camera according to the target channel values and the surround-view model to generate the surround view includes: processing the images captured by each camera according to the camera parameters to obtain the target pixel points of the surround view; converting each target pixel point from RGB space to YUV space in parallel; adjusting the value of the target channel in YUV space according to the target channel value in parallel; and converting each pixel point in YUV space back to RGB space according to the adjusted channel value in parallel, and generating the surround view with the surround-view model.
The terminal converts each target pixel point from RGB space to YUV space in parallel, which may be computed with the standard BT.601 formulas consistent with formula (1):
Y = 0.299×R + 0.587×G + 0.114×B
U = -0.147×R - 0.289×G + 0.436×B
V = 0.615×R - 0.515×G - 0.100×B
The terminal converts each pixel point in YUV space back to RGB space according to the adjusted channel value in parallel, which may be computed as:
R = Y + 1.140×V
G = Y - 0.394×U - 0.581×V
B = Y + 2.032×U
Specifically, referring to fig. 12, each pixel point is converted from RGB space to YUV space; the corresponding channel value is then adjusted, for example the luminance channel parameter, which is done by the GLSL built-in function y = smoothstep(rl, rh, y + F_c); and the pixel point is finally converted from YUV space back to RGB space.
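A hedged sketch of such a fragment shader follows (the application's actual GLSL is not reproduced in this text; the uniform names and the BT.601 coefficients matching the formulas above are assumptions), embedded as a C++ string for use with OpenGL ES 2.0:

static const char* kFragmentShader = R"glsl(
    precision mediump float;
    uniform sampler2D uCamTex;   // camera image mapped onto the bowl model
    uniform float uFc;           // this camera's gain F_c
    uniform float uRl;           // remap lower bound rl
    uniform float uRh;           // remap upper bound rh
    varying vec2 vTexCoord;

    void main() {
        vec3 rgb = texture2D(uCamTex, vTexCoord).rgb;
        // RGB -> YUV (BT.601)
        float y = dot(rgb, vec3(0.299, 0.587, 0.114));
        float u = dot(rgb, vec3(-0.147, -0.289, 0.436));
        float v = dot(rgb, vec3(0.615, -0.515, -0.100));
        // channel value adjustment with the GLSL built-in smoothstep
        y = smoothstep(uRl, uRh, y + uFc);
        // YUV -> RGB
        vec3 outRgb = vec3(y + 1.140 * v,
                           y - 0.394 * u - 0.581 * v,
                           y + 2.032 * u);
        gl_FragColor = vec4(outRgb, 1.0);
    }
)glsl";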
In the above embodiment, the brightness, contrast and/or saturation discontinuities in the stitching areas of the vehicle-mounted surround view caused by inconsistent camera channel values are essentially eliminated, giving a more realistic experience. The channel parameter adjustment values are calculated using the black square areas, but note that the brightness adjustment is applied to the whole picture captured by each camera, not only to the area where the black block lies.
In addition, in this embodiment GPU hardware can accelerate the brightness-equalization process, reducing the impact on the running speed of the vehicle-mounted surround-view system.
In one embodiment, the method further comprises: receiving a channel value adjustment instruction carrying an adjustment coefficient; and adjusting the target channel value according to the adjustment coefficient.
Specifically, in this embodiment the whole surround view is processed, including overall adjustment of brightness, saturation and contrast.
In practical applications, I' = I × f adjusts the luminance, where f is the luminance adjustment coefficient; U' = U × Contrast adjusts the contrast, where Contrast is the contrast adjustment coefficient; and V' = V × Saturation adjusts the saturation, where Saturation is the saturation adjustment coefficient.
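A minimal sketch of this global adjustment (applied per pixel in YUV space; the struct and function are illustrative, while the coefficient names follow the text):

struct Yuv { float y, u, v; };

Yuv adjustGlobal(Yuv p, float f, float contrast, float saturation) {
    return { p.y * f,             // I' = I x f          (luminance)
             p.u * contrast,      // U' = U x Contrast   (contrast)
             p.v * saturation };  // V' = V x Saturation (saturation)
}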
In the above embodiment, not only are the channel values of the different cameras adjusted, but the overall parameters of the surround view can also be adjusted.
It should be understood that, although the steps in the flowcharts related to the above embodiments are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiments of the application also provide a channel value calculation apparatus and a surround view generation apparatus implementing the channel value calculation method and surround view generation method above. The solutions they provide are implemented similarly to those described in the methods above, so for the specific limitations of the one or more apparatus embodiments below, refer to the limitations of the channel value calculation method and surround view generation method above, which are not repeated here.
In one embodiment, as shown in fig. 13, a channel value calculation apparatus is provided, including: a first drawing module 1301, a first conversion module 1302, a channel parameter adjustment value calculation module 1303, and a target channel value calculation module 1304, wherein:
the first drawing module 1301 is configured to obtain images acquired by each camera, extract a target common-view area in the images, and obtain an initial image according to the target common-view area.
A first conversion module 1302, configured to perform color space conversion on the initial image to obtain a channel-separated image to be processed.
The channel parameter adjustment value calculation module 1303 is configured to calculate a channel parameter adjustment value of each camera according to the image to be processed.
The target channel value calculation module 1304 is configured to obtain a target channel value based on the channel parameter adjustment value and the original channel values of the images acquired by the cameras.
In one embodiment, the channel parameter adjustment value calculation module 1303 may include:
a target channel parameter calculation unit, configured to determine the target channel value according to the current channel value of each target common-view area in the image to be processed and the channel parameter adjustment value to be calculated;
a first equation generating unit, configured to obtain a channel parameter equation from the condition that the target channel values of adjacent cameras are approximately equal;
a second equation generating unit, configured to obtain a constraint equation from the condition that the accumulated total of the target channel values of all cameras remains unchanged;
an equation set generating unit, configured to obtain a target equation set from the channel parameter equation and the constraint equation; and
a channel parameter adjustment value calculation unit, configured to solve the target equation set to obtain the channel parameter adjustment value of each camera.
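To make the construction concrete, the following sketch assembles and solves such a system by least squares for a four-camera ring. It assumes a multiplicative adjustment factor per camera, uses the mean channel value of each shared common-view area as the per-camera measurement, and replaces the total-invariance constraint with the simpler condition that the gains sum to the number of cameras; these choices, along with the names and sample numbers, are illustrative rather than taken from the text.

    import numpy as np

    def solve_channel_gains(pair_means):
        """pair_means: list of (i, j, m_i, m_j) tuples, where cameras i and j
        share a common-view area whose mean channel value is m_i in camera i's
        image and m_j in camera j's image. Returns one gain per camera."""
        n = 1 + max(max(i, j) for i, j, _, _ in pair_means)
        rows, rhs = [], []
        for i, j, m_i, m_j in pair_means:
            row = np.zeros(n)
            row[i], row[j] = m_i, -m_j      # f_i * m_i - f_j * m_j ~ 0
            rows.append(row)
            rhs.append(0.0)
        row = np.ones(n)                     # simplified total-invariance
        rows.append(row)                     # constraint: gains sum to n,
        rhs.append(float(n))                 # keeping the overall level fixed
        A, b = np.vstack(rows), np.asarray(rhs)
        gains, *_ = np.linalg.lstsq(A, b, rcond=None)
        return gains

    # Four cameras in a ring: front-right, right-rear, rear-left, left-front
    gains = solve_channel_gains([(0, 1, 120.0, 110.0), (1, 2, 90.0, 100.0),
                                 (2, 3, 130.0, 125.0), (3, 0, 105.0, 115.0)])

Because the overlaps close into a ring, the system is overdetermined, which is why a least-squares solution rather than an exact solve is the natural fit here.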
In one embodiment, the target channel value calculation module 1304 includes:
an intermediate value calculation unit, configured to calculate an intermediate channel value based on the channel parameter adjustment value and the original channel value of the original image acquired by each camera; and
a mapping unit, configured to map the intermediate channel value to a reasonable value interval to obtain the target channel value.
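The text does not fix how the intermediate value is mapped; one plausible reading, sketched below under that assumption, rescales the channel when the adjustment overshoots the valid range rather than clipping outright:

    import numpy as np

    def map_to_interval(intermediate, lo=0.0, hi=255.0):
        """If the adjustment pushed values above hi, rescale the channel so its
        peak lands at hi, then clamp. Plain clipping would also satisfy the
        mapping step, but it discards highlight detail."""
        peak = float(np.max(intermediate))
        if peak > hi:
            intermediate = intermediate * (hi / peak)
        return np.clip(intermediate, lo, hi)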
In one embodiment, the channel numerical value calculating device further includes:
a calibration module, configured to calibrate each camera to obtain camera parameters;
the first drawing module 1301 is further configured to extract the target common-view area in each image and process the target common-view area according to the camera parameters to obtain the initial image.
In one embodiment, the calibration module may include:
a camera position calculation unit, configured to calculate the camera position of each camera from the camera's extrinsic parameters;
a vehicle parameter calculation unit, configured to calculate the vehicle body length, vehicle width, and camera height based on the camera positions;
a model parameter calculation unit, configured to calculate model parameters of the look-around model from the vehicle width and body length;
a target observation position calculation unit, configured to determine an initial observation height value and an initial observation radius value for the observation position from the lowest bowl height in the model parameters, and to adjust them until the observation viewing angle meets the requirement, obtaining the target observation position; and
a camera parameter calculation unit, configured to calculate the camera parameters of each camera from the target observation position.
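For the first two units, a common way to recover a camera's world position from its extrinsic parameters is C = -RᵀT under the usual world-to-camera convention; the patent does not fix a convention, so the sketch below, including how the vehicle dimensions are read off the four camera centers, is an assumption:

    import numpy as np

    def camera_center(R, t):
        """World-frame camera position from the extrinsic parameters, assuming
        the world-to-camera convention x_cam = R @ x_world + t,
        so that C = -R.T @ t."""
        return -R.T @ t

    def vehicle_dimensions(front, rear, left, right):
        """Rough body length, vehicle width, and camera height from the four
        camera centers; which distances to read off is an illustrative choice."""
        body_length = float(np.linalg.norm(front - rear))
        vehicle_width = float(np.linalg.norm(left - right))
        camera_height = float(np.mean([p[2] for p in (front, rear, left, right)]))
        return body_length, vehicle_width, camera_height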
In one embodiment, the first conversion module 1302 is further configured to perform color space conversion on each pixel in the initial image in parallel, so as to obtain a channel-separated image to be processed.
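Per-pixel color space conversion is a textbook data-parallel operation; the sketch below vectorizes it with numpy, and the same arithmetic maps directly onto a GPU shader or CUDA kernel. The BT.601 full-range coefficients are an assumption, since the text does not name a standard:

    import numpy as np

    # BT.601 full-range RGB -> YUV matrix (assumed; the patent names no standard)
    RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                        [-0.147, -0.289,  0.436],
                        [ 0.615, -0.515, -0.100]], dtype=np.float32)

    def rgb_to_yuv(rgb):
        """Convert every pixel of an (H, W, 3) image at once, yielding the
        channel-separated image to be processed."""
        return rgb.astype(np.float32) @ RGB2YUV.T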
In one embodiment, as shown in fig. 14, there is provided a look-around view generating apparatus, including: an image acquisition module 1401 and a second rendering module 1402, wherein:
an image acquisition module 1401, configured to acquire images acquired by each camera;
The second drawing module 1402 is configured to process the images acquired by each camera according to the target channel value and the look-around model to generate a ring view, where the target channel value is calculated by the channel numerical value calculating device of any one of the above embodiments.
In one embodiment, the second drawing module 1402 is further configured to process each pixel in the image acquired by each camera according to the target channel value and the look-around model in a parallel processing manner to generate a ring view.
In one embodiment, the second drawing module 1402 may include:
a target pixel point acquisition unit, configured to process the images acquired by each camera according to the camera parameters to obtain the target pixel points in the ring view;
a first conversion unit, configured to convert each target pixel point from RGB space to YUV space in parallel;
an adjusting unit, configured to adjust the channel value of the target channel in YUV space in parallel according to the target channel value; and
a second conversion unit, configured to convert each pixel point from YUV space back to RGB space in parallel according to the adjusted channel value, and to generate the ring view using the look-around model.
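Putting the last three units together, a compact sketch of the dodging step might look as follows, assuming Y (channel 0) as the target channel and the same illustrative conversion matrix as above:

    import numpy as np

    RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                        [-0.147, -0.289,  0.436],
                        [ 0.615, -0.515, -0.100]], dtype=np.float32)
    YUV2RGB = np.linalg.inv(RGB2YUV)

    def dodge_pixels(rgb, gain, channel=0):
        """Convert the sampled target pixels to YUV, scale the target channel
        by the camera's gain, and convert back to RGB for compositing into
        the ring view."""
        yuv = rgb.astype(np.float32) @ RGB2YUV.T
        yuv[..., channel] *= gain
        return np.clip(yuv @ YUV2RGB.T, 0.0, 255.0)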
In one embodiment, the above look-around view generating apparatus further includes:
a receiving module, configured to receive a channel value adjustment instruction, where the channel value adjustment instruction carries an adjustment coefficient; and
an adjusting module, configured to adjust the target channel value according to the adjustment coefficient.
The channel numerical value calculation device and the look-around image generation device described above, and each module within them, may be implemented in whole or in part by software, hardware, or a combination of the two. Each module may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal such as a vehicle-mounted terminal, and whose internal structure may be as shown in fig. 15. The computer device comprises a processor, a memory, a communication interface, a display screen, an input device, and a camera connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication may be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements the channel numerical value calculation method and the look-around graph generation method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse. The camera may be an ordinary camera or a vehicle-mounted camera.
It will be appreciated by those skilled in the art that the structure shown in fig. 15 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange components differently.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.

Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by instructing the relevant hardware through a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may comprise the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM) or external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples express only a few embodiments of the application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the application, all of which fall within the protection scope of the application. Therefore, the protection scope of the application shall be subject to the appended claims.

Claims (15)

1. A channel numerical calculation method, characterized in that the channel numerical calculation method comprises:
acquiring images acquired by each camera, extracting a target common view area in the images, and acquiring an initial image according to the target common view area;
performing color space conversion on the initial image to obtain a channel-separated image to be processed;
calculating to obtain channel parameter adjustment values of the cameras according to the image to be processed;
And obtaining a target channel value based on the channel parameter adjustment value and the original channel value of the image acquired by each camera.
2. The method according to claim 1, wherein the calculating the channel parameter adjustment value of each camera according to the image to be processed includes:
determining a target channel value according to the current channel value of each target common view area in the image to be processed and the channel parameter adjustment value to be calculated;
obtaining a channel parameter equation according to the equality of the target channel values of the adjacent cameras;
obtaining a constraint equation according to the condition that the accumulated total of the target channel values of each camera remains unchanged;
obtaining a target equation set according to the channel parameter equation and the constraint equation;
and solving the target equation set to obtain channel parameter adjustment values of all cameras.
3. The method according to claim 1, wherein the obtaining the target channel value based on the channel parameter adjustment values and the original channel values of the images acquired by the respective cameras includes:
calculating an intermediate channel value based on the channel parameter adjustment value and the original channel value of the image acquired by each camera;
And mapping the intermediate channel value to a reasonable value interval to obtain a target channel value.
4. The channel numerical calculation method according to claim 1, characterized in that the method further comprises:
calibrating each camera to obtain camera parameters;
the extracting the target common view area in the image, and obtaining an initial image according to the target common view area includes:
and extracting a target common-view area in each image, and processing the target common-view area according to the camera parameters to obtain an initial image.
5. The method according to claim 4, wherein calibrating each of the cameras to obtain camera parameters comprises:
calculating according to the external parameters of each camera to obtain the camera position of each camera;
calculating the length of the vehicle body, the width of the vehicle and the height of the camera based on the position of the camera;
calculating model parameters of a look-around model according to the vehicle width and the vehicle body length;
determining an initial observation height value and an initial observation radius value of an observation position according to the lowest bowl height in the model parameters, and adjusting the initial observation height value and the initial observation radius value until the observation visual angle meets the requirement, so as to obtain a target observation position;
And calculating camera parameters of each camera according to the target observation position.
6. The method according to any one of claims 1 to 5, wherein said performing color space conversion on said initial image to obtain a channel separated image to be processed comprises:
and carrying out color space conversion on each pixel in the initial image in parallel to obtain the channel-separated image to be processed.
7. A method for generating a look-around view, characterized in that the method comprises the following steps:
acquiring images acquired by each camera;
and processing the images acquired by each camera according to the target channel value and the look-around model to generate a ring view, wherein the target channel value is calculated according to the channel value calculation method of any one of claims 1 to 6.
8. The method for generating a view around according to claim 7, wherein the processing the image acquired by each camera according to the target channel value and the view around model to generate a view around comprises:
and processing each pixel in the images acquired by each camera according to the target channel value and the look-around model in a parallel processing manner to generate a ring view.
9. The method for generating a look-around view according to claim 8, wherein the processing each pixel in the images acquired by each camera according to the target channel value and the look-around model in a parallel processing manner to generate a ring view comprises:
processing the images acquired by each camera according to camera parameters to obtain target pixel points in the ring view;
converting each target pixel point from RGB space to YUV space in parallel;
adjusting the channel value of the target channel in YUV space in parallel according to the target channel value;
and converting each pixel point from YUV space back to RGB space in parallel according to the adjusted channel value, and generating the ring view by using the look-around model.
10. The method of generating a look-around view according to claim 7, further comprising:
receiving a channel value adjustment instruction, wherein the channel value adjustment instruction carries an adjustment coefficient;
and adjusting the target channel value according to the adjustment coefficient.
11. A channel numerical calculation device, characterized in that the channel numerical calculation device comprises:
the first drawing module is used for acquiring images acquired by each camera, extracting a target common view area in the images and obtaining an initial image according to the target common view area;
The first conversion module is used for carrying out color space conversion on the initial image to obtain a channel-separated image to be processed;
the channel parameter adjustment value calculation module is used for calculating the channel parameter adjustment value of each camera according to the image to be processed;
and the target channel value calculation module is used for obtaining a target channel value based on the channel parameter adjustment value and the original channel value of the image acquired by each camera.
12. An apparatus for generating a look-around view, the apparatus comprising:
the image acquisition module is used for acquiring images acquired by each camera;
the second drawing module, configured to process the images acquired by each camera according to a target channel value and a look-around model to generate a ring view, wherein the target channel value is calculated by the channel numerical value calculation device according to claim 11.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 or 7 to 10 when the computer program is executed.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6 or 7 to 10.
15. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 6 or 7 to 10.
CN202210567975.7A 2022-05-24 2022-05-24 Channel numerical value calculation method, and method, device, equipment and medium for generating look-around graph Pending CN117177076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210567975.7A CN117177076A (en) 2022-05-24 2022-05-24 Channel numerical value calculation method, and method, device, equipment and medium for generating look-around graph

Publications (1)

Publication Number: CN117177076A; Publication Date: 2023-12-05

Family

ID=88933995

Family Applications (1)

Application Number: CN202210567975.7A; Status: Pending; Publication: CN117177076A

Country Status (1)

Country Link
CN (1) CN117177076A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108012078A (en) * 2017-11-28 2018-05-08 广东欧珀移动通信有限公司 Brightness of image processing method, device, storage medium and electronic equipment
CN109741455A (en) * 2018-12-10 2019-05-10 深圳开阳电子股份有限公司 A kind of vehicle-mounted stereoscopic full views display methods, computer readable storage medium and system
CN110443771A (en) * 2019-08-16 2019-11-12 同济大学 It is vehicle-mounted to look around panoramic view brightness and colour consistency method of adjustment in camera system
CN110753217A (en) * 2019-10-28 2020-02-04 黑芝麻智能科技(上海)有限公司 Color balance method and device, vehicle-mounted equipment and storage medium
CN113689368A (en) * 2020-05-18 2021-11-23 上海赫千电子科技有限公司 Automatic illumination consistency adjusting method applied to vehicle-mounted all-around image
CN113034616A (en) * 2021-03-31 2021-06-25 黑芝麻智能科技(上海)有限公司 Camera external reference calibration method and system for vehicle all-round looking system and all-round looking system
CN113421183A (en) * 2021-05-31 2021-09-21 中汽数据(天津)有限公司 Method, device and equipment for generating vehicle panoramic view and storage medium
CN113496474A (en) * 2021-06-15 2021-10-12 中汽创智科技有限公司 Image processing method, device, all-round viewing system, automobile and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Country or region after: China
Address after: 200135, 11th Floor, Building 3, No. 889 Bibo Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai
Applicant after: Granfei Intelligent Technology Co.,Ltd.
Address before: 200135 Room 201, No. 2557, Jinke Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai
Applicant before: Gryfield Intelligent Technology Co.,Ltd.
Country or region before: China