CN113840123A - Image processing device of vehicle-mounted image and automobile - Google Patents


Info

Publication number
CN113840123A
CN113840123A (application CN202010595463.2A)
Authority
CN
China
Prior art keywords
image
revert
illumination intensity
value
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010595463.2A
Other languages
Chinese (zh)
Other versions
CN113840123B (en)
Inventor
肖文平
石川
Current Assignee
Shanghai Hinge Electronic Technologies Co Ltd
Original Assignee
Shanghai Hinge Electronic Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Hinge Electronic Technologies Co Ltd filed Critical Shanghai Hinge Electronic Technologies Co Ltd
Priority to CN202010595463.2A priority Critical patent/CN113840123B/en
Publication of CN113840123A publication Critical patent/CN113840123A/en
Application granted granted Critical
Publication of CN113840123B publication Critical patent/CN113840123B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/68 Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
    • H04N 9/69 Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an image processing device for vehicle-mounted images, comprising at least one visible-light camera connected to a microprocessor controller. The microprocessor controller includes a virtual-exposure processing module for image processing, whose processing flow specifically includes: converting the captured image into an inverted image, obtaining the atmospheric illumination intensity and the transmission rate of the image, generating a restored image according to a preset image-restoration model, correcting the restored image, and inverting it again to obtain the processed image. By improving the prior-art methods of calculating the atmospheric illumination intensity and the transmission rate, the technical scheme of the invention makes the processed image smoother and the imaging effect better when illumination is insufficient and vehicle lamps, street lamps, or bright white objects interfere.

Description

Image processing device of vehicle-mounted image and automobile
Technical Field
The invention relates to the field of automobiles, in particular to an image processing device for vehicle-mounted images and an automobile.
Background
In the field of vehicle driving assistance, a visible-light camera is generally employed as the imaging device at the front of the vehicle. However, most target-detection techniques are premised on daytime driving, when light is sufficient and uniform and the noise in the video data acquired by the camera is low; intelligent algorithms can then be applied successfully to such scenes, achieving efficient target detection and providing driving-assistance decisions for the driver. At night, the imaging quality of the visible-light camera is often particularly poor owing to poor exposure conditions, most obviously where street-lamp coverage is incomplete (for example on rural roads). In poor light at night, the driver cannot see the surroundings clearly with the naked eye, and the visible-light camera, being under-exposed, cannot provide a better image for the driver either. As a result, the driver often fails to notice pedestrians, vehicles, and road-surface defects ahead in time, causing traffic accidents. To improve the imaging effect of the camera, patent CN105991938A provides a virtual exposure method; however, when t(x, y) approaches 0 during computation of its image-restoration model, the colour of the corresponding region becomes severely distorted.
In addition, in the application scenario of a vehicle-mounted camera, the vehicle lamps must be turned on for road illumination when light is insufficient, so the measured illumination intensity is affected by the vehicle lamps, street lamps on both sides of the road, and so on. In the prior art, the atmospheric illumination intensity A is calculated by cutting the grayscale image into a preset number of grayscale blocks in a preset segmentation order (from top to bottom and from left to right); when the blocks are smaller than a preset size, the mean and variance of the brightness of each block are obtained, and the maximum pixel brightness within the block having the largest mean and smallest variance is taken as the atmospheric illumination intensity. This calculation is feasible when there is no light interference, but when illumination is poor and there are many lamps, taking the maximum pixel brightness as the atmospheric illumination intensity over-estimates it, so the final imaging effect is poor. For example, the light intensity where a lamp is present can be much larger than where there is none; estimating with this method then sets the atmospheric illumination intensity according to the lamp position, and the estimate deviates further from the true value.
Therefore, under insufficient light, how to obtain the atmospheric illumination intensity and the transmission rate more accurately, and thereby a better virtual-exposure image-processing technique, is a difficult problem that current automobiles need to solve.
Disclosure of Invention
In view of the shortcomings of the prior art, the present invention provides an image processing device for vehicle-mounted images, comprising:
at least one visible-light camera and a microprocessor controller, the visible-light camera being connected to the microprocessor controller; the microprocessor controller includes a virtual-exposure processing module for image processing, whose processing flow specifically includes: converting the captured image into an inverted image, obtaining the atmospheric illumination intensity and the transmission rate of the image, generating a restored image according to a preset image-restoration model, correcting the restored image, and inverting it again to obtain the processed image;
the step of obtaining the atmospheric illumination intensity comprises the following steps: calculating the atmospheric illumination intensity in blocks by taking preset blocks as units, and taking the average value in the blocks as the final atmospheric illumination intensity A of the blocksΨk
Figure BDA0002554572680000021
In the above formula, AΨkIs represented at ΨkAtmospheric illumination intensity, | ΨkI denotes ΨkThe total number of all pixel points in the area in the station; i isd(x, y) is ΨkDifference values of all pixels in the local area; a isk,bkIs ΨkCoefficients within a local region; (x, y) represents two-dimensional coordinates of the pixel; the disparity value is defined as the difference between the maximum pass image value and the minimum pass image value obtained from the inverted image.
The image processing device for vehicle-mounted images further starts the virtual-exposure processing module to perform virtual-exposure processing on the captured visible-light image when the illumination intensity or visibility of the current environment is below a preset threshold.
The image processing device for vehicle-mounted images further comprises an infrared camera connected to the microprocessor controller; when the illumination intensity or visibility of the current environment is below a preset threshold, an infrared image and a visible-light image are acquired at the same viewing angle, a region of interest is selected from the infrared image and its coordinates obtained, and the image of the matching region in the corresponding visible-light image is selected for virtual-exposure processing according to the coordinates of the region of interest selected from the infrared image.
An image processing device for vehicle-mounted images, further, the maximum-channel value of the inverted image is calculated as: Imax(x,y) = max{Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y)};
the minimum-channel value of the inverted image is calculated as: Imin(x,y) = min{Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y)};
the difference value is calculated as: Id(x,y) = Imax(x,y) - Imin(x,y);
in the above formulas, Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y) represent the values of the R, G, and B channels of the inverted image, respectively.
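The maximum-channel, minimum-channel, and difference-value computations above can be sketched in a few lines. This is an illustrative Python/numpy sketch; the function name and toy values are assumptions, not taken from the patent:

```python
import numpy as np

def difference_value(inverted):
    """I_d(x, y): gap between the maximum- and minimum-channel values
    of an inverted RGB image (H x W x 3 float array)."""
    i_max = inverted.max(axis=2)   # I_max(x, y), per-pixel channel maximum
    i_min = inverted.min(axis=2)   # I_min(x, y), per-pixel channel minimum
    return i_max - i_min           # I_d(x, y)

# 1 x 2 toy image: a grey pixel (all channels equal, so I_d = 0)
# and a strongly coloured pixel (large I_d).
toy = np.array([[[100.0, 100.0, 100.0],
                 [200.0,  50.0,  10.0]]])
i_d = difference_value(toy)
```

Grey (achromatic) pixels give Id = 0, while saturated colours and coloured light sources give large Id, which is why the patent uses Id as the guidance signal for the atmospheric illumination fit.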
An image processing device for vehicle-mounted images, further, ak, bk are obtained as follows: a linear relation is established between the atmospheric illumination intensity A(x,y) and the difference values of the pixels in the local region Ψk; an objective function E(ak, bk) between A(x,y) and the brightness of the inverted image is then established and solved by linear regression or the least-squares method, giving ak, bk:

ak = [ (1/|Ψk|) Σ_{(x,y)∈Ψk} Id(x,y)·I(x,y) - μk·Īk ] / (σk + λ)

bk = Īk - ak·μk

In the above formulas, μk and σk are respectively the mean and variance of the difference values Id(x,y) of all pixels in the preset local region Ψk; I(x,y) is the brightness of a pixel of the inverted image; Īk is the average of I(x,y) over all pixels in the local region Ψk; |Ψk| is the total number of pixels in the local region Ψk; λ is an error-adjustment factor.
An image processing device for vehicle-mounted images, further, the transmission rate is obtained as follows:
Step S4A, obtain the transmission rate tLR corresponding to the maximum channel of the light-source region and the transmission rate tNR corresponding to the minimum channel of the non-light-source region, calculated respectively as:

tLR(x,y) = 1 - max_{c∈{r,g,b}} max_{(x,y)∈Ψk} [ Ic(x,y) / Ac(x,y) ]
tNR(x,y) = 1 - min_{c∈{r,g,b}} min_{(x,y)∈Ψk} [ Ic(x,y) / Ac(x,y) ]

In the above formulas, Ic(x,y) denotes the value of the r, g, or b channel of each pixel of the inverted image within the region Ψk; Ψk represents a local region centred on pixel k; Ac(x,y) is the atmospheric illumination intensity of the r, g, b channels of each pixel in the region;
Step S4B, obtain the brightness-perception coefficient α(x,y), the probability that the pixel belongs to the light-source region (its defining formula is given as an equation image in the original filing; α increases with the pixel's maximum RGB channel value);
Step S4C, calculate the final transmission rate t(x,y):
t(x,y) = tLR(x,y)·α(x,y) + tNR(x,y)·(1 - α(x,y))
An image processing device for vehicle-mounted images, further, the restored image is obtained as follows:
according to the recovery formula, the three channel values Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y), the atmospheric illumination intensity A(x,y), and the transmission rate t(x,y) are substituted to obtain the recovery values Jrevert_R(x,y), Jrevert_G(x,y), Jrevert_B(x,y) of the three channels of the restored image Jrevert(x,y);
The recovery formula of the restored image is:

Jrevert(x,y) = (Irevert(x,y) - A(x,y)) / t(x,y) + A(x,y)

The values of the three channels of the restored image are calculated as:

Jrevert_R(x,y) = (Irevert_R(x,y) - A_R(x,y)) / t(x,y) + A_R(x,y)
Jrevert_G(x,y) = (Irevert_G(x,y) - A_G(x,y)) / t(x,y) + A_G(x,y)
Jrevert_B(x,y) = (Irevert_B(x,y) - A_B(x,y)) / t(x,y) + A_B(x,y)
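The per-channel recovery step can be sketched as follows. This is an illustrative Python/numpy sketch; the clamp t_min is an added assumption to keep the division stable when t(x,y) nears 0 (the failure mode noted in the Background section), and is not part of the patent's formula:

```python
import numpy as np

def restore_channel(i_revert, a, t, t_min=0.1):
    """J = (I - A) / t + A applied to one channel.
    t_min is an assumed lower clamp, not from the patent."""
    t_safe = np.maximum(t, t_min)
    return (i_revert - a) / t_safe + a

i_revert = np.array([[120.0]])   # inverted-channel value at one pixel
a = np.array([[200.0]])          # atmospheric illumination intensity
t = np.array([[0.5]])            # transmission rate
j = restore_channel(i_revert, a, t)   # (120 - 200) / 0.5 + 200 = 40.0
```

The same function is applied once per R, G, B channel with the corresponding A_c(x,y).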
An image processing device for vehicle-mounted images, further, another process for obtaining the atmospheric illumination intensity includes: obtaining the maximum-channel image, sorting the pixel brightness values of the maximum-channel image by size, selecting the top 0.1-10% brightest pixels, taking the brightness of the original image at the same positions as the selected pixels, and calculating the average of those brightness values as the atmospheric illumination intensity;
The maximum-channel image is the image formed, during acquisition of the RGB inverted image, by the per-pixel maxima, calculated as:
Im = max{Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y)}
Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y) respectively denote the inverted images of the R, G, and B channels of the RGB image.
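The maximum-channel estimate above can be sketched as below (Python/numpy). The 25% cut-off in the example is just one value inside the patent's 0.1-10%-of-pixels range scaled up for a tiny toy image, and the names are illustrative assumptions:

```python
import numpy as np

def atmospheric_light(inverted, original_gray, percent=1.0):
    """Rank pixels by the maximum-channel image of the inverted
    picture, take the top `percent` of them, and average the
    brightness of the *original* image at those positions."""
    i_m = inverted.max(axis=2)                     # maximum-channel image I_m
    n = max(1, int(i_m.size * percent / 100.0))    # number of pixels kept
    idx = np.argsort(i_m, axis=None)[-n:]          # brightest n positions in I_m
    return float(original_gray.flat[idx].mean())   # average original brightness

inverted = np.zeros((2, 2, 3))
inverted[0, 0] = [10.0, 20.0, 250.0]          # brightest maximum-channel pixel
gray = np.array([[5.0, 90.0], [80.0, 70.0]])  # brightness of the original image
a = atmospheric_light(inverted, gray, percent=25.0)   # selects position (0, 0)
```

Averaging over a percentile band, rather than taking the single brightest pixel, is what gives the estimate its robustness against lamp light and moonlight.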
An image processing device for vehicle-mounted images, further, the transmission rate may also be obtained as follows:
Step S41, obtain the grayscale image of the inverted image:
Igray(x,y) = 0.3·Irevert_R(x,y) + 0.59·Irevert_G(x,y) + 0.11·Irevert_B(x,y)
Step S42, partition the grayscale image into blocks;
Step S43, block by block, according to the atmospheric illumination intensity A and the formula
J(x,y) = (I(x,y) - A(x,y)) / t(x,y) + A(x,y),
set the transmission rate t(x,y) of the current block to several values between 0.1 and 0.9 respectively, calculate the corresponding J(x,y) for each, and record as t1(x,y) the transmission rate whose J(x,y) maximizes the image contrast;
The contrast is calculated as:
C = (1/N) Σ_{(x,y)} (J(x,y) - J̄)²
where J(x,y) is the restored image generated at the preset transmission rate, J̄ = (1/N) Σ_{(x,y)} J(x,y) is the common-mode (mean) value of the restored image, and N is the number of pixels of each block corresponding to J(x,y);
Step S44, the transmission rate of the current block is tΨk(x,y) = p(x,y)·t1(x,y);
where p(x,y) represents a correction factor (its defining formula is given as an equation image in the original filing).
an automobile comprising any one of the above image processing devices for on-board images.
Advantageous effects:
Aiming at the poor imaging caused by insufficient exposure of current visible-light cameras, the invention provides a virtual-exposure image-processing method; by improving the prior-art calculation of the atmospheric illumination intensity and the transmission rate, the technical scheme makes the processed image smoother and the imaging effect better when illumination is insufficient and vehicle lamps, street lamps, or bright white objects interfere.
Drawings
The following drawings are only schematic illustrations and explanations of the present invention, and do not limit the scope of the present invention.
FIG. 1 is a schematic view illustrating a virtual exposure process of a vehicle-mounted image according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a vehicle-mounted image device including a visible light camera according to an embodiment of the invention.
Fig. 3 is a schematic structural diagram of a vehicle-mounted image device including an infrared camera and a visible light camera according to an embodiment of the present invention.
Detailed Description
For a more clear understanding of the technical features, objects, and effects herein, embodiments of the present invention will now be described with reference to the accompanying drawings, in which like reference numerals refer to like parts throughout. For the sake of simplicity, the drawings are schematic representations of relevant parts of the invention and are not intended to represent actual structures as products. In addition, for simplicity and clarity of understanding, only one of the components having the same structure or function is schematically illustrated or labeled in some of the drawings.
As for the control system, the functional module, application program (APP), is well known to those skilled in the art, and may take any suitable form, either hardware or software, and may be a plurality of functional modules arranged discretely, or a plurality of functional units integrated into one piece of hardware. In its simplest form, the control system may be a controller, such as a combinational logic controller, a micro-programmed controller, or the like, so long as the operations described herein are enabled. Of course, the control system may also be integrated as a different module into one physical device without departing from the basic principle and scope of the invention.
The term "connected" in the present invention may include direct connection, indirect connection, communication connection, and electrical connection, unless otherwise specified.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, values, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, values, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It should be understood that the term "vehicle" or "vehicular" or other similar terms as used herein generally includes motor vehicles such as passenger automobiles including Sport Utility Vehicles (SUVs), buses, trucks, various commercial vehicles, watercraft including a variety of boats, ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles, and other alternative fuel vehicles (e.g., fuels derived from non-petroleum sources). As referred to herein, a hybrid vehicle is a vehicle having two or more power sources, such as both gasoline-powered and electric-powered vehicles.
An image processing device for vehicle-mounted images, see fig. 2 and 3, specifically comprises: at least one visible-light camera and a microprocessor controller, the visible-light camera being connected to the microprocessor controller, mounted on the vehicle, and used to capture images;
the visible-light camera comprises a fisheye camera and is provided with a rotating part and a lifting part;
the rotating part can drive the visible-light camera to rotate, and the lifting part can drive it to rise or descend;
the image processing device also comprises an infrared camera which is connected with the micro-processing controller;
the visible-light camera and the infrared camera are connected to the microprocessor controller via the in-vehicle Ethernet bus or one of the CAN, LVDS, and MOST buses;
the micro-processing controller comprises a virtual exposure processing module for image processing, and the virtual exposure processing module comprises a virtual exposure processing process of an image;
referring to fig. 1, the virtual-exposure processing of an image includes: acquiring the picture taken by the visible-light camera, detecting the current illumination intensity with an illumination sensor, and, when the illumination intensity is below a preset threshold, starting the virtual-exposure method to perform virtual-exposure processing on the picture taken by the visible-light camera;
step S1, convert the image captured by the visible-light camera into an RGB-format image;
specifically, camera images come in various formats, such as the widely used Bayer format; when the acquired image is a Bayer image, it needs to be converted into an RGB image;
step S2: inverting the RGB format image to generate an inverted image;
step S3, obtaining the atmospheric illumination intensity;
step S4, obtaining the transmission rate of the gray image;
step S5 of generating a restored image based on a pre-configured image restoration model, the inverted image, the atmospheric light intensity, and the transmission rate;
in step S1, the image format is RGB format, and if the original image captured by the camera is a bayer format image, the bayer format image is converted into an RGB format image by interpolation;
in step S2, the RGB components of the input visible-light image are inverted to obtain the inverted image Irevert(x,y):

Irevert_c(x,y) = 255 - Ic(x,y), c ∈ {R, G, B}

Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y) respectively represent the inverted R, G, and B channel images of the pixels, and (x,y) represents the two-dimensional position coordinates of a pixel.
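Step S2 amounts to a one-line channel inversion. A minimal Python/numpy sketch (function name assumed):

```python
import numpy as np

def invert_rgb(rgb):
    """I_revert(x, y) = 255 - I(x, y), applied to every channel of
    an 8-bit RGB image; widening to int32 avoids uint8 wrap-around."""
    return (255 - rgb.astype(np.int32)).astype(np.uint8)

rgb = np.array([[[0, 128, 255]]], dtype=np.uint8)
inverted = invert_rgb(rgb)
```

The operation is an involution: inverting twice returns the original image, which is how the processed image is recovered at the end of the pipeline.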
In the prior art, one way to calculate the atmospheric illumination intensity is to select the brightest 0.1% of pixels in the dark-channel picture, map those pixel positions back to the corresponding positions in the original picture, find the highest brightness value among them, and use it as the overall atmospheric illumination intensity A. This calculation is very inaccurate, however, especially under insufficient illumination: the original RGB image is already very dark, and the dark-channel image obtained from the RGB channels deviates and produces large numbers of colour spots under the illumination of high-beam lamps and street lamps. In order to reduce the interference of vehicle lamps, street lamps, and the like with the atmospheric light value under insufficient light, while still guaranteeing real-time processing of the images acquired by the camera, one method for calculating the atmospheric light value in step S3 of this embodiment is as follows:
the maximum channel image acquisition method comprises the following steps: in acquiring the RGB reverse image, an image composed of the points having the maximum value of luminance becomes the maximum channel image Im
Im=max{Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x,y)}
And sequencing the brightness of the pixel points from the maximum channel image according to the size, selecting the pixel points 0.1-10% of the brightness, then selecting the brightness of the original image at the same positions of the pixel points, and calculating the average value of the brightness of the pixel points to be used as an atmospheric light value, thereby avoiding overestimating the atmospheric light value. For example, in a real low-light image, if there is brighter light, such as light of a light or moonlight, the atmospheric light value will not be brightest, in which case noise may be introduced if we select the "brightest pixel" as atmospheric light. To avoid such problems, in this implementation, we have chosen the average intensity based on the largest channel image as our estimate. The experimental result shows that the quality of the enhanced image is improved, and the maximum channel image is more easily resistant to the interference of lamplight and moonlight than the dark channel image. By the calculation mode, the atmospheric illumination intensity can be calculated quickly, and the real-time performance of real vehicle picture observation or calculation is met.
Although the average value of the preset number of luminances as the whole atmospheric light intensity is solved based on the estimation of the bright channel in the above calculation, in practice, the atmospheric light intensity does not remain unchanged, and particularly, the light intensity is insufficient, such as at night. Due to the lack of illumination from a large area in the sky, due to spatially varying illumination and the lack of illumination from the sky in a local area at night, the prior art method of using the maximum value for the calculation has a large error.
It is assumed that the human eye perceives the brightness of an object from ambient illumination and reflection from the surface of the object. Mathematically, the restored image J can be written as the product of the atmospheric illumination intensity a and the reflectance R:
J(x,y)=A(x,y)R(x,y);
in the above equation, J (x, y) represents a restored image, a (x, y) represents an atmospheric illumination intensity, R (x, y) represents a reflectance, and x, y represent two-dimensional coordinates of a pixel.
The inverted image can be expressed as:
Irevert(x,y) = A(x,y)·(R(x,y)·t(x,y) + (1 - t(x,y)));
in the above formula, Irevert represents the inverted version of the image captured by the camera, A(x,y) represents the atmospheric illumination intensity, R(x,y) the reflectance, and t(x,y) the transmission rate.
Conventional algorithms estimate the low-frequency component A(x,y) using Gaussian filtering, but Gaussian smoothing is isotropic and does not preserve edges, which leads to inaccurate results and the loss of some local information.
To solve the technical problem of smoothing the image while preserving edges, another method for calculating the atmospheric illumination intensity in step S3 of this embodiment is as follows:
Step S31, from the preset maximum-channel and minimum-channel values, compute their difference, recorded as the difference value Id:
The maximum-channel value is:
Imax(x,y) = max{Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y)}
The minimum-channel value is:
Imin(x,y) = min{Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y)}
Id(x,y) = Imax(x,y) - Imin(x,y)
In step S32, the atmospheric illumination intensity A(x,y) is taken to be linear in the difference value within a local region centred on pixel k:
A(x,y) = ak·Id(x,y) + bk, (x,y) ∈ Ψk;
in the above formula, ak, bk are coefficients in the local region Ψk, constant within the region in this implementation.
Step S33, establish an objective function between the atmospheric illumination intensity A(x,y) and the brightness of the inverted image, and solve for ak, bk:

E(ak, bk) = Σ_{(x,y)∈Ψk} [ (ak·Id(x,y) + bk - I(x,y))² + λ·ak² ]

E(ak, bk) represents the objective function, and λ is an error-adjustment factor;
solving the objective function by linear regression or the least-squares method gives ak, bk:

ak = [ (1/|Ψk|) Σ_{(x,y)∈Ψk} Id(x,y)·I(x,y) - μk·Īk ] / (σk + λ)

bk = Īk - ak·μk

In the above formulas, μk and σk are respectively the mean and variance of the difference values Id(x,y) of all pixels in the preset local region Ψk; I(x,y) is the brightness of a pixel in the inverted image; Īk is the average of I(x,y) over all pixels in the local region Ψk; |Ψk| is the total number of pixels in the local region Ψk; λ is the error-adjustment factor.
Step S34, when calculating the final atmospheric illumination intensity A, it is computed block by block with a preset block as the unit, taking the in-block average as the final atmospheric illumination intensity of the block:

A_Ψk = (1/|Ψk|) Σ_{(x,y)∈Ψk} (ak·Id(x,y) + bk)

In the above formula, A_Ψk is the atmospheric illumination intensity in Ψk, and |Ψk| is the total number of pixels in the region;
This calculation avoids the final imaging distortion caused in the prior art by substituting a single maximum value for the atmospheric illumination intensity of the whole image, and at the same time solves the technical problem of smoothing the image while preserving edges, giving a better imaging effect.
The image transmission rate is another critical factor. Some prior-art methods for calculating it still have large errors, especially in the presence of lamp light, moonlight, and other bright objects: selecting, as the initial transmission rate, the value that maximizes image contrast leaves the enhancement of bright foreground objects insignificant. To solve this problem, one possible calculation method is:
in step S4, one way of calculating the transmission rate includes:
step S41, obtain the grayscale image of the inverted image:
Igray(x,y) = 0.3·Irevert_R(x,y) + 0.59·Irevert_G(x,y) + 0.11·Irevert_B(x,y)
step S42, partition the grayscale image into blocks Ψk (e.g. 15x15 blocks);
step S43, block by block, based on the atmospheric illumination intensity A and the recovery formula
J(x,y) = (I(x,y) - A(x,y)) / t(x,y) + A(x,y),
set the transmission rate t(x,y) of the current block to 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9 in turn, compute the corresponding J(x,y) for each, and record as t1(x,y) the value whose J(x,y) maximizes the image contrast;
The contrast is calculated as:
C = (1/N) Σ_{(x,y)} (J(x,y) - J̄)²
where J(x,y) is the restored image generated at the preset transmission rate, J̄ = (1/N) Σ_{(x,y)} J(x,y) is the common-mode (mean) value of the restored image, and N is the number of pixels of each block corresponding to J(x,y);
step S44, the transmission rate of the current block is
tΨk(x,y) = p(x,y)·t1(x,y)
where p(x,y) represents a correction factor (its defining formula is given as an equation image in the original filing).
by introducing the correction factor, the final transmission rate can effectively enhance the information of objects such as lamplight and white spots and maintain the spatial continuity of the transmission rate, so that the restored scene image has a smoother visual effect.
One method of calculating the transmission rate includes:
in night photographing, unlike an image captured in the daytime, there are generally a plurality of light sources in a night image, and the color characteristics of a light source region (LR) are greatly different from those of a non-light source region (NR). Therefore, both LR and NR regions need to be considered in calculating the transmission rate.
Step S4A, obtaining the atmospheric illumination intensity value; the transmission rate tLR corresponding to the maximum channel of the light source region and the transmission rate tNR corresponding to the minimum channel of the non-light source region are calculated respectively as follows:
tLR(x,y) = max_{(x',y')∈Ψk} max_{c∈{r,g,b}} ( Ic(x',y') / Ac(x,y) )
tNR(x,y) = 1 - min_{(x',y')∈Ψk} min_{c∈{r,g,b}} ( Ic(x',y') / Ac(x,y) )
In the above formulas, Ic(x,y) denotes the value of the r, g, b channels of the inverted image for each pixel in a Ψk region containing the LR; Ψk represents a local region centered on pixel k; Ac(x,y) is the atmospheric illumination intensity of the r, g, b channels of each pixel in the Ψk region.
Note that the transmission rate of the maximum channel (valid in the light source region) and the transmission rate of the minimum channel (valid in the non-light source region) are each effective only in their own region, and they must be combined to calculate the final transmission rate. One prior-art method divides the light source and non-light source areas along a sharp boundary line in the image, which makes it difficult to decide the attribution of pixels near the boundary. In the present embodiment, a luminance-perception weighting method is proposed to calculate the probability α(x,y) that each pixel belongs to the light source region. In the light source area, a pixel has high intensity in at least one of the RGB channels; the higher this value, the more likely the pixel belongs to the light source area;
step S4B, obtaining the brightness perception coefficient α(x,y):
α(x,y) = max_{c∈{r,g,b}} Ic(x,y) / 255
step S4C, calculating the final transmission rate t (x, y):
t(x,y)=tLR(x,y)*α(x,y)+tNR(x,y)*(1-α(x,y))
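The luminance-perception weighting of steps S4B–S4C can be sketched as follows. A hedged illustration: the normalization of α by 255 for 8-bit data and the helper names are our assumptions, not taken verbatim from the patent.

```python
import numpy as np

def light_source_alpha(inverted_rgb):
    """Step S4B (our reading): per-pixel probability of belonging to a
    light source region, from the brightest of the three channels."""
    return np.max(inverted_rgb.astype(np.float64), axis=-1) / 255.0

def blend_transmission(t_lr, t_nr, alpha):
    """Step S4C: fuse the light-source and non-light-source transmissions
    with the per-pixel probability alpha in [0, 1]."""
    alpha = np.clip(alpha, 0.0, 1.0)
    return t_lr * alpha + t_nr * (1.0 - alpha)
```

Pixels that are certainly in a light source (α = 1) take tLR, pixels certainly outside (α = 0) take tNR, and boundary pixels get a smooth mixture, which avoids a hard LR/NR segmentation.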
in step S5, the acquiring of the restored image specifically includes:
substituting the three channel values Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y), the atmospheric illumination intensity A(x,y) and the transmission rate t(x,y) into the recovery formula respectively to obtain the recovery values Jrevert_R(x,y), Jrevert_G(x,y), Jrevert_B(x,y) of the three channels of the restored image Jrevert(x,y);
The calculation formula of the restored image is as follows:
Jrevert(x,y) = ( Irevert(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
the three-channel recovery values are calculated as follows:
Jrevert_R(x,y) = ( Irevert_R(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
Jrevert_G(x,y) = ( Irevert_G(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
Jrevert_B(x,y) = ( Irevert_B(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
In the above equations, if the calculated t(x,y) is close to 0, direct restoration severely distorts the color of the region, so a lower limit t0 is set; in this embodiment, t0 ranges from 0.1 to 0.15;
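The per-channel restoration of step S5 with the t0 clamp can be sketched as a one-liner (the function name is ours; the formula follows the recovery equation above):

```python
import numpy as np

def restore_channel(I_rev, A, t, t0=0.1):
    """Step S5: J = (I - A) / max(t, t0) + A for one channel.
    The lower limit t0 (0.1-0.15 in this embodiment) prevents the
    severe colour distortion that occurs as t approaches 0."""
    t_eff = np.maximum(np.asarray(t, dtype=np.float64), t0)
    return (np.asarray(I_rev, dtype=np.float64) - A) / t_eff + A
```

Note how a tiny transmission (e.g. t = 0.01) is clamped to 0.1, limiting the amplification of (I − A) to at most 10×.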
step S6, performing Gamma correction on the restored images Jrevert_R(x,y), Jrevert_G(x,y), Jrevert_B(x,y) to obtain the corrected images Jgamma_R(x,y), Jgamma_G(x,y), Jgamma_B(x,y);
Step S7, the corrected image is inverted to obtain an enhanced and clear visible light image Ioutput(x,y);
Ioutput_c(x,y) = 255 - Jgamma_c(x,y), c ∈ {R, G, B}
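Steps S6–S7 (gamma correction of the still-inverted restored image, then the final re-inversion) can be sketched like this; the gamma value 2.2 is our illustrative choice, since the patent does not fix one:

```python
import numpy as np

def gamma_and_invert(J, gamma=2.2):
    """Steps S6-S7: gamma-correct the restored (still inverted) image,
    then invert once more to obtain the enhanced visible-light output."""
    J = np.clip(np.asarray(J, dtype=np.float64), 0.0, 255.0)
    J_gamma = 255.0 * (J / 255.0) ** (1.0 / gamma)  # step S6
    return 255.0 - J_gamma                           # step S7: Ioutput = 255 - Jgamma
```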
The main purpose of the images captured by the camera is to help the driver identify moving objects and obstacles more clearly. Therefore, to quickly locate objects of interest in the image, an infrared camera and a visible light camera can be used together, with the target area to be processed selected from the images captured by the infrared camera.
Specifically, acquiring a region of interest from an infrared thermal imaging image;
The method for acquiring the region of interest comprises: sorting the brightness values of the pixels of the whole image and taking the pixels whose brightness ranks in the top fifty percent as the pixels of the region of interest; or detecting regions of vehicles, pedestrians and the like in the infrared thermal imaging image by means of target detection or similar techniques, and then taking the pixels of those regions as the pixels of the region of interest. In this embodiment, the pixel-brightness ordering method is used to obtain the region of interest.
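The pixel-brightness-ordering variant can be sketched as a threshold on the brightness quantile (the function name and the use of `np.quantile` are our assumptions):

```python
import numpy as np

def roi_mask_by_brightness(ir_image, keep_fraction=0.5):
    """Mark the brightest `keep_fraction` of infrared pixels as the
    region of interest (the pixel-brightness-ordering variant)."""
    threshold = np.quantile(ir_image, 1.0 - keep_fraction)
    return ir_image >= threshold
```

The returned boolean mask gives the pixel coordinates used to pick the matching region in the visible-light image.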
A vehicle-mounted image stitching processing method comprises the following steps:
acquiring an infrared image and a visible light image at the same visual angle;
selecting, from the visible light image, the region image whose coordinates match those of the region of interest in the infrared image, according to the pixel coordinates of the region of interest selected from the infrared image, and performing virtual exposure processing on it;
the infrared image and the visible light image are spliced left and right, and the spliced images can be sent to a screen for display to be watched by a driver.
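The left-right splice for display can be sketched as a horizontal concatenation (a minimal illustration; the patent does not specify the layout beyond "spliced left and right"):

```python
import numpy as np

def stitch_left_right(ir_image, vis_image):
    """Splice the infrared image and the virtually exposed visible
    image side by side for on-screen display; heights must match."""
    if ir_image.shape[0] != vis_image.shape[0]:
        raise ValueError("images must share the same height")
    return np.hstack([ir_image, vis_image])
```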
What has been described above is only a preferred embodiment of the present invention, and the present invention is not limited to the above examples. It will be clear to those skilled in the art that the form of this embodiment is not limiting, and the manner of adjustment is not restricted to it. Other modifications and variations that can be directly derived or conceived by those skilled in the art without departing from the basic concept of the invention are to be considered within the scope of the invention.

Claims (10)

1. An image processing apparatus for a vehicle-mounted image, comprising: at least one visible light camera and a microprocessor controller, the visible light camera being connected with the microprocessor controller, wherein the microprocessor controller includes a virtual exposure processing module for image processing, and the virtual exposure process of the module specifically includes: converting the collected image into an inverted image, acquiring the atmospheric illumination intensity and the transmission rate of the image, generating a restored image according to a preset image restoration model, correcting the restored image, inverting the corrected image, and obtaining the processed image;
the step of obtaining the atmospheric illumination intensity comprises: calculating the atmospheric illumination intensity block by block in units of preset blocks, and taking the average value within a block as the final atmospheric illumination intensity AΨk of the block:
AΨk = (1/|Ψk|) * Σ_{(x,y)∈Ψk} ( ak*Id(x,y) + bk )
In the above formula, AΨk represents the atmospheric illumination intensity at Ψk; |Ψk| denotes the total number of pixel points in the Ψk region; Id(x,y) is the difference value of each pixel in the Ψk local area; ak, bk are coefficients within the Ψk local region; (x,y) represents the two-dimensional coordinates of a pixel; the difference value is defined as the difference between the maximum-channel image value and the minimum-channel image value obtained from the inverted image.
2. The image processing device of the vehicle-mounted image according to claim 1, wherein when the illumination intensity or visibility of the current environment is lower than a preset threshold, the virtual exposure processing module is started to perform virtual exposure processing on the acquired visible light image.
3. The image processing device of the vehicle-mounted image according to claim 1, further comprising an infrared camera, wherein the infrared camera is connected with the micro-processing controller; when the illumination intensity or the visibility of the current environment is lower than a preset threshold value, acquiring an infrared image and a visible light image at the same visual angle, selecting an interested area from the infrared image and acquiring the coordinates of the interested area, and selecting an image of a matching area in the corresponding visible light image for virtual exposure processing according to the coordinates of the interested area selected from the infrared image.
4. The image processing apparatus for vehicle-mounted images according to claim 1, wherein the maximum-channel image value in the inverted image is calculated as: Imax(x,y) = max{Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y)};
the minimum-channel image value in the inverted image is calculated as: Imin(x,y) = min{Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y)};
the difference value is calculated as: Id(x,y) = Imax(x,y) - Imin(x,y);
In the above formulas, Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y) represent the values of the R, G, B channels of the inverted image, respectively.
5. The image processing apparatus for vehicle-mounted images according to claim 1, wherein the acquisition process of ak, bk comprises: establishing a linear relation between the atmospheric illumination intensity A(x,y) of the pixel points in the local region Ψk and the difference value, then establishing an objective function E(ak, bk) between the atmospheric illumination intensity A(x,y) and the brightness of the inverted image, and solving the objective function by linear regression or the least squares method to obtain ak, bk:
E(ak, bk) = Σ_{(x,y)∈Ψk} [ ( ak*Id(x,y) + bk - I(x,y) )² + λ*ak² ]
ak = ( (1/|Ψk|) * Σ_{(x,y)∈Ψk} Id(x,y)*I(x,y) - μk*Īk ) / ( σk² + λ )
bk = Īk - ak*μk
In the above formulas, μk and σk² are respectively the mean and variance of the difference values Id(x,y) of all pixels in the preset Ψk local region; I(x,y) is the luminance of the pixels of the inverted image; Īk is the average of I(x,y) over all pixels in the Ψk local region; |Ψk| is the total number of pixels in the Ψk local region; λ is an error adjustment factor.
6. The image processing apparatus for vehicle-mounted images according to claim 1, wherein the acquisition process of the transmission rate comprises:
step S4A, obtaining the transmission rate tLR corresponding to the maximum channel of the light source region and the transmission rate tNR corresponding to the minimum channel of the non-light source region, calculated respectively as follows:
tLR(x,y) = max_{(x',y')∈Ψk} max_{c∈{r,g,b}} ( Ic(x',y') / Ac(x,y) )
tNR(x,y) = 1 - min_{(x',y')∈Ψk} min_{c∈{r,g,b}} ( Ic(x',y') / Ac(x,y) )
In the above formulas, Ic(x,y) denotes the value of the r, g, b channels of the inverted image for each pixel in a Ψk region containing the LR or NR; Ψk represents a local region centered on pixel k; Ac(x,y) is the atmospheric illumination intensity of the r, g, b channels of each pixel in the Ψk region;
step S4B, obtaining a brightness perception coefficient alpha (x, y);
α(x,y) = max_{c∈{r,g,b}} Ic(x,y) / 255
step S4C, calculating the final transmission rate t (x, y):
t(x,y) = tLR(x,y)*α(x,y) + tNR(x,y)*(1-α(x,y)).
7. the image processing apparatus for vehicle-mounted images according to claim 1, wherein the process of acquiring the restored image comprises:
substituting the three channel values Irevert_R(x,y), Irevert_G(x,y), Irevert_B(x,y), the atmospheric illumination intensity A(x,y) and the transmission rate t(x,y) into a recovery formula respectively to obtain the recovery values Jrevert_R(x,y), Jrevert_G(x,y), Jrevert_B(x,y) of the three channels of the restored image Jrevert(x,y);
The calculation formula of the restored image is as follows:
Jrevert(x,y) = ( Irevert(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
the calculation formula of the values of the three-channel restored image is as follows:
Jrevert_R(x,y) = ( Irevert_R(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
Jrevert_G(x,y) = ( Irevert_G(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
Jrevert_B(x,y) = ( Irevert_B(x,y) - A(x,y) ) / max( t(x,y), t0 ) + A(x,y)
8. The image processing apparatus for vehicle-mounted images according to claim 1, wherein a further acquisition of the atmospheric illumination intensity comprises: acquiring the maximum-channel image, sorting the pixel points in the maximum-channel image by brightness, selecting the top 0.1%-10% brightest pixel points, taking the brightness of the original image at the same positions as the selected pixel points, and using the average of those brightness values as the atmospheric illumination intensity;
the acquisition of the maximum-channel image comprises: in the process of obtaining the RGB inverted image, forming an image from the per-pixel maximum values, calculated as:
Im=max{Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x,y)}
Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x, y) denote reverse images of R, G, B channels corresponding to the RGB images, respectively.
9. The image processing apparatus for vehicle-mounted images according to claim 1, wherein the acquisition process of the transmission rate comprises:
step S41, acquiring a grayscale image of the inverted image:
Igray(x,y) = 0.3*Irevert_R(x,y) + 0.59*Irevert_G(x,y) + 0.11*Irevert_B(x,y)
step S42, partitioning the grayscale image into blocks;
step S43, taking blocks as the unit, according to the atmospheric illumination intensity A and the formula
J(x,y) = ( Igray(x,y) - A ) / t(x,y) + A
setting the transmission rate t(x,y) of the current block to a plurality of values between 0.1 and 0.9 in turn, calculating the corresponding J(x,y) for each value, and recording the transmission rate that maximizes the image contrast as t1(x,y);
The formula for calculating the contrast is:
C = Σ_{(x,y)∈Ψk} ( J(x,y) - J̄ )² / N
wherein J(x,y) is the restored image generated at the given transmission rate,
J̄ = (1/N) * Σ_{(x,y)∈Ψk} J(x,y)
is the common-mode (mean) value of the restored image within the block, and N is the number of pixel points of the block corresponding to J(x,y);
step S44, the transmission rate of the current block is
t(x,y) = t1(x,y) * p(x,y)
Where p (x, y) represents a correction factor, defining:
(the defining formula of p(x,y) is given as an equation image in the original; by its role, p(x,y) increases the transmission correction for bright pixels such as lamplight and white highlights)
10. an automobile comprising the image processing apparatus for the on-vehicle image according to any one of claims 1 to 9.
CN202010595463.2A 2020-06-24 2020-06-24 Image processing device of vehicle-mounted image and automobile Active CN113840123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010595463.2A CN113840123B (en) 2020-06-24 2020-06-24 Image processing device of vehicle-mounted image and automobile


Publications (2)

Publication Number Publication Date
CN113840123A true CN113840123A (en) 2021-12-24
CN113840123B CN113840123B (en) 2024-05-31

Family

ID=78964980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010595463.2A Active CN113840123B (en) 2020-06-24 2020-06-24 Image processing device of vehicle-mounted image and automobile

Country Status (1)

Country Link
CN (1) CN113840123B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118138895A (en) * 2024-05-08 2024-06-04 深圳市安冉安防科技有限公司 Shot picture definition improving method and system based on infrared camera

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100021952A (en) * 2008-08-18 2010-02-26 삼성테크윈 주식회사 Image enhancement processing method and apparatus for distortion correction by air particle like fog
WO2014023231A1 (en) * 2012-08-07 2014-02-13 泰邦泰平科技(北京)有限公司 Wide-view-field ultrahigh-resolution optical imaging system and method
US20140177960A1 (en) * 2012-12-24 2014-06-26 Korea University Research And Business Foundation Apparatus and method of processing image
CN103914813A (en) * 2014-04-10 2014-07-09 西安电子科技大学 Colorful haze image defogging and illumination compensation restoration method
CN105787904A (en) * 2016-03-25 2016-07-20 桂林航天工业学院 Adaptive global dark channel prior image dehazing method for bright area
CN105991938A (en) * 2015-03-04 2016-10-05 深圳市朗驰欣创科技有限公司 Virtual exposure method, device and traffic camera
CN106530246A (en) * 2016-10-28 2017-03-22 大连理工大学 Image dehazing method and system based on dark channel and non-local prior
US20180225545A1 (en) * 2017-02-06 2018-08-09 Mediatek Inc. Image processing method and image processing system
CN108765342A (en) * 2018-05-30 2018-11-06 河海大学常州校区 A kind of underwater image restoration method based on improvement dark
US20190287219A1 (en) * 2018-03-15 2019-09-19 National Chiao Tung University Video dehazing device and method
CN111800586A (en) * 2020-06-24 2020-10-20 上海赫千电子科技有限公司 Virtual exposure processing method for vehicle-mounted image, vehicle-mounted image splicing processing method and image processing device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SUDEEP D. THEPADE等: "Improved Haze Removal Method Using Proportionate Fusion of Color Attenuation Prior and Edge Preserving", 《2018 FOURTH INTERNATIONAL CONFERENCE ON COMPUTING COMMUNICATION CONTROL AND AUTOMATION (ICCUBEA)》 *
杨爱萍等: "基于统计特性和亮度估计的夜晚图像去雾", 《天津大学学报 》, no. 3 *
赵宏宇: "雾天图像清晰化技术的研究", 《中国博士论文全文数据库》, no. 3 *


Also Published As

Publication number Publication date
CN113840123B (en) 2024-05-31

Similar Documents

Publication Publication Date Title
CN108515909B (en) Automobile head-up display system and obstacle prompting method thereof
CN108460734B (en) System and method for image presentation by vehicle driver assistance module
CN109435852B (en) Panoramic auxiliary driving system and method for large truck
CN102231206B (en) Colorized night vision image brightness enhancement method applicable to automotive assisted driving system
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN105913390B (en) A kind of image defogging method and system
CN105374013A (en) Method and image processing apparatus for image visibility restoration on the base of dual dark channel prior
CN107360344B (en) Rapid defogging method for monitoring video
CN109584176B (en) Vision enhancement system for motor vehicle driving
CN110099268B (en) Blind area perspective display method with natural color matching and natural display area fusion
CN104766286A (en) Image defogging device and method based on pilotless automobile
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN104331867B (en) The method, device and mobile terminal of image defogging
Cheng et al. Visibility enhancement of single hazy images using hybrid dark channel prior
Choi et al. Fog detection for de-fogging of road driving images
CN111800586B (en) Virtual exposure processing method for vehicle-mounted image, vehicle-mounted image splicing processing method and image processing device
CN106780362B (en) Road video defogging method based on dichromatic reflection model and bilateral filtering
CN113840123B (en) Image processing device of vehicle-mounted image and automobile
CN107437241B (en) Dark channel image defogging method combined with edge detection
CN111491103B (en) Image brightness adjusting method, monitoring equipment and storage medium
CN112465720A (en) Image defogging method and device based on image sky segmentation and storage medium
KR101535630B1 (en) Apparatus for enhancing the brightness of night image using brightness conversion model
US20230171510A1 (en) Vision system for a motor vehicle
CN111028184B (en) Image enhancement method and system
Hautière et al. Free Space Detection for Autonomous Navigation in Daytime Foggy Weather.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant