WO2017071996A1 - Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle - Google Patents

Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle

Info

Publication number
WO2017071996A1
WO2017071996A1 (PCT/EP2016/075011)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
motor vehicle
partial area
area
Prior art date
Application number
PCT/EP2016/075011
Other languages
French (fr)
Inventor
Patrick Eoghan Denny
Aidan Casey
Brian Michael Thomas DEEGAN
Ciáran HUGHES
Jonathan Horgan
Original Assignee
Connaught Electronics Ltd.
Priority date
Filing date
Publication date
Application filed by Connaught Electronics Ltd.
Publication of WO2017071996A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Definitions

  • a further image is captured by the camera and a further partial area is determined for the further image, wherein in dependence on the further partial area a further selection area is determined, preferably positioned completely within the further partial area.
  • the camera parameter and/or the further image are adjusted each in dependence on a property of the further image, in particular in dependence on a property featured only in the further selection area.
  • the adjustment or determination of the further partial area and/or of the further selection area are preferably carried out in real time or else in the time span remaining between single consecutive camera recordings.
  • the real time frame is predefined, for example, by the recording frequency or frame rate of the camera, which can be 30 Hz or 60 Hz, in particular 29.97 Hz, for example.
  • the partial area and/or the selection area are dynamically adjusted to the current situation.
  • the quality of the image can be maintained in continual compliance with qualitative requirements, e.g. the requirement that there should be no underexposure or overexposure.
  • the driver assistance system can be operated more precisely and the safety of the motor vehicle can be increased.
  • the image is adjusted only within the selection area.
  • the adjustment of the image can be limited solely to the selection area. This means, for example, that preferably the image is not adjusted outside the selection area. This has the advantage that thus accelerated processing is enabled in the adjustment of the image.
  • this can also mean that for the adjustment of the camera parameter only the selection area is taken into consideration and not the image within the partial area, but outside the selection area.
  • the selection area can also coincide with the partial area.
  • an output image is generated, in particular through rectification of the image within the partial area, and that the output image is outputted on a display unit of the motor vehicle.
  • the image is thus, for example, a raw image.
  • the partial area can then, for example, be determined in the image.
  • the partial area can feature a distortion resultant, for example, from the above-mentioned fisheye objective lens. Therefore, the partial area of the image can then, for example, be rectified and provided as an output image for the output on the display unit.
  • the term rectification describes, for example, the elimination of geometrical distortions in image data. This can be effected, for example, by means of a transformation of the partial area.
  • the output of the partial area as the output image can be less distorted and consequently more illustrative.
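The rectification step can be illustrated with a minimal inverse-mapping sketch. It assumes an equidistant fisheye model (r = f * theta) and nearest-neighbour sampling; the model, the focal length and the function name are illustrative assumptions, since the patent does not specify a particular transformation:

```python
import numpy as np

def rectify(fisheye_img, f=100.0, out_shape=(120, 160)):
    """Rectify a crop of an equidistant-fisheye image (r = f * theta)
    to a pinhole view by inverse mapping with nearest-neighbour sampling.
    """
    h_in, w_in = fisheye_img.shape[:2]
    cx_in, cy_in = w_in / 2.0, h_in / 2.0
    h_out, w_out = out_shape
    out = np.zeros(out_shape, dtype=fisheye_img.dtype)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    # Pinhole ray for each output pixel (normalised image coordinates).
    x = (xs - w_out / 2.0) / f
    y = (ys - h_out / 2.0) / f
    r_pinhole = np.hypot(x, y)
    theta = np.arctan(r_pinhole)          # angle from the optical axis
    r_fish = f * theta                    # equidistant fisheye projection
    with np.errstate(invalid="ignore", divide="ignore"):
        scale = np.where(r_pinhole > 0, r_fish / r_pinhole, 1.0)
    # Sample the fisheye image at the back-projected positions.
    u = np.clip(np.round(cx_in + x * f * scale), 0, w_in - 1).astype(int)
    v = np.clip(np.round(cy_in + y * f * scale), 0, h_in - 1).astype(int)
    out[ys, xs] = fisheye_img[v, u]
    return out
```

In a production system the inverse map would be precomputed once per viewport and reused every frame, since the partial area changes far more slowly than the image content.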
  • As a camera parameter of the camera, an exposure time and/or a gain setting and/or a white balance setting and/or a gamma value can be adjusted.
  • Exposure time refers to the time span in which a photosensitive medium, e.g. a CMOS sensor or a CCD sensor, is exposed to the light for image capturing.
  • an f-number and/or a light sensitivity of the camera can be adjusted.
  • the light sensitivity is for example adjusted via the gain setting of the camera.
  • Via the gain setting it is predefined to what extent the incident amount of light is amplified.
  • the white balance of the white balance setting serves to sensitize the camera to the colour temperature of the light at the recording location, that is, to the colour temperature of the environmental region of the motor vehicle, for example.
  • the gamma value or gamma correction is a correction function for transforming a proportionally increasing physical quantity into a quantity that, according to human perception, does not grow on a linear scale. In mathematical terms this function is a power function whose only parameter is an exponent often referred to, for short, as the gamma value.
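The power function described above can be written out directly; the default exponent of 2.2 is the common display convention and an assumption here, not a value specified by the patent:

```python
def gamma_correct(value, gamma=2.2):
    """Apply the gamma power function to a normalised intensity in [0, 1].

    Encoding uses the exponent 1/gamma, so a linear sensor value of 0.5
    maps to a brighter perceptual code value; gamma=2.2 is the common
    display convention, not a value taken from the patent.
    """
    if not 0.0 <= value <= 1.0:
        raise ValueError("value must be normalised to [0, 1]")
    return value ** (1.0 / gamma)
```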
  • the further image can be captured such that at least the further selection area and/or the further partial area can be provided with the desired image properties. This is possible because the scene or the captured part of the environmental region do not usually change so abruptly that a completely different scene is captured by the image and the further image captured subsequently thereto.
  • the camera can also be a time of flight camera or a hyperspectral camera or a plenoptic camera.
  • the camera can also have a plurality of sensors.
  • By means of the hyperspectral camera, more than just the three usual frequency bands red, green and blue can be captured.
  • electromagnetic radiation from the infrared wavelength range, in particular with a plurality of frequency bands, can be captured. It is advantageous that colours in the environmental region can be captured by the hyperspectral camera in a more precise and differentiated manner, in particular if the camera is directed towards the sun.
  • the system may be more sensitive to particular colours depending on whether the motor vehicle is driving towards or away from the sunlight.
  • the partial area may be moved away from a region of the image containing the sun. This can be determined uniquely from a combination of the driving direction of the motor vehicle, the current position of the motor vehicle, e.g. by global navigation satellite system (GNSS) of the motor vehicle, and the current time for example.
  • the image is adjusted, in particular only, in the selection area by brightness setting and/or colour setting and/or contrast setting and/or a noise reduction method.
  • the image can thus, for example, be adjusted with regard to its brightness by brightness setting.
  • the selection area can thus for example be adjusted to be brighter or darker than before.
  • tone mapping or dynamic compression refers to the compression of the dynamic range of high dynamic range images, i.e. of images with a high brightness range.
  • tone mapping the contrast range of a high dynamic range image is reduced in order to be able to display it on conventional display devices. For example, as a result of the adjustment of the image within the selection area, tone mapping can be utilized more simply and effectively.
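As one concrete, assumed choice of such a compression curve (the patent does not prescribe a particular operator), the global Reinhard operator L/(1+L) maps any luminance range into [0, 1):

```python
import numpy as np

def tone_map(hdr_luminance, exposure=1.0):
    """Compress a high-dynamic-range luminance array into [0, 1) with the
    global Reinhard operator L / (1 + L); `exposure` pre-scales the input.
    This is one common curve, chosen here for illustration only.
    """
    scaled = exposure * np.asarray(hdr_luminance, dtype=float)
    return scaled / (1.0 + scaled)
```

Because the output stays below 1 for any finite input, the result can be quantised for a conventional display device without clipping.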
  • the contrast of the image, in particular in the selection area can be adjusted by contrast setting.
  • a smoothing filter such as a Gaussian filter can be applied in order to suppress image noise, in particular in the selection area, by means of the noise reduction method.
  • noise can also, for the time being, be deliberately kept in the image, in particular in the selection area. It is advantageous for several machine vision applications that noise is not immediately removed from the image, as methods based on noise statistics, which are for instance used by a "back-end" machine vision algorithm, may remove noise in a qualitatively better way than fast "front-end" algorithms which are for instance implemented in the camera.
  • edges can, for example, be drawn more sharply in the image, in particular in the selection area, e.g. by means of so-called image sharpness methods. It is thus advantageous that within the partial area the image can be variously adjusted to enable an output of the partial area on the display unit in better quality and/or to apply a method of machine vision more precisely and correctly to the partial area and/or the selection area.
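A noise reduction step restricted to the selection area, as described above, can be sketched with a separable Gaussian filter in plain NumPy; the kernel parameters and function names are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=2):
    """Normalised 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def denoise_selection(image, selection, sigma=1.0):
    """Apply separable Gaussian smoothing only inside the rect
    `selection` = (x, y, w, h); the rest of the image is left untouched,
    mirroring the idea of limiting the adjustment to the selection area.
    """
    x, y, w, h = selection
    k = gaussian_kernel(sigma)
    patch = image[y:y + h, x:x + w].astype(float)
    # Separable convolution: filter rows first, then columns.
    patch = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, patch)
    patch = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, patch)
    out = image.astype(float).copy()
    out[y:y + h, x:x + w] = patch
    return out
```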
  • the selection area is divided into a plurality of selection subareas and the camera parameter and/or the image are adjusted in dependence on the plurality of selection subareas.
  • the selection subareas can, for example, have the same size.
  • the selection subareas are in particular arranged in the manner of a matrix. By means of the selection subareas the selection area can be adjusted more precisely and variously.
  • the selection area can, for example, comprise only one of the selection subareas or else also a plurality of selection subareas. The larger the number of selection subareas utilized, the higher the precision with which the adjustment of the camera parameter and/or the captured image can be performed.
  • the selection subareas can also, for example, have different sizes and/or be weighted differently when evaluated.
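Metering over a matrix of selection subareas with optional per-subarea weights might look as follows; the four-by-four default follows the example above, everything else is an assumption:

```python
import numpy as np

def subarea_metering(image, selection, grid=(4, 4), weights=None):
    """Weighted mean brightness of the selection area, metered per subarea.

    The selection rect (x, y, w, h) is split into a grid of subareas
    (a four-by-four matrix by default); `weights` allows the subareas
    to be weighted differently when evaluated.
    """
    x, y, w, h = selection
    rows, cols = grid
    if weights is None:
        weights = np.ones(grid)
    weights = np.asarray(weights, dtype=float)
    means = np.empty(grid)
    for i in range(rows):
        for j in range(cols):
            y0, y1 = y + i * h // rows, y + (i + 1) * h // rows
            x0, x1 = x + j * w // cols, x + (j + 1) * w // cols
            means[i, j] = image[y0:y1, x0:x1].mean()
    return float((means * weights).sum() / weights.sum())
```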
  • weightings for a plurality of camera parameters are determined and the plurality of camera parameters are adjusted in dependence on the determined weightings.
  • This can be utilized, for example, if a total image is provided by a plurality of cameras. The total image is, for example, joined together from a plurality of images in order to generate a top view image. In this context, it is usually of importance that the top view image is homogeneous.
  • different weightings can be accorded to different selection areas.
  • the selection areas can, for example, be determined with regard to their size and position such that in the generation of the top view image they fit together and thus, subsequent to the merging, a homogeneous total image or top view image is generated.
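A minimal sketch of such a homogenisation, assuming each camera reports the mean brightness of its selection area and is corrected by a scalar gain (the patent does not prescribe this particular scheme):

```python
def homogenise_gains(selection_means, target=None):
    """Per-camera gains that pull the mean brightness of each camera's
    selection area to a common value, so the stitched top view image
    looks homogeneous.  By default the target is the average over all
    cameras; this weighting is an illustrative choice.
    """
    if target is None:
        target = sum(selection_means) / len(selection_means)
    return [target / max(m, 1e-6) for m in selection_means]
```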
  • the invention relates to a computer program product for performing a method according to the invention if the computer program product is executed on a programmable computer device.
  • the invention relates to a camera system for a motor vehicle with a camera and an evaluation unit.
  • the camera system is configured to perform a method according to the invention.
  • the camera system can also comprise a plurality of cameras and/or evaluation units.
  • the invention also relates to a driver assistance system for a motor vehicle with a camera system according to the invention.
  • the invention moreover relates to a motor vehicle with a driver assistance system according to the invention.
  • Fig. 1 a schematic top view of an embodiment of a motor vehicle according to the invention with a driver assistance system;
  • Fig. 2 a schematic view of an image with a partial area, captured by a camera of a camera system of the driver assistance system;
  • Fig. 3 a schematic view of an output image generated on the basis of the partial area;
  • Fig. 4 a schematic view of the image with the partial area and a selection area having a plurality of selection subareas, which is positioned completely within the partial area; and
  • Fig. 5 a schematic view of the image with the partial area, which is displayed on a display unit of the motor vehicle and provided for further processing by means of a method of machine vision.
  • Fig. 1 shows schematically a motor vehicle 1 with a driver assistance system 2.
  • the driver assistance system 2 comprises a camera system 3.
  • the camera system 3 further comprises a camera 4 and an evaluation unit 5.
  • the evaluation unit 5 can, for example, be integrated into the camera 4 or else formed separately from the camera 4.
  • the camera 4 is arranged at a front 6 of the motor vehicle 1.
  • the camera 4 can be variously arranged, preferably, however, such that an environmental region 7 of the motor vehicle 1 can be captured at least partially.
  • the camera system 3 can also comprise a plurality of cameras 4. Thus, also a plurality of cameras 4 can be arranged at the motor vehicle 1.
  • the camera 4 can be a CMOS camera (complementary metal-oxide-semiconductor) or a CCD camera (charge coupled-device) or any other image capturing device, for example a time of flight camera, a hyperspectral camera, a plenoptic camera or in particular an infrared camera.
  • the camera 4 provides an image sequence of images of the environmental region 7.
  • the image sequence of the images is then, for example, processed in real time by the evaluation unit 5.
  • the driver assistance system 2 can, for example, comprise a top view image system and/or an object detection system and/or an obstacle warning system. Moreover, the driver assistance system 2 can also be formed as a camera monitoring system (CMS). In the camera monitoring system the camera 4 can, for example, additionally be arranged in a left side mirror 8 of the motor vehicle 1 and/or in a right side mirror 9 of the motor vehicle 1 and/or at a rear end 10 of the motor vehicle 1 .
  • Fig. 2 shows an image 11.
  • the image 11 is captured by the camera 4.
  • the image 11 is a so-called raw image and shows a representation of the environmental region 7 and of the motor vehicle 1.
  • the image 11 was captured by means of a fisheye objective lens of the camera 4.
  • the environmental region 7 is represented in the image 11 with a horizontal angle or an azimuthal angle of more than 180°, for example.
  • the motor vehicle 1 itself is shown in part.
  • a partial area 12 is determined.
  • the partial area 12 is defined such that it is displayed on a display unit 13 - as shown in Fig. 1 - and/or provided for further processing by means of a method of machine vision.
  • the partial area 12 can also be described as a field of vision.
  • machine vision usually describes the computer-aided solution of tasks capable of being solved by the human visual system. This includes, for example, the detection of objects in the environmental region 7.
  • the further step can be the display of the partial area 12 on the display unit 13 or, additionally or alternatively, the further processing of the partial area 12 by means of a method of machine vision.
  • the partial area 12 is arranged such that within the partial area 12 only the environmental region 7 is represented. It is thus in particular not intended that parts of the motor vehicle 1, which are shown in the image 11, are arranged within the partial area 12.
  • Fig. 3 shows an output image 14.
  • the output image 14 is then outputted on the display unit 13.
  • the output image 14 is generated on the basis of the partial area 12.
  • the partial area 12 of the image 11 is rectified or corrected through transformation.
  • the rectification can be necessary in order to reduce distortions which can, for example, result from a fisheye objective lens of the camera 4 in the partial area 12.
  • Fig. 4 shows the image 1 1 with the partial area 12.
  • a selection area 15 is arranged completely within the partial area 12.
  • a size and/or a position of the selection area 15 within the partial area 12 can, for example, depend on a driving direction of the motor vehicle 1.
  • the selection area 15 is divided into selection subareas 16.
  • the selection subareas 16 can, for example, be arranged within the selection area 15 in various numbers and sizes.
  • the selection subareas 16 can be arranged within the selection area 15 in the manner of a matrix, for example.
  • the selection subareas 16 can, for example, be arranged in a four-by-four matrix.
  • the embodiment of the method according to the invention is executed as follows.
  • the image 1 1 is captured by the camera 4.
  • the partial area 12 is determined.
  • the partial area 12 is determined such that it is either displayed on the display unit 13 and thus, for example, converted into the output image 14, or else utilized for further processing by means of a method of machine vision.
  • the selection area 15 is determined completely within the partial area 12.
  • the selection area 15 cannot be determined unless the partial area 12 has already been determined and is thus known.
  • a property of the image 1 1 in the selection area 15 is determined.
  • the property of the image 11 can, for example, be a brightness or brightness distribution of the image 11 within the partial area 12.
  • a camera parameter of the camera 4 is adjusted and/or the image 11 itself is adjusted.
  • the camera parameter can, for example, be an exposure time and/or a gain setting and/or a white balance setting and/or a gamma value. If the camera parameter is adjusted in dependence on the property of the image 11, the adjusted camera parameter is then utilized for the capturing of a further image by means of the camera 4. With regard to the camera parameter, the further image is thus configured for high-quality capturing of a further partial area of the environmental region 7 previously represented by the partial area 12. Consequently, with regard to a further partial area and thus also with regard to a further selection area, the further image can be captured more precisely and in better quality than would be the case if the camera parameter had not been adjusted in dependence on the property of the image 11 in the selection area 15.
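The feedback from the metered selection area to the exposure time for the next frame can be sketched as a damped proportional controller; the target value, smoothing factor and exposure limits are illustrative assumptions, not values from the patent:

```python
def update_exposure(exposure_ms, selection_mean, target=128.0,
                    smoothing=0.5, lo=0.1, hi=33.0):
    """One auto-exposure step: scale the exposure time so that the mean
    brightness metered in the selection area approaches `target`.

    `smoothing` damps the update to avoid frame-to-frame oscillation;
    the limits model the sensor's shortest and longest exposure (here
    bounded by a 30 Hz frame period).  All constants are illustrative.
    """
    desired = exposure_ms * target / max(selection_mean, 1e-6)
    updated = exposure_ms + smoothing * (desired - exposure_ms)
    return min(max(updated, lo), hi)
```

The returned value would then be applied when capturing the further image, so that the further partial area is well exposed even if areas outside it end up under- or overexposed.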
  • the image 11 can also be adjusted within the selection area 15.
  • the image 11 can, for example, be adjusted within the partial area 12 and in particular only within the selection area 15 by brightness setting and/or colour setting and/or contrast setting and/or a noise reduction method and/or a plurality of further image processing methods.
  • the image 11 can then be adjusted in the area relevant for the user, in particular the driver of the motor vehicle 1, i.e. in the partial area 12, such that in the image 11 the partial area 12 receives preferred treatment with regard to the qualitative representation.
  • the adjustment of the camera parameter and/or the adjustment of the image 11 in dependence on the partial area 12 and/or the selection area 15 can also be utilized for the generation of a top view image, in which a plurality of images 11 are joined together.
  • different parts of the plurality of images 11 can be utilized.
  • each utilized part of the respective image 11 can, for example, be described as the partial area 12 and/or the selection area 15.
  • the determination of the partial area 12 and/or the selection area 15 is in particular performed in real time. This means that the partial area 12 and/or the selection area 15 are thus adjusted, in particular for each image 11 of an image sequence captured by the camera 4, within the time span which remains until the next image 11 of the image sequence is provided.
  • Fig. 5 shows the image 11 with the partial area 12.
  • the selection area 15 corresponds to the partial area 12.
  • the selection area 15 comprises only one of the selection subareas 16.
  • a method of machine vision is performed.
  • a plurality of objects 17 of the environmental region 7 are detected.
  • the image 11 and/or the camera settings for the further image can be adjusted such that the method of machine vision can be applied to the partial area 12 more correctly and precisely.
  • the partial area 12 receives preferred treatment with regard to the area of the image 1 1 positioned outside the partial area 12.
  • the area positioned outside the partial area 12 is underexposed or overexposed subsequent to the adjustment of the image.
  • the area of the further image positioned outside the further partial area is underexposed or overexposed because the camera parameter of the camera 4 has been adjusted correspondingly on the basis of the preceding image 11.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a method for adjusting at least one camera parameter of a camera (4) of a motor vehicle (1) and/or an image (11) captured by the camera (4), where an environmental region (7) of the motor vehicle (1) is captured by the camera (4) at least partially and the image (11) is recorded of the captured environmental region (7), wherein a partial area (12) of the image (11) is determined, which is displayed on a display unit (13) of the motor vehicle (1) and/or provided for further processing by means of a method of machine vision, wherein a selection area (15) positioned completely within the partial area (12) is determined and the at least one camera parameter and/or the image (11) are adjusted in dependence on a property of the image (11) in the selection area (15).

Description

Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle
The invention relates to a method for adjusting at least one camera parameter of a camera of a motor vehicle and/or an image captured by the camera. An environmental region of the motor vehicle is captured by the camera at least partially. The image is recorded of the captured environmental region. The invention also relates to a computer program product, a camera system for a motor vehicle, a driver assistance system for a motor vehicle and a motor vehicle with a driver assistance system.
Methods for adjusting at least one camera parameter of a camera of a motor vehicle and/or an image captured by the camera are known from the prior art. In this respect, an area is usually defined in the image, in which properties of the image are determined, on the basis of which in turn the camera parameter can be adjusted. An example thereof is the automatic exposure control of known cameras. Further, however, the image can also be locally adjusted in the defined area. Thus, in the defined area for example an increase of contrast can be effected.
It is the object of the invention to provide a method, a computer program product, a camera system, a driver assistance system and a motor vehicle, by means of which or where information of an environmental region of a motor vehicle producible by a camera is provided in a qualitatively improved manner.
This object is solved according to the invention by a method, a computer program product, a camera system, a driver assistance system and a motor vehicle having the features of the respective independent claims.
In a method according to the invention at least one camera parameter of a camera of a motor vehicle and/or an image captured by the camera are adjusted. An environmental region of the motor vehicle is captured by the camera at least partially. The image is recorded of the captured environmental region. It is envisaged as an essential idea of the invention that a partial area of the image is determined, which is displayed on a display unit of the motor vehicle and/or provided for further processing by means of a method of machine vision. A selection area positioned completely within the partial area is determined. The at least one camera parameter and/or the image are adjusted in dependence on a property of the image in the selection area. As a result of the determination of the partial area and of the selection area, which is in turn completely positioned within the partial area, the camera parameter and/or the image can be adjusted more effectively and in better quality.
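The essential idea of the claim, metering only a selection area that lies completely inside the displayed partial area and adjusting the image accordingly, can be sketched as follows; the rect conventions, the target brightness of 128 and all names are illustrative assumptions:

```python
import numpy as np

def mean_brightness(image, rect):
    """Mean luminance of `image` inside rect = (x, y, w, h)."""
    x, y, w, h = rect
    return float(image[y:y + h, x:x + w].mean())

def adjust_for_selection(image, partial, selection, target=128.0):
    """Return a brightness-corrected copy of `image`.

    `partial` is the viewport rect shown on the display unit; `selection`
    is given in full-image coordinates but must lie completely inside
    `partial`.  Only the selection area is metered.
    """
    px, py, pw, ph = partial
    sx, sy, sw, sh = selection
    # Claimed invariant: the selection area is positioned completely
    # within the partial area (the viewport).
    assert px <= sx and py <= sy and sx + sw <= px + pw and sy + sh <= py + ph
    measured = mean_brightness(image, selection)
    gain = target / max(measured, 1e-6)
    # Apply the gain metered in the selection area to the whole image;
    # regions outside the partial area may end up over- or underexposed,
    # which is acceptable because they are neither displayed nor processed.
    return np.clip(image * gain, 0, 255)
```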
By means of the partial area a viewport of the image is described. In this context, the partial area can, for example, be defined such that subsequent to further possible processing steps this area of the image is outputted on a display unit of the motor vehicle. However, the partial area can also be defined such that this area is utilized, for example, for the further processing according to a method of machine vision. In particular, the information relevant for the display and/or the further processing is present only in the partial area. It is thus sufficient if the adjustment of the camera parameter and/or the image takes into account only the area of the image within the partial area. This can, for example, result in an underexposed display of areas outside the partial area. However, in that case these underexposed areas outside the partial area are not relevant, since preferably they are utilized neither for being displayed on the display unit nor for further processing by means of the method of machine vision.
Within the partial area the selection area is then determined. By means of the selection area a focus of interest can then in turn be directed to a local area within the partial area. In the selection area a brightness of the image can, for example, be determined so that the camera parameter which influences the brightness of an image captured by the camera can be adjusted. Additionally or alternatively, the image can also be adjusted on the basis of the property of the image in the selection area. Thus, for example, a brightness of the image can be increased so that the partial area is suitable for being displayed on the display unit and/or for further processing according to the method of machine vision. This can be performed even if in consequence thereof an area of the image outside the partial area is rendered too dark for being displayed and/or for further processing.
Thus it is in particular envisaged that the partial area is continually adjusted image by image and that thus for each image the camera parameter and/or the image can be adjusted with regard to the relevant area, i.e. the partial area.
Thus, in contrast to the prior art, the selection area is in particular determined each time completely within the partial area. If the partial area is not known, or if the selection area is determined independently of the partial area or the viewport, it cannot be ensured that the camera parameter and/or the image are adjusted such that the partial area can be displayed on the display unit and/or provided for further processing according to the method of machine vision entirely in better quality.
It is preferably provided that a size and/or a position of the selection area within the partial area are determined in dependence on a driving direction of the motor vehicle. With regard thereto, the selection area can be placed within the partial area where there might be obstacles for the motor vehicle. In this context, for example, objects in the
environmental region are not relevant obstacles unless they are positioned in the driving direction of the motor vehicle. In addition to the determination of the driving direction, a speed of the motor vehicle can also, for example, be determined. In dependence on the speed of the motor vehicle, a driving trajectory of the motor vehicle can then be predicted, for example in connection with other sensor data of the motor vehicle, e.g. of a steering angle sensor. Accordingly, additionally or alternatively, the size of the selection area can also be adjusted. Thus, for example, the camera parameter and/or the image can be adjusted particularly precisely for the partial area, in particular the selection area. Thus, for example, a driver assistance system of the motor vehicle can be operated more precisely and the safety of the motor vehicle is improved.
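A possible heuristic for positioning and sizing the selection area within the partial area in dependence on the driving direction and speed can be sketched as follows (illustrative only; all thresholds and the mapping from steering angle to pixel offset are assumptions, not taken from the disclosure):

```python
def place_selection_area(partial, steering_deg, speed_kmh):
    """Position and size a selection area inside a partial area.

    partial: (top, left, height, width) of the partial area in pixels.
    Returns a (top, left, height, width) rectangle, shifted towards the
    predicted driving direction and enlarged with speed.
    """
    p_top, p_left, p_h, p_w = partial
    # Larger look-ahead window at higher speed (capped at half the area).
    s_h = min(p_h // 2, p_h // 4 + int(p_h * min(speed_kmh, 100) / 400))
    s_w = min(p_w // 2, p_w // 4 + int(p_w * min(speed_kmh, 100) / 400))
    # Shift horizontally with the steering angle (-45..45 deg -> range).
    centre_x = p_left + p_w // 2 + int((steering_deg / 45.0) * (p_w // 4))
    left = max(p_left, min(centre_x - s_w // 2, p_left + p_w - s_w))
    top = p_top + (p_h - s_h) // 2
    return (top, left, s_h, s_w)

# Straight-ahead driving at 50 km/h inside a 100x200 pixel partial area.
rect = place_selection_area((0, 0, 100, 200), steering_deg=0, speed_kmh=50)
```

The clamping in the last two assignments guarantees that the selection area always lies completely within the partial area, as the method requires.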
It can further be provided that in the image at least part of the environmental region and at least part of the motor vehicle are shown and that the partial area is determined such that only the environmental region is shown in the partial area. This can, for example, be the case if the camera comprises a so-called fisheye objective lens and the image is captured through the fisheye objective lens. Usually an azimuthal angle of, for example, 180° or more is captured by the fisheye objective lens. This frequently has the consequence that not only the environmental region itself, but also parts of the motor vehicle are represented in the image. Thus, in the determination of the partial area, the partial area is preferably positioned only within the environmental region depicted in the image. This is advantageous in that irrelevant areas of the image, such as the motor vehicle's own number plate, thus do not negatively influence the quality of the image in the processing or adjustment of the image or of the camera parameter.
It is in particular provided that a further image is captured by the camera and that a further partial area is determined for the further image, wherein in dependence on the further partial area a further selection area is determined, preferably positioned
completely within the further partial area, and the camera parameter and/or the further image are adjusted each in dependence on a property of the further image, in particular in dependence on a property featured only in the further selection area. Thereby it is enabled that the camera parameter and/or the image can be repeatedly adjusted anew and thus in a situation-related manner. The adjustment or determination of the further partial area and/or of the further selection area are preferably carried out in real time or else in the time span remaining between single consecutive camera recordings. The real-time frame is predefined, for example, by the recording frequency or frame rate of the camera, which can be 30 Hz or 60 Hz, in particular 29.97 Hz, for example. Thus, the partial area and/or the selection area are dynamically adjusted to the current situation. Thus, within the relevant areas or partial areas the quality of the image can be maintained in continual compliance with qualitative requirements, e.g. the requirement that there should be no underexposure or overexposure. Thus, in turn, the driver assistance system can be operated more precisely and the safety of the motor vehicle can be increased.
It is preferably provided that in dependence on the property of the image in the selection area the image is adjusted only within the selection area. Thus, the adjustment of the image can be limited solely to the selection area. This means, for example, that preferably the image is not adjusted outside the selection area. This has the advantage that thus accelerated processing is enabled in the adjustment of the image. However, this can also mean that for the adjustment of the camera parameter only the selection area is taken into consideration and not the image within the partial area, but outside the selection area. However, in a further embodiment the selection area can also coincide with the partial area.
It is further preferably provided that on the basis of the partial area an output image is generated, in particular through rectification of the image within the partial area, and that the output image is outputted on a display unit of the motor vehicle. The image is thus, for example, a raw image. For the output of the relevant information of the image the partial area can then, for example, be determined in the image. However, the partial area can feature a distortion resulting, for example, from the above-mentioned fisheye objective lens. Therefore, the partial area of the image can then, for example, be rectified and provided as an output image for the output on the display unit. The term rectification describes, for example, the elimination of geometrical distortions in image data. This can be effected, for example, by means of a transformation of the partial area. Thus, the output of the partial area as the output image can be less distorted and consequently more illustrative.

It is preferably provided that as a camera parameter of the camera an exposure time and/or a gain setting and/or a white balance setting and/or a gamma value are adjusted. Exposure time refers to the time span in which a photosensitive medium, e.g. a CMOS sensor or a CCD sensor, is exposed to the light for image capturing. However, as the camera parameter also, for example, an f-number and/or a light sensitivity of the camera can be adjusted. The light sensitivity is, for example, adjusted via the gain setting of the camera. By means of the gain setting it is predefined to what extent the incident amount of light, e.g. the incident photons, is to be artificially increased or reduced for the evaluation. The white balance of the white balance setting serves to sensitize the camera to the colour temperature of the light at the recording location, that is, to the colour temperature of the environmental region of the motor vehicle, for example.
The gamma value or gamma correction is a correction function for transforming a proportionally increasing physical quantity into a quantity that does not grow on a linear scale according to human perception. In mathematical terms this function is a power function with an exponent, often referred to for short as the gamma value, as its only parameter. As a result of the adjustment of the camera parameter, the further image can be captured such that at least the further selection area and/or the further partial area can be provided with the desired image properties. This is possible because the scene or the captured part of the environmental region does not usually change so abruptly that a completely different scene is captured between the image and the further image captured subsequently thereto.
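The power-function character of the gamma correction can be written out directly (a generic illustration; the exponent 2.2 is merely a common example value and is not prescribed here):

```python
def gamma_correct(value, gamma=2.2):
    """Encode a linear intensity in [0, 1] with a gamma curve.

    The correction is a pure power function; gamma is its only parameter.
    """
    if not 0.0 <= value <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    return value ** (1.0 / gamma)

# A mid-grey linear intensity is lifted towards the perceived mid-tone.
encoded = gamma_correct(0.18)   # roughly 0.46
```

The endpoints 0 and 1 are fixed points of the power function, so only the distribution of the intermediate tones is changed.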
The camera can also be a time of flight camera or a hyperspectral camera or a plenoptic camera. The camera can also have a plurality of sensors. By the hyperspectral camera more than just the three usual frequency bands red, green and blue can be captured. In particular, by the hyperspectral camera electromagnetic radiation from the infrared wavelength range, in particular with a plurality of frequency bands, can be captured. It is advantageous that colours in the environmental region can be captured by the hyperspectral camera in a more precise and differentiated manner, in particular if the camera is directed towards the sun.
In a hyperspectral architecture, the system may be more sensitive to particular colours depending on whether the motor vehicle is driving towards the sunlight or away from it. Alternatively, the partial area may be moved away from a region of the image containing the sun. This region can be determined uniquely from a combination of the driving direction of the motor vehicle, the current position of the motor vehicle, e.g. determined by a global navigation satellite system (GNSS) of the motor vehicle, and the current time, for example.

It is further preferably provided that the image is adjusted, in particular only, in the selection area by brightness setting and/or colour setting and/or contrast setting and/or a noise reduction method. The image can thus, for example, be adjusted with regard to its brightness by the brightness setting. The selection area can thus, for example, be adjusted to be brighter or darker than before. Further, a contrast can also, for example, be adjusted. Likewise, the colour setting can be adjusted so that, for example, particular colours are emphasized. Further, within the selection area the image can thereby, for example, be perceived as more natural by an observer. Also, for example in the case of high dynamic range images (HDRI), a contribution to the so-called tone mapping method can be made. Tone mapping or dynamic compression refers to the compression of the dynamic range of high dynamic range images, i.e. of images with a high brightness range. In tone mapping the contrast range of a high dynamic range image is reduced in order to be able to display it on conventional display devices. For example, as a result of the adjustment of the image within the selection area, tone mapping can be utilized more simply and effectively. Also, for example, the contrast of the image, in particular in the selection area, can be adjusted by the contrast setting.
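The tone mapping mentioned above can be illustrated with the simplest global operator, L/(1+L) (often attributed to Reinhard); this particular operator is chosen here only as an example and is not fixed by the disclosure:

```python
def global_tone_map(luminance):
    """Global operator L/(1+L): compresses an arbitrarily large dynamic
    range into [0, 1) while preserving the order of the intensities."""
    return [l / (1.0 + l) for l in luminance]

# High-dynamic-range luminances spanning five orders of magnitude.
hdr = [0.01, 1.0, 10.0, 1000.0]
ldr = global_tone_map(hdr)   # all values now below 1.0, order preserved
```

Because the operator is monotonic, brighter scene points remain brighter after compression, which is what makes the result displayable on a conventional display device without inverting contrasts.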
Also, however, for example a smoothing filter such as a Gaussian filter can be applied in order to suppress image noise, in particular in the selection area, by means of the noise reduction method. However, noise can also, for the time being, be deliberately kept in the image, in particular in the selection area. For several machine vision applications it is advantageous that noise is not immediately removed from the image, since methods based on noise statistics, which are for instance used by a "back-end" machine vision algorithm, may remove noise in a qualitatively better way than fast "front-end" algorithms which are for instance implemented in the camera. Due to their speed and compactness, the fast "front-end" algorithms can confuse machine vision data or excessively remove underlying structure that would not be noticed by human vision but would be noticed by machine vision algorithms. Moreover, edges can, for example, be drawn more sharply in the image, in particular in the selection area, e.g. by means of so-called image sharpness methods. It is thus advantageous that within the partial area the image can be variously adjusted to enable an output of the partial area on the display unit in better quality and/or to apply a method of machine vision more precisely and correctly to the partial area and/or the selection area.
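Restricting the noise reduction to the selection area can be sketched with a 3x3 Gaussian kernel (a minimal illustration; kernel size, weights and the replicate border handling are assumptions):

```python
def smooth_selection(image, sel):
    """Apply a 3x3 Gaussian kernel (1-2-1 binomial weights) only inside
    the selection area; pixels outside the area are left untouched."""
    kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # weights sum to 16
    top, left, h, w = sel
    out = [row[:] for row in image]              # copy, do not modify input
    for y in range(top, top + h):
        for x in range(left, left + w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # Clamp at the image border (replicate padding).
                    yy = min(max(y + dy, 0), len(image) - 1)
                    xx = min(max(x + dx, 0), len(image[0]) - 1)
                    acc += kernel[dy + 1][dx + 1] * image[yy][xx]
            out[y][x] = acc / 16.0
    return out

noisy = [[0.0] * 5 for _ in range(5)]
noisy[2][2] = 1.0                        # a single noise spike
smoothed = smooth_selection(noisy, (1, 1, 3, 3))
```

The spike inside the selection area is attenuated (its centre weight is 4/16), while every pixel outside the 3x3 selection area keeps its original value, mirroring the selective adjustment described above.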
It is further preferably provided that the selection area is divided into a plurality of selection subareas and the camera parameter and/or the image are adjusted in dependence on the plurality of selection subareas. The selection subareas can, for example, have the same size. The selection subareas are in particular arranged in the manner of a matrix. By means of the selection subareas the selection area can be adjusted more precisely and variously. In this context, the selection area can, for example, comprise only one of the selection subareas or else also a plurality of selection subareas. The larger the number of selection subareas utilized, the higher the precision with which the adjustment of the camera parameter and/or the captured image can be performed. However, the selection subareas can also, for example, have different sizes and/or be weighted differently when evaluated.
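The matrix-like division into selection subareas can be sketched as follows (illustrative only; the 4x4 layout is one of the examples mentioned above, and equal-sized cells are an assumption):

```python
def subarea_means(image, sel, rows=4, cols=4):
    """Split the selection area into a rows x cols matrix of subareas and
    return the mean brightness of each subarea in row-major order."""
    top, left, h, w = sel
    means = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = top + r * h // rows, top + (r + 1) * h // rows
            x0, x1 = left + c * w // cols, left + (c + 1) * w // cols
            vals = [image[y][x] for y in range(y0, y1)
                    for x in range(x0, x1)]
            means.append(sum(vals) / len(vals))
    return means

# An 8x8 brightness ramp; the selection area covers the whole image here.
img = [[(x + y) / 14.0 for x in range(8)] for y in range(8)]
cells = subarea_means(img, (0, 0, 8, 8))   # 16 per-subarea means
```

A downstream controller could then weight the 16 means differently, or use only some of them, exactly in the sense of the variable number and size of subareas described above.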
It can further be provided that in dependence on the selection area weightings for a plurality of camera parameters are determined and the plurality of camera parameters are adjusted in dependence on the determined weightings. This can be utilized, for example, if a total image is provided by a plurality of cameras. The total image is, for example, joined together from a plurality of images in order to generate a top view image. In this context, it is usually of importance that the top view image is homogenous. With regard thereto, different weightings can be accorded to different selection areas. In this respect, the selection areas can, for example, be determined such with regard to their size and position that in the generation of the top view image they fit together and thus, subsequent to the merging, a homogenous total image or top view image is generated.
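The weighting of several cameras towards a homogeneous top view image can be sketched as a weighted combination of per-camera exposure targets (illustrative only; the weights and exposure values are invented for the example):

```python
def blended_exposure(exposures, weights):
    """Combine per-camera exposure targets with weightings so that the
    stitched top-view image stays homogeneous at the seams."""
    total = sum(weights)
    return sum(e * w for e, w in zip(exposures, weights)) / total

# Four cameras; the front camera (first entry) is weighted most strongly
# because its selection area covers the most relevant part of the scene.
target = blended_exposure([8.0, 12.0, 10.0, 10.0], [0.4, 0.2, 0.2, 0.2])
```

Driving all four cameras towards the blended target reduces visible brightness steps where the individual images are joined together.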
The invention relates to a computer program product for performing a method according to the invention if the computer program product is executed on a programmable computer device.
The invention relates to a camera system for a motor vehicle with a camera and an evaluation unit. The camera system is configured to perform a method according to the invention. The camera system can also comprise a plurality of cameras and/or evaluation units.
The invention also relates to a driver assistance system for a motor vehicle with a camera system according to the invention.
The invention moreover relates to a motor vehicle with a driver assistance system according to the invention.
The preferred embodiments and their advantages illustrated with respect to the method according to the invention apply correspondingly to the computer program product according to the invention, the camera system according to the invention, the driver assistance system according to the invention and the motor vehicle according to the invention.
The terms "above", "below", "in front", "behind", "horizontal", "vertical" etc. signify the positions and orientations given if the camera is used and arranged as intended.
Further features of the invention are apparent from the claims, the figures and the description of the figures. The features and feature combinations previously mentioned in the description and the features and feature combinations mentioned below in the description of the figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone, without leaving the scope of the invention. Thus, also embodiments are to be regarded as comprised and disclosed by the invention, which are not explicitly shown and explained in the figures, but which derive and are producible from the illustrated explanations by separated feature combinations. Also, embodiments and feature combinations are to be regarded as disclosed which thus do not exhibit all features of an originally formulated independent claim.
Embodiments of the invention are detailed below with reference to schematic drawings. Therein show:
Fig. 1 a schematic top view of an embodiment of a motor vehicle according to the invention with a driver assistance system;
Fig. 2 a schematic view of an image with a partial area, captured by a camera of a camera system of the driver assistance system;
Fig. 3 a schematic view of an output image generated on the basis of the partial area;
Fig. 4 a schematic view of the image with the partial area and a selection area having a plurality of selection subareas, which is positioned completely within the partial area; and

Fig. 5 a schematic view of the image with the partial area, which is displayed on a display unit of the motor vehicle and provided for further processing by means of a method of machine vision.
In the figures identical or functionally identical elements bear the same reference numbers.
Fig. 1 shows schematically a motor vehicle 1 with a driver assistance system 2. In the embodiment the driver assistance system 2 comprises a camera system 3. The camera system 3 further comprises a camera 4 and an evaluation unit 5. The evaluation unit 5 can, for example, be integrated into the camera 4 or else formed separately from the camera 4.
According to the embodiment, the camera 4 is arranged at a front 6 of the motor vehicle 1. The camera 4 can be variously arranged, preferably, however, such that an environmental region 7 of the motor vehicle 1 can be captured at least partially. The camera system 3 can also comprise a plurality of cameras 4. Thus, also a plurality of cameras 4 can be arranged at the motor vehicle 1.
The camera 4 can be a CMOS camera (complementary metal-oxide-semiconductor) or a CCD camera (charge-coupled device) or any other image capturing device, for example a time of flight camera, a hyperspectral camera, a plenoptic camera or in particular an infrared camera. By the infrared camera electromagnetic radiation from the infrared wavelength range from 780 nm to 1 mm, in particular the near infrared wavelength range (NIR) from 780 nm to 3 μm, can be captured. The camera 4 provides an image sequence of images of the environmental region 7. The image sequence of the images is then, for example, processed in real time by the evaluation unit 5.
The driver assistance system 2 can, for example, comprise a top view image system and/or an object detection system and/or an obstacle warning system. Moreover, the driver assistance system 2 can also be formed as a camera monitoring system (CMS). In the camera monitoring system the camera 4 can, for example, additionally be arranged in a left side mirror 8 of the motor vehicle 1 and/or in a right side mirror 9 of the motor vehicle 1 and/or at a rear end 10 of the motor vehicle 1.

Fig. 2 shows an image 11. The image 11 is captured by the camera 4. The image 11 is a so-called raw image and shows a representation of the environmental region 7 and of the motor vehicle 1. In the embodiment the image 11 was captured depending on a fisheye objective lens of the camera 4. Thus, the environmental region 7 is represented in the image 11 with a horizontal angle or an azimuthal angle of more than 180° for example. Thus, in the peripheral areas of the image 11 the motor vehicle 1 itself is shown in part.
In the image 11 a partial area 12 is determined. The partial area 12 is defined such that it is displayed on a display unit 13 - as shown in Fig. 1 - and/or provided for further processing by means of a method of machine vision. Accordingly, the partial area 12 can also be described as a field of vision. The term machine vision usually describes the computer-aided solution of tasks capable of being solved by the human visual system. This includes, for example, the detection of objects in the environmental region 7. Thus, by means of the partial area 12 an area of the image 11 is determined which is relevant for a further step. The further step can be the display of the partial area 12 on the display unit 13 or, additionally or alternatively, the further processing of the partial area 12 by means of a method of machine vision. Preferably the partial area 12 is arranged such that within the partial area 12 only the environmental region 7 is represented. It is thus in particular not intended that parts of the motor vehicle 1, which are shown in the image 11, are arranged within the partial area 12.
Fig. 3 shows an output image 14. The output image 14 is thus outputted on the display unit 13. The output image 14 is generated on the basis of the partial area 12. Thus, the partial area 12 of the image 11 is rectified or corrected through transformation. As a result of the rectification of the partial area 12 the output image 14 is provided. The rectification can be necessary in order to reduce distortions which can, for example, result from a fisheye objective lens of the camera 4 in the partial area 12.
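The coordinate transformation behind such a rectification can be sketched for one common fisheye model (the equidistant projection r = f·θ is an assumption for illustration; the disclosure does not fix a particular lens model):

```python
import math

def rectify_coordinate(u, v, f_rect, f_fish):
    """Map a pixel (u, v) of the rectified output image (coordinates
    relative to the optical centre) to the source coordinate in an
    equidistant fisheye image (projection model r = f * theta)."""
    r_rect = math.hypot(u, v)
    if r_rect == 0.0:
        return (0.0, 0.0)               # the optical centre maps to itself
    theta = math.atan2(r_rect, f_rect)  # viewing angle of this output ray
    r_fish = f_fish * theta             # radius under equidistant model
    scale = r_fish / r_rect
    return (u * scale, v * scale)

# A point 100 px right of the centre in the rectified view is sampled
# from a slightly smaller radius in the fisheye raw image.
src = rectify_coordinate(100.0, 0.0, f_rect=300.0, f_fish=300.0)
```

Iterating this mapping over every output pixel and sampling the raw image at the returned source coordinate yields the rectified output image 14.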
Fig. 4 shows the image 11 with the partial area 12. A selection area 15 is arranged completely within the partial area 12. A size and/or a position of the selection area 15 within the partial area 12 can, for example, depend on a driving direction of the motor vehicle 1. Moreover, according to the embodiment of Fig. 4, the partial area 12 is divided into selection subareas 16. The selection subareas 16 can, for example, be arranged within the selection area 15 in various numbers and sizes. Furthermore, the selection subareas 16 can be arranged within the selection area 15 in the manner of a matrix, for example. The selection subareas 16 can, for example, be arranged in a four-by-four matrix.

The embodiment of the method according to the invention is executed as follows. The image 11 is captured by the camera 4. Then the partial area 12 is determined. The partial area 12 is determined such that it is either displayed on the display unit 13 and thus, for example, converted into the output image 14, or else utilized for further processing by means of a method of machine vision. The selection area 15 is determined completely within the partial area 12. The selection area 15 cannot be determined unless the partial area 12 has already been determined and is thus known. Moreover, a property of the image 11 in the selection area 15 is determined. The property of the image 11 can, for example, be a brightness or brightness distribution of the image 11 within the partial area 12. Then, in dependence on the property of the image 11 a camera parameter of the camera 4 is adjusted and/or the image 11 itself is adjusted.
The camera parameter can, for example, be an exposure time and/or a gain setting and/or a white balance setting and/or a gamma value. If the camera parameter is adjusted in dependence on the property of the image 11, the adjusted camera parameter is then utilized for the capturing of a further image by means of the camera 4. With regard to the camera parameter, the further image is thus configured for high-quality capturing of a further partial area of the environmental region 7 previously represented by the partial area 12. Consequently, with regard to a further partial area and thus also with regard to a further selection area, the further image can be captured more precisely and in better quality than would be the case if the camera parameter had not been adjusted in dependence on the property of the image 11 in the selection area 15.
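The frame-by-frame adjustment of the exposure time described above can be sketched as a damped control loop (illustrative only; the gain, limits and measured brightnesses are invented for the example):

```python
def next_exposure(exposure_ms, measured, target=0.5,
                  gain=0.5, lo=0.1, hi=33.0):
    """One step of a damped exposure control: move the exposure time a
    fraction (gain) of the way towards the value that would hit the
    target brightness, clamped to the sensor's valid range."""
    desired = exposure_ms * target / max(measured, 1e-6)
    updated = exposure_ms + gain * (desired - exposure_ms)
    return min(hi, max(lo, updated))

# An under-exposed selection area (brightness 0.2) drives the exposure
# time up frame by frame; a few iterations approach the target.
e = 10.0
for measured in (0.2, 0.3, 0.42):
    e = next_exposure(e, measured)
```

The damping factor avoids visible brightness oscillation between consecutive frames, while the clamp keeps the parameter within what the sensor (and the frame period) allows.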
Additionally or alternatively, the image 11 can also be adjusted within the selection area 15. Thus, the image 11 can, for example, be adjusted within the partial area 12 and in particular only within the selection area 15 by brightness setting and/or colour setting and/or contrast setting and/or a noise reduction method and/or a plurality of further image processing methods. The image 11 can then be adjusted in the area relevant for the user, in particular the driver of the motor vehicle 1, i.e. in the partial area 12, such that in the image 11 the partial area 12 receives preferred treatment with regard to the qualitative representation. In this context, the area relevant to the user, i.e. the partial area 12 of the image 11, can feature a traffic participant or another obstacle for the motor vehicle 1.
Further, the adjustment of the camera parameter and/or the adjustment of the image 11 in dependence on the partial area 12 and/or the selection area 15 can also be utilized for the generation of a top view image, in which a plurality of images 11 are joined together. Thus, for example, different parts of the plurality of images 11 can be utilized. Accordingly, each utilized part of the respective image 11 can, for example, be described as the partial area 12 and/or the selection area 15.
The determination of the partial area 12 and/or the selection area 15 is in particular performed in real time. This means that the partial area 12 and/or the selection area 15 are adjusted, in particular for each image 11 of an image sequence captured by the camera 4, within the time span which remains until the next image 11 of the image sequence is provided.
Fig. 5 shows the image 11 with the partial area 12. According to the embodiment of Fig. 5, the selection area 15 corresponds to the partial area 12. According to this embodiment, the selection area 15 comprises only one of the selection subareas 16. In the embodiment a method of machine vision is performed. Thus, within the partial area 12 a plurality of objects 17 of the environmental region 7 are detected. By means of the method according to the invention, the image 11 and/or the camera settings for the further image can be adjusted such that the method of machine vision can be applied to the partial area 12 more correctly and precisely.
As can be seen in Fig. 5, the partial area 12 receives preferred treatment with regard to the area of the image 11 positioned outside the partial area 12. As a result it can occur that the area positioned outside the partial area 12 is underexposed or overexposed subsequent to the adjustment of the image. This also applies to the further partial area (not shown) of the further image which is captured subsequent to the image 11 utilizing the adjusted camera settings. Here it can also happen that the area of the further image positioned outside the further partial area is underexposed or overexposed because the camera parameter of the camera 4 has been adjusted correspondingly on the basis of the preceding image 11.

Claims

Patent claims
1. Method for adjusting at least one camera parameter of a camera (4) of a motor vehicle (1) and/or an image (11) captured by the camera (4), where an environmental region (7) of the motor vehicle (1) is captured by the camera (4) at least partially and the image (11) is recorded of the captured environmental region (7),
characterized in that
a partial area (12) of the image (11) is determined, which is displayed on a display unit (13) of the motor vehicle (1) and/or provided for further processing by means of a method of machine vision, wherein a selection area (15) positioned completely within the partial area (12) is determined and the at least one camera parameter and/or the image (11) are adjusted in dependence on a property of the image (11) in the selection area (15)
and a size and/or a position of the selection area (15) within the partial area (12) are determined in dependence on a driving direction of the motor vehicle (1).
2. Method according to claim 1,
characterized in that
in the image (11) at least part of the environmental region (7) and at least part of the motor vehicle (1) are shown and the partial area (12) is determined such that only the environmental region (7) is shown in the partial area (12).
3. Method according to any one of the preceding claims,
characterized in that
a further image is captured by the camera (4) and a further partial area is determined for the further image, wherein a further selection area is determined in dependence on the further partial area and the camera parameter and/or the further image are each adjusted in dependence on a property of the further image, in particular in dependence on a property featured only in the further selection area.
4. Method according to any one of the preceding claims,
characterized in that in dependence on the property of the image (11) in the selection area (15) the image (11) is adjusted only within the selection area (15).
5. Method according to any one of the preceding claims,
characterized in that
an output image (14) is generated on the basis of the partial area (12), in particular through rectification of the image (11) within the partial area (12), and the output image (14) is outputted on a display unit (13) of the motor vehicle (1).
6. Method according to any one of the preceding claims,
characterized in that
as the camera parameter of the camera (4) an exposure time and/or a gain setting and/or a white balance setting and/or a gamma value are adjusted.
7. Method according to any one of the preceding claims,
characterized in that
in the selection area (15) the image (11) is adjusted by brightness setting and/or colour setting and/or contrast setting and/or a noise reduction method.
8. Method according to any one of the preceding claims,
characterized in that
the selection area (15) is divided into a plurality of selection subareas (16) and the camera parameter and/or the image (11) are adjusted in dependence on the plurality of selection subareas (16).
9. Method according to any one of the preceding claims,
characterized in that
at least one parameter of an image processing method, by means of which at least one property of the image (11) is adjusted, is adjusted in dependence on the selection area (15).
10. Method according to any one of the preceding claims,
characterized in that
in dependence on the selection area (15) weightings for a plurality of camera parameters are determined and the plurality of the camera parameters are adjusted in dependence on the determined weightings.
11. Computer program product for performing a method according to any one of the preceding claims if the computer program product is executed on a programmable computer device.
12. Camera system (3) for a motor vehicle (1) with a camera (4) and an evaluation unit (5), which is configured to perform a method according to any one of claims 1 to 10.
13. Driver assistance system (2) for a motor vehicle (1) with a camera system (3) according to claim 12.
14. Motor vehicle (1) with a driver assistance system (2) according to claim 13.
PCT/EP2016/075011 2015-10-29 2016-10-19 Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle WO2017071996A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102015118474.5 2015-10-29
DE102015118474.5A DE102015118474A1 (en) 2015-10-29 2015-10-29 Method for adjusting a camera parameter and / or an image, computer program product, camera system, driver assistance system and motor vehicle

Publications (1)

Publication Number Publication Date
WO2017071996A1 true WO2017071996A1 (en) 2017-05-04

Family

ID=57184435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/075011 WO2017071996A1 (en) 2015-10-29 2016-10-19 Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle

Country Status (2)

Country Link
DE (1) DE102015118474A1 (en)
WO (1) WO2017071996A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112703724A (en) * 2018-09-13 2021-04-23 索尼半导体解决方案公司 Information processing apparatus and information processing method, imaging apparatus, mobile device, and computer program

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102023101815A1 (en) 2023-01-25 2024-07-25 Bayerische Motoren Werke Aktiengesellschaft METHOD AND DEVICE FOR ADAPTING A DISPLAY OF AN ENVIRONMENT OF A MOTOR VEHICLE

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006030394A1 (en) * 2006-07-01 2008-01-03 Leopold Kostal Gmbh & Co. Kg Automotive driver assistance system for e.g. lane recognition has image sensor control unit responding to change in vehicle position
US20140204267A1 (en) * 2013-01-23 2014-07-24 Denso Corporation Control of exposure of camera
DE102013011844A1 (en) * 2013-07-16 2015-02-19 Connaught Electronics Ltd. Method for adjusting a gamma curve of a camera system of a motor vehicle, camera system and motor vehicle
DE102013020952A1 * 2013-12-12 2015-06-18 Connaught Electronics Ltd. Method for setting a parameter relevant to the brightness and/or the white balance of an image representation in a camera system of a motor vehicle, camera system and motor vehicle

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6967569B2 (en) * 2003-10-27 2005-11-22 Ford Global Technologies Llc Active night vision with adaptive imaging
US8199198B2 (en) * 2007-07-18 2012-06-12 Delphi Technologies, Inc. Bright spot detection and classification method for a vehicular night-time video imaging system
DE102012008986B4 (en) * 2012-05-04 2023-08-31 Connaught Electronics Ltd. Camera system with adapted ROI, motor vehicle and corresponding method
US9738223B2 (en) * 2012-05-31 2017-08-22 GM Global Technology Operations LLC Dynamic guideline overlay with image cropping

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112703724A (en) * 2018-09-13 2021-04-23 Sony Semiconductor Solutions Corporation Information processing apparatus and information processing method, imaging apparatus, mobile device, and computer program
US11815799B2 (en) 2018-09-13 2023-11-14 Sony Semiconductor Solutions Corporation Information processing apparatus and information processing method, imaging apparatus, mobile device, and computer program

Also Published As

Publication number Publication date
DE102015118474A1 (en) 2017-05-04

Similar Documents

Publication Publication Date Title
KR101367637B1 (en) Monitoring apparatus
JP4341691B2 (en) Imaging apparatus, imaging method, exposure control method, program
US11477372B2 (en) Image processing method and device supporting multiple modes and improved brightness uniformity, image conversion or stitching unit, and computer readable recording medium realizing the image processing method
US10630920B2 (en) Image processing apparatus
JP6029954B2 (en) Imaging device
EP3410702B1 (en) Imaging device, imaging/display method and imaging/display program
WO2012172922A1 (en) Vehicle-mounted camera device
JPWO2018070100A1 (en) Image processing apparatus, image processing method, and photographing apparatus
JP5860663B2 (en) Stereo imaging device
US9800776B2 (en) Imaging device, imaging device body, and lens barrel
US9214034B2 (en) System, device and method for displaying a harmonized combined image
WO2016104166A1 (en) Image capturing system
US10551535B2 (en) Image pickup apparatus capable of inserting and extracting filter
US20170347008A1 (en) Method for adapting a brightness of a high-contrast image and camera system
US9769376B2 (en) Imaging device, imaging device body, and lens barrel
WO2017071996A1 (en) Method for adjusting a camera parameter and/or an image, computer program product, camera system, driver assistance system and motor vehicle
JP2019029833A (en) Imaging apparatus
JP2020068524A (en) Image processing
US20180176445A1 (en) Imaging device and imaging method
JP6330474B2 (en) Image processing apparatus, image processing apparatus control method, and imaging apparatus
JP7483368B2 (en) Image processing device, control method and program
US10944929B2 (en) Imaging apparatus and imaging method
JP6466809B2 (en) Imaging apparatus and imaging method
JP2012010282A (en) Imaging device, exposure control method, and exposure control program
CN109906602B (en) Image matching method and device

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16784860

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 16784860

Country of ref document: EP

Kind code of ref document: A1