KR20140118115A - System and method for calibrating around view of vehicle - Google Patents

System and method for calibrating around view of vehicle

Info

Publication number
KR20140118115A
Authority
KR
South Korea
Prior art keywords
color
image
color correction
correction
weight
Prior art date
Application number
KR1020130033464A
Other languages
Korean (ko)
Inventor
강진용
Original Assignee
삼성전기주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전기주식회사
Priority to KR1020130033464A
Publication of KR20140118115A

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/02Rear-view mirror arrangements
    • B60R1/08Rear-view mirror arrangements involving special optical features, e.g. avoiding blind spots, e.g. convex mirrors; Side-by-side associations of rear-view and other mirrors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/101Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using cameras with adjustable capturing direction
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/40Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the details of the power supply or the coupling to vehicle components
    • B60R2300/402Image calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a vehicle periphery image correction system and a vehicle periphery image correction method. According to one embodiment of the present invention, a vehicle periphery image correction system comprises: an image acquiring unit for acquiring images around a vehicle with a plurality of cameras; an object detecting unit for detecting an object in the overlapping region of the acquired images; a color correcting unit for performing color correction in the remaining area of the overlapping region excluding the object; and an image synthesizing unit for performing image synthesis by connecting the color-corrected region with the images forming the overlap region. A corresponding method for correcting the surrounding image of the vehicle is also proposed.

Description

TECHNICAL FIELD [0001] The present invention relates to a vehicle periphery image correction system and a vehicle periphery image correction method.

The present invention relates to a vehicle periphery image correction system and a vehicle periphery image correction method. More particularly, the present invention relates to a vehicle periphery image correction system and method for correcting differences in color distribution caused by the presence of an object in the overlapping regions of images captured around the vehicle.

A surround view or around-view image shows the driver, on a single screen, the surroundings of the vehicle that cannot be seen directly; to this end, a plurality of cameras are installed on the vehicle. For example, a top-view image looking down at the vehicle from above is generated from the images captured by cameras mounted on the front, rear, left, and right sides of the vehicle to indicate the conditions ahead of, behind, and beside the vehicle.

At this time, since the surround-view image is formed by combining images captured by the plurality of cameras installed on the vehicle, the same object may be displayed in different colors depending on the environment in which each camera captures it.

Conventionally, the overlapping regions of the images obtained by the cameras photographing the surroundings of a vehicle are adjusted by simply averaging the color distribution over each entire overlapping region, without considering the influence of the presence of an object. For example, when the front, rear, left, and right cameras mounted on a vehicle photograph through wide-angle lenses with a view angle of about 180 degrees or more, overlapping areas OA1 to OA4 may be formed between neighboring camera images. In that case, the conventional method makes the color distributions of the entire images similar by simply averaging and adjusting the color distributions of the whole overlapping regions.

Referring to FIG. 5, in the overlap area OA1 where an object B exists, the color distribution OA_B1 caused by the object B as seen by the front camera 11 may differ from the color distribution OA_B2 caused by the object B as seen by the left camera 12. That is, when an object exists in an overlapping region, the object is viewed from different angles by the two cameras, so the distribution of color, brightness, and the like appears differently depending on the color distribution of the object, the direction of light, and so on. If, as in the conventional method, the color distribution of the entire image is adjusted by simply averaging the color distribution of the whole overlapping region, the influence of the object is ignored or rendered insignificant, and an accurate peripheral image reflecting the presence of the object cannot be obtained. Accordingly, it may be difficult for the user to gain an accurate view of the surrounding environment from the processed image.
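As an aside, the conventional adjustment described above can be sketched in a few lines. The sketch below is illustrative only and assumes the prior-art adjustment is a per-channel gain that matches the mean color of one camera's overlap image to the other's; the function name and array layout are not from the patent.

```python
import numpy as np

def conventional_overlap_matching(i1, i2):
    """Prior-art style correction: match the mean color of camera 2's
    overlap image to camera 1's, ignoring any object in the overlap.
    i1, i2: (H, W, 3) float arrays of the same overlap area."""
    mean1 = i1.reshape(-1, 3).mean(axis=0)  # per-channel mean, camera 1
    mean2 = i2.reshape(-1, 3).mean(axis=0)  # per-channel mean, camera 2
    # An object inside the overlap skews both means, which is exactly
    # the weakness the present invention addresses.
    return i2 * (mean1 / mean2)
```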

Korean Patent Publication No. 10-2013-0006906 (published on Jan. 18, 2013)

SUMMARY OF THE INVENTION The present invention has been made to solve the above-mentioned problems, and it is an object of the present invention to provide a vehicle periphery image correction technique in which color correction is performed in the overlapping areas of images captured by a plurality of cameras in consideration of the influence of objects existing in those areas.

According to one aspect of the present invention, there is provided a vehicle periphery image correction system comprising: an image obtaining unit for obtaining images around a vehicle with a plurality of cameras; an object detecting unit for detecting an object in the overlapping region of the acquired images; a color correcting unit for performing color correction in the remaining area of the overlapping region excluding the object; and an image synthesizing unit for performing image synthesis by connecting the color-corrected region with the images forming the overlap region.

In this case, in one example, the color correction unit may include: a color information extraction unit that extracts color information of the overlapping area; a parameter calculation unit for calculating a color correction parameter using the extracted color information; a correction ratio calculation unit for calculating a correction ratio using the calculated color correction parameter; and a color correction performing unit for performing color correction on the remaining area by applying the calculated correction ratio.

At this time, the color information extracting unit extracts the average object color information of the object region and the average remaining color information of the remaining region in each of the first and second images forming the overlap region. The color correction parameter is calculated, in each of the first and second images, as the sum of the product of the first weight and the average object color information and the product of the second weight and the average remaining color information. The sum of the first weight and the second weight is a predetermined weight value, and each of the first and second weights may be an experimentally obtained set value in the interval from '0' to the weight value.

Also, at this time, in one example, the first weight may be '0'.

According to one example, when the reference of the color correction is the first image, the correction ratio is the ratio between the color correction parameters of the first and second images, and the color correction performing unit may perform color correction by multiplying the color information of each of the remaining areas by the correction ratio.

Further, in one example, the plurality of cameras may be four cameras that photograph the front, rear, left, and right directions of the vehicle.

In this case, the vehicle periphery image correction system according to one example may further include a display unit for displaying the synthesized image.

Next, in order to solve the above-mentioned problem, according to another aspect of the present invention, there is provided a vehicle periphery image correction method comprising: obtaining images around a vehicle with a plurality of cameras; detecting an object in the overlapping region of the acquired images; performing color correction in the remaining area of the overlapping region excluding the object; and performing image synthesis by connecting the color-corrected region with the images forming the overlap region.

At this time, in one example, the step of performing color correction includes: extracting color information of the overlapping area; calculating a color correction parameter using the extracted color information; calculating a correction ratio using the calculated color correction parameter; and performing color correction on the remaining area by applying the calculated correction ratio.

In this case, in the step of extracting the color information, the average object color information of the object region and the average remaining color information of the remaining region are extracted from each of the first and second images forming the overlap region. The color correction parameter is calculated, in each of the first and second images, as the sum of the product of the first weight and the average object color information and the product of the second weight and the average remaining color information. The sum of the first weight and the second weight is a preset weight value, and each of the first and second weights may be an experimentally obtained set value in the interval from '0' to the weight value.

Also, at this time, in one example, the first weight may be '0'.

According to one example, when the reference of the color correction is the first image, the correction ratio is the ratio between the color correction parameters of the first and second images, and the color correction may be performed by multiplying the color information of each of the remaining areas by the correction ratio.

In another example, a plurality of cameras may be four cameras that photograph the vehicle in the front, rear, left, and right directions.

Here, in another example, the vehicle periphery image correction method may further include displaying the synthesized image.

According to an embodiment of the present invention, color correction can be performed in an overlapping area in consideration of the influence of objects existing in overlapping areas of images captured by a plurality of cameras.

In addition, according to one example, by performing color correction that reflects the influence of the color distribution of an object existing in the overlapping region so that the connected images exhibit a similar color distribution, the user can easily perceive the surroundings of the vehicle.

It is apparent that various effects not directly mentioned herein can be derived by those of ordinary skill in the art from the configurations according to the embodiments of the present invention.

FIG. 1 is a block diagram schematically illustrating a vehicle periphery image correction system according to an embodiment of the present invention.
FIG. 2 is a block diagram schematically illustrating a vehicle periphery image correction system according to another embodiment of the present invention.
FIG. 3 is a block diagram schematically showing a part of the configuration of a vehicle periphery image correction system according to an embodiment of the present invention.
FIG. 4 is a block diagram schematically showing a part of the configuration of a vehicle periphery image correction system according to another embodiment of the present invention.
FIG. 5 is a diagram schematically showing an application state of a vehicle periphery image correction system according to another embodiment of the present invention.
FIG. 6 is a flowchart schematically illustrating a method for correcting a surrounding image according to another embodiment of the present invention.
FIG. 7 is a flowchart schematically illustrating a method of correcting a surrounding image of a vehicle according to another embodiment of the present invention.
FIG. 8 is a flowchart schematically showing a part of a method for correcting a surrounding image according to an embodiment of the present invention.

DETAILED DESCRIPTION Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the description, the same reference numerals denote the same components, and detailed descriptions of matters apparent to those skilled in the art may be omitted for ease of understanding of the present invention.

As used herein, unless an element is described as being 'directly' connected, coupled, or disposed with respect to another element, the element may be directly connected, coupled, or disposed with respect to the other element, or may be connected, coupled, or disposed with an intervening element present between them.

It should be noted that, even though a singular expression is used in this specification, it may represent the entire relevant configuration unless this is contrary to, obviously different from, or inconsistent with the concept of the invention. It is to be understood that expressions such as 'including', 'having', and 'comprising' in this specification do not exclude the presence or addition of one or more other elements or combinations thereof.

A vehicle periphery image correction system according to one aspect of the present invention will be described in detail with reference to the drawings. Here, reference numerals not shown in the drawings to be referred to may be reference numerals in other drawings showing the same configuration.

FIG. 1 is a block diagram schematically showing a vehicle periphery image correction system according to an embodiment of the present invention, FIG. 2 is a block diagram schematically showing a vehicle periphery image correction system according to another embodiment of the present invention, FIG. 3 is a block diagram schematically showing a part of the configuration of a vehicle periphery image correction system according to an embodiment of the present invention, FIG. 4 is a block diagram schematically showing a part of the configuration of a vehicle periphery image correction system according to another embodiment of the present invention, and FIG. 5 is a view schematically showing an application state of the vehicle periphery image correction system according to another embodiment of the present invention.

Referring to FIG. 1, a vehicle periphery image correction system according to one embodiment of the present invention includes an image acquiring unit 10, an object detecting unit 30, a color correcting unit 50, and an image synthesizing unit 70. Further, referring to FIG. 2, in one example, the vehicle periphery image correction system may further include a display unit 90. Each component is described in detail below.

Referring to FIGS. 1 and 2, the image acquisition unit 10 acquires images around the vehicle with the plurality of cameras 11 to 14. At this time, images obtained from at least some neighboring cameras contain overlapping regions. Referring to FIG. 5, when an object B exists in the overlapping areas OA1 to OA4, the color distribution of the overlap area OA1 in which the object B is present may differ between cameras depending on the direction of light or the shadow SOB. For example, referring to FIG. 5, the color distribution OA_B1 caused by the object B as seen by the front camera 11 may differ from the color distribution OA_B2 caused by the object B as seen by the left camera 12; here, the region OA_B is where the color distributions OA_B1 and OA_B2 overlap. That is, when the object B exists in the overlapping areas OA1 to OA4, the object is viewed from different angles, and thus the distribution of color, brightness, and the like appears differently depending on the color distribution of the object B, the direction of light, and so on. In the present embodiment, therefore, an object in the overlapping region is detected, and the image around the vehicle is corrected by adjusting the color distribution of the remaining region of the overlap excluding the position where the object exists.

For example, referring to FIG. 5, the plurality of cameras 11 to 14 may be four cameras that photograph the front, rear, left, and right directions of the vehicle. Referring to FIG. 5, the images obtained by the four cameras 11, 12, 13, and 14 include overlapping areas OA1 to OA4 between neighboring camera images. That is, wide-angle lenses having an angle of view of about 180 degrees or more can be used in the front, rear, left, and right cameras 11 to 14 mounted on the vehicle, and the areas they photograph overlap in the regions OA1 to OA4.

Referring to FIGS. 1 and 2, the object detecting unit 30 detects an object in the overlapping region of the images acquired by the image acquiring unit 10. At this time, the object detecting unit 30 may detect the object B directly within the overlapping areas OA1 to OA4 of the acquired images, or may detect the object B anywhere in the obtained images and then determine whether the detected object B lies within the overlapping areas OA1 to OA4. The method of detecting an object only in the overlapping area is suitable for high-speed operation. Also, in the process of detecting an object, the object detection unit 30 can detect it in the overlapping regions of the images acquired by two neighboring cameras. For example, an object can be detected by binarizing the image to determine its boundary.
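A minimal sketch of such binarization-based detection, confined to one overlap image, might look as follows. The patent only says the image is binarized to determine the boundary, so the use of Otsu thresholding, OpenCV, and the largest-component heuristic are assumptions for illustration.

```python
import cv2
import numpy as np

def detect_object_in_overlap(overlap_bgr):
    """Binarize an overlap image and return a boolean mask of the
    largest foreground component, taken here as the object B."""
    gray = cv2.cvtColor(overlap_bgr, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(bw, connectivity=8)
    if num < 2:  # label 0 is background; nothing else was found
        return np.zeros(gray.shape, dtype=bool)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return labels == largest
```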

Next, with reference to FIGS. 1, 2, 3, and/or 4, the color correction unit 50 will be described in detail. The color correction unit 50 performs color correction in the remaining area of the overlap region excluding the object.

Referring to FIGS. 3 and 4, in one example, the color correction unit 50 includes a color information extraction unit 51, a parameter calculation unit 53, a correction ratio calculation unit 55, and a color correction performing unit 57.

Specifically, referring to FIG. 3 and/or FIG. 4, the color information extracting unit 51 can extract the color information of the overlap area. For example, referring to FIG. 4, the color information extracting unit 51 may extract the average object color information of the object region and the average remaining color information of the remaining region in each of the first and second images forming the overlap region.

Referring to FIG. 4, when the images obtained from two cameras, that is, camera 1 and camera 2, contain an overlapping region, the color information extracting unit 51 can receive the overlap images I1 and I2 of the two cameras. The overlap image I1 is the image of the overlapping area as captured by camera 1, and the overlap image I2 is the image of the same overlapping area as captured by camera 2. At this time, the color information extracting unit 51 can also receive coordinate information about the object region in the overlapping region detected by the object detecting unit 30. Based on the input overlap images I1 and I2, the color information extraction unit 51 can calculate the average color information of the object region (Iobj1, Iobj2) and the average color information of the region other than the object (Iback1, Iback2), thereby extracting the color information of the overlap images I1 and I2. Referring to FIG. 4, for example, the color information extracting unit 51 may include an object region color information extracting block 51a and a non-object region color information extracting block 51b.
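The extraction of (Iobj, Iback) for one overlap image reduces to two masked averages, as in the sketch below; the array layout and function name are illustrative assumptions, not from the patent.

```python
import numpy as np

def average_color_info(overlap_img, object_mask):
    """Return (Iobj, Iback): the mean color over the object region
    (block 51a) and over the remaining region (block 51b).
    overlap_img: (H, W, 3) float array; object_mask: (H, W) bool."""
    pix = overlap_img.reshape(-1, 3)
    m = object_mask.reshape(-1)
    iobj = pix[m].mean(axis=0) if m.any() else np.zeros(3)
    iback = pix[~m].mean(axis=0) if (~m).any() else np.zeros(3)
    return iobj, iback
```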

Referring to FIGS. 3 and 4, the parameter calculation unit 53 of the color correction unit 50 can calculate the color correction parameter using the color information extracted by the color information extraction unit 51. For example, referring to FIG. 4, the color correction parameter may be calculated, in each of the first and second images, as the sum of the product of the first weight and the average object color information and the product of the second weight and the average remaining color information. In this case, the sum of the first weight and the second weight may be a predetermined weight value, and each of the first and second weights may be an experimentally obtained set value in the interval from '0' to the weight value. Also, at this time, in one example, the first weight may be '0'.

Referring to FIG. 4, the parameter calculating unit 53 can calculate the correction parameters Ival1 and Ival2 of the overlap images I1 and I2 by applying arbitrary weights a and b to the extracted color information of the object region and of the region other than the object. That is, the correction parameter of the overlap image I1 is calculated as Ival1 = a * Iobj1 + b * Iback1, and the correction parameter of the overlap image I2 is calculated as Ival2 = a * Iobj2 + b * Iback2, where Iobj1 and Iobj2 are the average color information of the object region in the overlap images I1 and I2, respectively, and Iback1 and Iback2 are the average color information of the remaining region in the overlap images I1 and I2, respectively. At this time, if the color information of the object area is not to be reflected at all, the weight a can be set to '0'. In FIG. 4, reference numeral 53a denotes the correction parameter calculation block of the camera 1 image, and reference numeral 53b denotes the correction parameter calculation block of the camera 2 image.
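The correction parameter itself is then a one-line weighted sum per image. In the sketch below, the defaults a = 0 and b = 1 mirror the example where the object color is not reflected at all; the assumption that a + b = 1 is only one possible choice of the preset weight value.

```python
import numpy as np

def correction_parameter(iobj, iback, a=0.0, b=1.0):
    """Ival = a * Iobj + b * Iback, computed per color channel."""
    return a * np.asarray(iobj) + b * np.asarray(iback)
```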

Referring to FIGS. 3 and 4, the correction ratio calculating unit 55 of the color correcting unit 50 can calculate the correction ratio using the color correction parameters calculated by the parameter calculating unit 53. For example, in one example, when the reference of the color correction is the first image, the correction ratio may be the ratio between the color correction parameters of the first and second images.

Referring to FIG. 4, the correction ratio is a parameter for correcting the color information, which can be calculated as a ratio between Ival1 and Ival2 and can be used for color correction of the final output image. For example, referring to FIG. 4, when the image of camera 1 is used as the reference image, the correction ratio can be calculated as Ival1 / Ival2.

Referring to FIGS. 3 and 4, the color correction performing unit 57 of the color correction unit 50 can perform color correction on the remaining area by applying the correction ratio calculated by the correction ratio calculation unit 55. For example, in one example, when the reference of the color correction is the first image, the color correction performing unit 57 may perform color correction by multiplying the color information of each of the remaining areas by the correction ratio.

For example, referring to FIG. 4, when the image of camera 1 is used as the reference image, color correction may be performed as I1* = I1 × correction ratio, where I1 is the overlap image of camera 1 and I1* is the color-corrected overlap image. At this time, since the color correction is performed only on the remaining area of the overlapping region, the corrected values can be obtained by applying the correction ratio to the pixels corresponding to the remaining area in the overlap image I1. Alternatively, the color correction may instead be performed as I1'' = I1' × correction ratio, where I1' is the image color information of the remaining area of the overlapping region and I1'' is the color-corrected image color information of the remaining area.
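Applying the correction only to the remaining area can be sketched as below. The translation is ambiguous about which of the two overlap images the ratio should be applied to, so the sketch takes the per-channel ratio as a given input and simply scales the non-object pixels, per I1'' = I1' × correction ratio.

```python
import numpy as np

def apply_color_correction(img, object_mask, ratio):
    """Scale only the remaining (non-object) pixels of an overlap
    image by the per-channel correction ratio.
    img: (H, W, 3) float array; object_mask: (H, W) bool."""
    out = img.copy()
    out[~object_mask] = np.clip(out[~object_mask] * ratio, 0.0, 255.0)
    return out
```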

Referring to FIGS. 1 and 2, the image synthesizing unit 70 may perform image synthesis by connecting the color-corrected region with the images forming the overlapping area.
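The patent does not specify how the corrected overlap is connected to the neighboring images during synthesis, so the fixed alpha blend below is only an illustrative placeholder for that step.

```python
import numpy as np

def blend_overlap(corrected1, corrected2, alpha=0.5):
    """Blend two color-corrected overlap images during synthesis."""
    return alpha * corrected1 + (1.0 - alpha) * corrected2
```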

In addition, referring to FIG. 2, the vehicle surroundings image correction system according to another example may further include a display unit 90 for displaying the synthesized image.

Next, a vehicle periphery image correction method according to another aspect of the present invention will be described in detail with reference to the following drawings. At this time, the vehicle periphery image correction systems according to the above embodiment and FIGS. 1 to 5 will be referred to, and redundant explanations therefor may be omitted.

FIG. 6 is a flowchart schematically illustrating a method of correcting a surrounding image of a vehicle according to another embodiment of the present invention, FIG. 7 is a flowchart schematically illustrating a method of correcting a surrounding image of a vehicle according to another embodiment of the present invention, and FIG. 8 is a flowchart schematically showing a part of a method for correcting a surrounding image according to an embodiment of the present invention.

Referring to FIGS. 6 and 7, a method for correcting a surrounding image according to an embodiment of the present invention includes an image obtaining step S100, an object detecting step S300, a color correcting step S500, and an image synthesizing step S700. In addition, referring to FIG. 7, the method according to another example may further include a display step S900. Each step is described in detail below.

Referring to FIGS. 6 and 7, in the image acquisition step S100, images around the vehicle are acquired by the plurality of cameras 11 to 14. At this time, images obtained from at least some neighboring cameras contain overlapping regions. Referring to FIG. 5, when the object B exists in the overlap areas OA1 to OA4, the color distributions OA_B1 and OA_B2 of the overlap area OA1 may differ depending on the direction of light or the shadow.

For example, referring to FIG. 5, the plurality of cameras 11 to 14 may be four cameras that photograph the front, rear, left, and right directions of the vehicle.

Next, referring to FIG. 6 or FIG. 7, in the object detecting step S300, the object B can be detected in the overlapping areas OA1 to OA4 of the images obtained in the image obtaining step S100. For example, in the object detection step S300, the overlap areas OA1 to OA4 of the images acquired in the image acquisition step S100 may be identified and the object B detected within them; detecting an object only in the overlapping area enables high-speed operation. Alternatively, in the object detecting step S300, the object B may be detected anywhere in the images obtained in the image acquiring step S100, after which it is determined whether the detected object B lies within the overlapping areas OA1 to OA4. Also, in the process of detecting an object in the object detection step S300, the overlapping regions of images acquired by two neighboring cameras can be detected.

Referring to FIGS. 6, 7, and/or 8, in the color correction step S500, color correction may be performed in the remaining area of the overlap region excluding the object.

For example, referring to FIG. 8, in one example, the color correction step S500 includes a color information extraction step S510, a color correction parameter calculation step S530, a correction ratio calculation step S550, and a color correction performing step S570.

Specifically, in the color information extracting step S510 of FIG. 8, the color information of the overlapping area can be extracted. For example, referring to the color information extracting unit 51 of FIG. 4, in the color information extracting step S510, the average object color information of the object region and the average remaining color information of the remaining region can be extracted from each of the first and second images forming the overlap region.

Referring to FIG. 8, in the color correction parameter calculation step S530 of the color correction step S500, the color correction parameter may be calculated using the color information extracted in the color information extraction step S510. For example, referring to FIG. 4, in the color correction parameter calculation step S530, the color correction parameter is calculated, in each of the first and second images, as the sum of the product of the first weight and the average object color information and the product of the second weight and the average remaining color information. In this case, the sum of the first weight and the second weight may be a predetermined weight value, and each of the first and second weights may be an experimentally obtained set value in the interval from '0' to the weight value. For example, in one example, the first weight may be '0'.

Next, referring to FIG. 8, in the correction ratio calculation step S550 of the color correction step S500, the correction ratio can be calculated using the color correction parameter calculated in the color correction parameter calculation step S530. For example, referring to the correction ratio calculating unit 55 of FIG. 4, in the correction ratio calculation step S550, when the reference of the color correction is the first image, the correction ratio may be the ratio between the color correction parameters of the first and second images.

Next, referring to FIG. 8, in the color correction performing step S570 of the color correction step S500, color correction can be performed on the remaining area by applying the correction ratio calculated in the correction ratio calculation step S550. For example, referring to the color correction performing unit 57 of FIG. 4, in the color correction performing step S570, the color correction may be performed by multiplying the color information of each of the remaining areas by the correction ratio.
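Putting steps S510 to S570 together for a single overlap, with camera 1 as the reference image, might look like the self-contained sketch below. The direction of the ratio (Ival1 / Ival2, applied to camera 2's remaining area so that it matches camera 1) is one coherent reading of the description, not a statement of the patent's exact figure.

```python
import numpy as np

def masked_mean(img, mask):
    # Average color over the masked pixels; zeros if the mask is empty.
    pix = img.reshape(-1, 3)[mask.reshape(-1)]
    return pix.mean(axis=0) if len(pix) else np.zeros(3)

def color_correction_step(i1, i2, mask1, mask2, a=0.0, b=1.0):
    """S510-S570 for one overlap. i1, i2: (H, W, 3) float overlap
    images; mask1, mask2: bool object masks from step S300."""
    # S510: average color info of the object and remaining regions
    iobj1, iback1 = masked_mean(i1, mask1), masked_mean(i1, ~mask1)
    iobj2, iback2 = masked_mean(i2, mask2), masked_mean(i2, ~mask2)
    # S530: correction parameters Ival = a*Iobj + b*Iback
    ival1 = a * iobj1 + b * iback1
    ival2 = a * iobj2 + b * iback2
    # S550: correction ratio with camera 1 as the reference
    ratio = ival1 / ival2
    # S570: multiply only the remaining-area pixels by the ratio
    out = i2.copy()
    out[~mask2] = np.clip(out[~mask2] * ratio, 0.0, 255.0)
    return out
```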

Referring to FIGS. 6 and 7, in the image synthesis step S700, image synthesis may be performed by connecting the color-corrected region from the color correction step S500 with the images forming the overlap region.

In addition, referring to FIG. 7, the method for correcting the surrounding image according to one example may further include displaying the synthesized image.

The foregoing embodiments and accompanying drawings are intended to illustrate the present invention so as to facilitate understanding by those skilled in the art, not to limit its scope. Embodiments combining the above-described configurations in various ways can also be implemented by those skilled in the art from the foregoing detailed description. Accordingly, the present invention may be embodied in various forms without departing from its essential characteristics, and its scope should be construed in accordance with the appended claims, including all alternatives and equivalents apparent to those skilled in the art.

10: image acquiring unit 11, 12, 13, 14: camera
30: Object detection unit 50: Color correction unit
51: color information extracting unit 53: parameter calculating unit
55: Correction ratio calculation unit 57: Color correction performing unit
70: image synthesizer 90: display

Claims (14)

An image acquiring unit for acquiring images of the surroundings of the vehicle with a plurality of cameras;
An object detecting unit for detecting an object in the overlapping region of the acquired images;
A color correcting unit for performing color correction in the remaining areas except for the object among the overlapping areas; And
And an image synthesis unit for synthesizing the color-corrected region and the images forming the overlap region to perform image synthesis.
The system according to claim 1,
Wherein the color correction unit comprises:
A color information extracting unit for extracting color information of the overlapping area;
A parameter calculation unit for calculating a color correction parameter using the extracted color information;
A correction ratio calculating unit for calculating a correction ratio using the calculated color correction parameter; And
And a color correction executing unit for performing color correction on the remaining area by applying the calculated correction ratio.
The system of claim 2,
Wherein the color information extracting unit extracts average object color information of the object region and average remaining color information of the remaining region in each of the first and second images forming the overlap region,
Wherein the color correction parameter is calculated, in each of the first and second images, as the sum of the product of a first weight and the average object color information and the product of a second weight and the average remaining color information,
Wherein the sum of the first weight and the second weight is a preset weight value, and each of the first and second weights is an experimentally obtained set value among the interval values from '0' to the weight value.
The system of claim 3,
Wherein the first weight is '0'.
The system of claim 3,
Wherein the correction ratio is a ratio of a color correction parameter of the second image to a color correction parameter of the first image when the reference of the color correction is the first image,
Wherein the color correction performing unit multiplies the color information of each of the remaining regions by the correction ratio to perform the color correction.
The system according to any one of claims 1 to 5,
Wherein the plurality of cameras are four cameras for photographing the front, rear, left, and right directions of the vehicle.
The system of claim 6,
And a display unit for displaying the synthesized image.
Acquiring images around the vehicle with a plurality of cameras;
Detecting an object in the overlapping region of the acquired images;
Performing color correction in an area other than the object among the overlapping areas; And
And performing image synthesis by connecting the color-corrected area and the images forming the overlapping area.
The method of claim 8,
The step of performing the color correction includes:
Extracting color information of the overlapping region;
Calculating a color correction parameter using the extracted color information;
Calculating a correction ratio using the calculated color correction parameter; And
And performing color correction on the remaining area by applying the calculated correction ratio.
The method of claim 9,
Wherein the extracting of the color information extracts average object color information of the object region and average remaining color information of the remaining region in each of the first and second images forming the overlap region,
Wherein the color correction parameter is calculated, in each of the first and second images, as the sum of the product of a first weight and the average object color information and the product of a second weight and the average remaining color information,
Wherein the sum of the first weight and the second weight is a preset weight value, and each of the first and second weights is an experimentally obtained set value among the interval values from '0' to the weight value.
The method of claim 10,
Wherein the first weight is '0'.
The method of claim 10,
Wherein the correction ratio is a ratio of a color correction parameter of the second image to a color correction parameter of the first image when the reference of the color correction is the first image,
Wherein, in performing the color correction, the color information of each of the remaining areas is multiplied by the correction ratio.
The method according to any one of claims 8 to 12,
Wherein the plurality of cameras are four cameras for photographing the front, rear, left, and right directions of the vehicle.
The method of claim 13,
Further comprising the step of displaying the synthesized image.
KR1020130033464A 2013-03-28 2013-03-28 System and method for calibrating around view of vehicle KR20140118115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130033464A KR20140118115A (en) 2013-03-28 2013-03-28 System and method for calibrating around view of vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130033464A KR20140118115A (en) 2013-03-28 2013-03-28 System and method for calibrating around view of vehicle

Publications (1)

Publication Number Publication Date
KR20140118115A true KR20140118115A (en) 2014-10-08

Family

ID=51991016

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130033464A KR20140118115A (en) 2013-03-28 2013-03-28 System and method for calibrating around view of vehicle

Country Status (1)

Country Link
KR (1) KR20140118115A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170099252A (en) * 2016-02-23 2017-08-31 두산인프라코어 주식회사 Display system
WO2017146403A1 (en) * 2016-02-23 2017-08-31 두산인프라코어 주식회사 Display system
KR20190075034A (en) * 2019-06-20 2019-06-28 주식회사 아이닉스 Imaging Apparatus and method for Automobile

Similar Documents

Publication Publication Date Title
JP5108605B2 (en) Driving support system and vehicle
KR101811157B1 (en) Bowl-shaped imaging system
US9767545B2 (en) Depth sensor data with real-time processing of scene sensor data
US20180292201A1 (en) Calibration apparatus, calibration method, and calibration program
US9387804B2 (en) Image distortion compensating apparatus and operating method thereof
US20160217625A1 (en) Image processing apparatus, image processing method, and program
US10719949B2 (en) Method and apparatus for monitoring region around vehicle
CN104851076A (en) Panoramic 360-degree-view parking auxiliary system for commercial vehicle and pick-up head installation method
JP5068134B2 (en) Target area dividing method and target area dividing apparatus
EP2061234A1 (en) Imaging apparatus
JP2008530667A (en) Method and apparatus for visualizing the periphery of a vehicle by fusing infrared and visible images
JP2002359838A (en) Device for supporting driving
CN112224132A (en) Vehicle panoramic all-around obstacle early warning method
KR20220012375A (en) Apparatus and method for providing around view
KR101705558B1 (en) Top view creating method for camera installed on vehicle and AVM system
JP4830380B2 (en) Vehicle periphery monitoring device and vehicle periphery monitoring method
US20190174065A1 (en) Image display apparatus
WO2019198399A1 (en) Image processing device and method
US10427683B2 (en) Vehicle display device and vehicle display method for displaying images
KR20140118115A (en) System and method for calibrating around view of vehicle
CN111527517A (en) Image processing apparatus and control method thereof
JP6855254B2 (en) Image processing device, image processing system, and image processing method
EP3896387B1 (en) Image processing device
JP6565674B2 (en) VEHICLE DISPLAY DEVICE AND VEHICLE DISPLAY METHOD
KR20110082873A (en) Image processing apparatus providing distacnce information in a composite image obtained from a plurality of image and method using the same

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application