JP4052226B2 - Image processing apparatus for vehicle - Google Patents


Info

Publication number
JP4052226B2
JP4052226B2 (application number JP2003372959A)
Authority
JP
Japan
Prior art keywords
vehicle
attention area
image processing
region
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2003372959A
Other languages
Japanese (ja)
Other versions
JP2005135308A (en)
Inventor
琢 高浜 (Taku Takahama)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissan Motor Co Ltd filed Critical Nissan Motor Co Ltd
Priority to JP2003372959A
Publication of JP2005135308A
Application granted
Publication of JP4052226B2
Anticipated expiration
Status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Description

The present invention relates to a vehicular image processing apparatus that recognizes a preceding vehicle or a road white line by processing images captured by a camera.

A known vehicle detection apparatus installs a camera and a millimeter-wave radar on roadside infrastructure; when roadside signs can be recognized by the camera's image processing but vehicles cannot, it judges that the driving environment involves snowfall or the like, and detects vehicles using the millimeter-wave radar instead (see, for example, Patent Document 1).

Prior art documents related to the invention of this application include the following.
JP 2001-084485 A

However, when the camera is mounted on a vehicle, the position of roadside signs in the image is not fixed, unlike images from a camera installed on roadside infrastructure, so detecting signs by image processing of the in-vehicle camera is not easy. When a sign cannot be detected, or when the vehicle travels through an area without signs, judging the driving environment becomes difficult.

(1) The invention of claim 1 sets, within the image captured by imaging means that images the scene ahead of the host vehicle, a region containing a preceding vehicle, a road white line, or a sign (the attention region) and the remaining region (the non-attention region); detects the variance of luminance values in the attention region and the non-attention region and the variation of the history of each variance; and, when the luminance variance in the attention region is smaller than a predetermined value and either the luminance variance in the non-attention region is larger than a predetermined value or the variation in the history of the non-attention region's luminance variance is larger than a predetermined value, concludes that the part where image processing is difficult is a local part of the image and determines that the vehicle is in a driving environment in which situation analysis ahead of the host vehicle by image processing is possible.
(2) The invention of claim 2 sets the same attention and non-attention regions and detects the same variances and history variations; when the luminance variance in the attention region is smaller than a predetermined value or the variation in the history of the attention region's luminance variance is larger than a predetermined value, and the luminance variance in the non-attention region is smaller than a predetermined value, it concludes that the part where image processing is difficult is the entire image and determines that the vehicle is in a driving environment in which situation analysis ahead of the host vehicle by image processing is difficult.
(3) The invention of claim 3 sets the same regions and detects the same quantities; when the variation in the history of the attention region's luminance variance is larger than a predetermined value and the luminance variance in the non-attention region is larger than a predetermined value, it concludes that the difficult part is a local part of the image and determines that situation analysis ahead of the host vehicle by image processing is possible.
(4) The invention of claim 4 sets the same regions and detects the same quantities; when the variation in the history of the luminance variance in both the attention region and the non-attention region is larger than a predetermined value, it concludes that the difficult part is the entire image and determines that situation analysis ahead of the host vehicle by image processing is difficult.
(5)-(8) The inventions of claims 5 to 8 apply the same judgments as claims 1 to 4, respectively, except that the attention region is the region in which an obstacle has been detected by obstacle detection means that detects obstacles ahead of the host vehicle, rather than a region containing a preceding vehicle, road white line, or sign.

According to the present invention, it can be determined accurately whether the vehicle is in a driving environment in which it is difficult to analyze the situation ahead of the host vehicle by processing camera images, and the reliability of information analysis based on camera images can be improved.

FIG. 1 shows the configuration of an embodiment. The laser radar 1 is a scanning laser radar installed at the front of the vehicle; it scans a predetermined range ahead of the vehicle with laser light. The radar processing device 2 extracts objects, including preceding vehicles, from the scan results of the laser radar 1. For each object, the radar processing device 2 calculates two-dimensional coordinates (inter-vehicle-distance direction and vehicle-width direction) with the host vehicle as the origin, and calculates the width (size) of the object.

The camera 3 is a progressive-scan 3-CCD camera installed at the top center of the front window inside the cabin; it images the scene ahead of the host vehicle at high speed. The image processing device 4 performs image processing with the neighborhood of the coordinates of an object captured by the radar processing device 2 as the attention region, so that even when the radar 1 loses sight of a detected object due to pitching of the host vehicle or the like, the object continues to be detected in the camera image.

The external environment recognition device 5 recognizes the situation of the outside world. In addition to the radar processing device 2 and the image processing device 4, a vehicle speed detection device 6, which detects vehicle speed from the rotation of the driven wheels, and a steering angle detection device 7, which detects the steering angle, are connected to it. It selects between each object position detected by the radar processing device 2 and the object position tracked by the image processing device 4, accurately judges whether the object is an obstacle for the host vehicle, and outputs the judgment result to the automatic brake control device 8. The automatic brake control device 8 outputs a drive signal to the negative-pressure brake booster 9 to generate braking force at the front and rear wheels.

The radar processing device 2, image processing device 4, external environment recognition device 5, and automatic brake control device 8 each include a microcomputer and drive circuits for various actuators, and exchange information with one another via communication circuits.

FIG. 2 is a flowchart of the program, executed by the external environment recognition device 5, that judges degradation of obstacle detection performance by camera image processing. The operation of the embodiment is described with reference to this flowchart. The external environment recognition device 5 executes this camera detection performance degradation judgment program at a predetermined interval, for example every 50 msec.

In step 201, the position of each object detected by the laser radar 1 and the radar processing device 2 is read as (rPx_z0[i], rPy_z0[i]). Here the subscript x denotes the lateral (vehicle-width direction) object position and the subscript y the longitudinal (inter-vehicle-distance direction) object position. i is an integer of 0 or more that serves as the ID number of each detected object, z0 denotes the current value, and z1 the value one sampling period (100 msec) earlier. In step 202, the image captured by the camera 3 at the current sampling is read.

In step 203, the single object of greatest interest is selected from the objects detected in step 201. Concretely, the objects whose absolute lateral position is smaller than a predetermined value, i.e., the objects close to the front of the host vehicle, are selected, and among them the object with the smallest longitudinal position, i.e., the one closest to the host vehicle, is chosen. The ID number of that object is stored as slct. If no object satisfies the lateral position condition, slct = -1. The object of greatest interest may be a road sign as well as a vehicle ahead of the host vehicle.
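The selection rule of step 203 can be sketched as follows. This is a minimal illustration; the threshold name LATERAL_LIMIT, its value, and the list-of-tuples input format are assumptions, not taken from the patent.

```python
# Sketch of step 203: pick the object nearest the host vehicle among those
# close to its centerline. Each detected object is a tuple (rPx, rPy):
# lateral and longitudinal position [m] with the host vehicle as origin.
LATERAL_LIMIT = 1.8  # assumed "predetermined value" for |lateral position| [m]

def select_target(objects):
    """Return the ID (index) of the closest near-center object, or -1 (slct = -1)."""
    slct = -1
    best_rpy = float("inf")
    for i, (rpx, rpy) in enumerate(objects):
        if abs(rpx) < LATERAL_LIMIT and rpy < best_rpy:
            best_rpy = rpy
            slct = i
    return slct
```

For example, among objects at (0.5, 30.0), (-0.2, 18.0), and (3.0, 5.0), the third is rejected for lateral offset and the second, being closest, is selected.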

In the following step 204, it is checked whether an object close to the front of the host vehicle and closest to it was found. If the ID number slct is negative and no object of greatest interest exists, the process proceeds to step 224; if slct is zero or positive and an object of greatest interest exists, it proceeds to step 205.

If an object close to the front of the host vehicle and closest to it exists, that is, an object of greatest interest such as a preceding vehicle or a road sign, its position is transformed in step 205 into a region in the camera image by the following equations, and this region is set as the attention region.
disp_obj_YA = y0 + (focusV*CAM_h2/rPy_z0[slct]),
disp_obj_YB = y0 + (focusV*CAM_h/rPy_z0[slct]),
disp_obj_XL = x0 + (focusH/rPy_z0[slct])*rPx_z0[slct] - (focusH*width/rPy_z0[slct]),
disp_obj_XR = x0 + (focusH/rPy_z0[slct])*rPx_z0[slct] + (focusH*width/rPy_z0[slct])   ...(1)
In equation (1), disp_obj_** are the coordinates of the edges of the rectangular region to be image-processed: disp_obj_YA is the top edge of the rectangle, disp_obj_YB the bottom edge, disp_obj_XL the left edge, and disp_obj_XR the right edge, in image coordinates. y0 is the ordinate [pix: pixels] of the vanishing point and x0 its abscissa [pix]; these coordinates x0 and y0 are parameters determined by the mounting position and orientation of the camera 3. focusV is the vertical focal length of the camera 3 converted to pixels [pix], and focusH its horizontal focal length converted to pixels [pix]; focusV and focusH are parameters determined by the angle of view of the camera 3 and the resolution of its light-receiving element, and focusV = focusH when the light-receiving surface is a square grid. CAM_h is the mounting height of the camera 3 (in meters), and CAM_h2 is the value obtained by subtracting from CAM_h the height obj_H (in meters) of objects that should be considered as obstacle candidates.

width is obtained by the following equation.
width = (focusH/rPy_z0[slct]) * {(Rw[slct] + Rx_vari + Rw_vari)/2}   ...(2)
In equation (2), Rw[i] is the width of the object with ID number i among the objects detected by the laser radar 1, Rx_vari is the detection accuracy (standard deviation [m]) of the laser radar 1 with respect to lateral position, and Rw_vari is its detection accuracy (standard deviation [m]) with respect to width.

Because this embodiment uses the scanning laser radar 1, the width of a detected object is available. With a millimeter-wave radar, which is robust in bad weather, or a very inexpensive multi-beam laser radar, the width of a detected object cannot be obtained. In that case, half of the width [m] determined by the sum of the radar's lateral-position detection accuracy (standard deviation [m]) and the maximum width of objects to be considered as obstacles is converted into pixels, and that value [pix] is used in place of equation (2).
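Equations (1) and (2) can be sketched together as a single projection function. The numeric parameter defaults below (vanishing point, focal lengths, camera height, obstacle height, radar deviations) are illustrative assumptions, not values from the patent.

```python
# Sketch of step 205, equations (1)-(2): project the radar-detected object
# position into an image rectangle (the attention region).
def attention_region(rpx, rpy, rw,
                     x0=320.0, y0=240.0,          # vanishing point [pix] (assumed)
                     focusH=600.0, focusV=600.0,  # focal lengths [pix] (assumed)
                     CAM_h=1.2, obj_H=0.5,        # camera / obstacle height [m] (assumed)
                     Rx_vari=0.3, Rw_vari=0.3):   # radar std deviations [m] (assumed)
    CAM_h2 = CAM_h - obj_H
    # Equation (2): half-width of the rectangle in pixels; this equals the
    # focusH*width/rPy term of equation (1) with width taken in meters.
    width_pix = (focusH / rpy) * (rw + Rx_vari + Rw_vari) / 2.0
    cx = x0 + (focusH / rpy) * rpx               # object center column [pix]
    return {
        "YA": y0 + focusV * CAM_h2 / rpy,        # disp_obj_YA (top edge)
        "YB": y0 + focusV * CAM_h / rpy,         # disp_obj_YB (bottom edge)
        "XL": cx - width_pix,                    # disp_obj_XL (left edge)
        "XR": cx + width_pix,                    # disp_obj_XR (right edge)
    }
```

Note how the rectangle shrinks with distance: all four edge offsets scale with 1/rPy, so a vehicle twice as far away occupies a region half as tall and half as wide.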

In step 206, the non-attention region is defined on the screen by the following equations.
negligible_area_YA = func_bigger(disp_obj_YB, y_upper_bottom),
negligible_area_YB = y_lower_bottom,
negligible_area_XL = x_left_hand,
negligible_area_XR = x_right_hand   ...(3)
In equation (3), func_bigger(a1, a2) is a function that compares a1 and a2 and selects the larger value; thus when disp_obj_YB lies lower on the screen (i.e., has a larger value) than the predetermined value y_upper_bottom, negligible_area_YA = disp_obj_YB. y_lower_bottom is the lowest vertical coordinate of the road area visible on the screen (the area in which the host vehicle's hood does not appear). x_left_hand and x_right_hand are horizontal coordinates separated by a predetermined interval narrower than the host vehicle's lane width; they are set so that the white lines are not included. The premise that the road surface pattern is uniform does not hold if a white line is included, so if the non-attention region contained a white line, the luminance-variance measure described below would not function correctly.
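Step 206 reduces to a few coordinate assignments. In this sketch the screen coordinates (y_upper_bottom, y_lower_bottom, x_left_hand, x_right_hand) are illustrative assumptions for a 640x480 image, not values from the patent.

```python
# Sketch of step 206, equation (3): the non-attention region is the road patch
# below the attention region, kept narrower than the lane so the white lines
# stay outside it.
def non_attention_region(disp_obj_YB,
                         y_upper_bottom=300,  # assumed predetermined value [pix]
                         y_lower_bottom=470,  # lowest hood-free road row [pix] (assumed)
                         x_left_hand=220, x_right_hand=420):  # inside the lane (assumed)
    return {
        "YA": max(disp_obj_YB, y_upper_bottom),  # func_bigger(disp_obj_YB, y_upper_bottom)
        "YB": y_lower_bottom,
        "XL": x_left_hand,
        "XR": x_right_hand,
    }
```

When the attention region's bottom edge lies below y_upper_bottom, the non-attention region starts right under the target vehicle; otherwise it starts at the fixed row.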

In step 207, the variance of luminance values is calculated for the attention region set in step 205. In this embodiment it is calculated as a standard deviation, for ease of understanding.
notable_bright_V_z0 = sqrt{ [ (region[X_left, Y_top] - a)^2
                            + (region[X_left+1, Y_top] - a)^2
                            + ...
                            + (region[X_right, Y_btm] - a)^2 ] / TotalPix }
a = ( region[X_left, Y_top]
    + region[X_left+1, Y_top]
    + ...
    + region[X_right, Y_btm] ) / TotalPix   ...(4)
In equation (4), region[X, Y] refers to the attention region obtained in step 205, where X and Y are coordinate values within the region. That is, when calculating the luminance variance of the attention region, X_left = disp_obj_XL, Y_top = disp_obj_YA, X_right = disp_obj_XR, and Y_btm = disp_obj_YB; the mean luminance a over the region from top-left to bottom-right is computed, and the standard deviation notable_bright_V_z0 is calculated. TotalPix is the total number of pixels (area) of the region being processed. The suffix _z0 indicates the luminance variance at the current sampling period, and _zn the luminance variance n sampling periods in the past.
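Equation (4) is the population standard deviation over a rectangular pixel block, which can be written directly:

```python
# Sketch of step 207, equation (4): standard deviation of luminance over a
# rectangular region (mean a first, then the RMS deviation), with inclusive
# bounds as in the patent's notation.
import math

def luminance_stddev(image, X_left, Y_top, X_right, Y_btm):
    """image[y][x] holds a luminance value; bounds are inclusive."""
    pixels = [image[y][x]
              for y in range(Y_top, Y_btm + 1)
              for x in range(X_left, X_right + 1)]
    total_pix = len(pixels)                  # TotalPix (region area in pixels)
    a = sum(pixels) / total_pix              # mean luminance a
    return math.sqrt(sum((p - a) ** 2 for p in pixels) / total_pix)
```

A uniform region (e.g., a fogged-over or saturated patch) yields a standard deviation of 0, which is exactly the "flat" signature the later conditions test for.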

In step 208, the luminance variance of the non-attention region set in step 206 is obtained in the same way as in step 207, and is denoted negligible_bright_V_z0.

Next, in step 209, the temporal variation is calculated from the history of the attention region's luminance variance obtained in step 207.
notable_bright_VV = sqrt{ [ (notable_bright_V_z0 - av)^2
                          + (notable_bright_V_z1 - av)^2
                          + ...
                          + (notable_bright_V_zTC - av)^2 ] / (TC + 1) }
av = ( notable_bright_V_z0
     + notable_bright_V_z1
     + ...
     + notable_bright_V_zTC ) / (TC + 1)   ...(5)
In equation (5), TC is a predetermined integer obtained by dividing the length of past history to be considered by the sampling period.
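Equation (5) applies the same standard-deviation formula, but over the time axis: the inputs are the last TC+1 per-frame variance values rather than pixels.

```python
# Sketch of step 209, equation (5): temporal variation of the variance history,
# computed as the standard deviation of the last TC+1 samples.
import math

def history_variation(variance_history):
    """variance_history: [V_z0, V_z1, ..., V_zTC], newest first (TC+1 samples)."""
    n = len(variance_history)                # TC + 1
    av = sum(variance_history) / n           # mean of the history
    return math.sqrt(sum((v - av) ** 2 for v in variance_history) / n)
```

A steady history (stable scene) gives a variation near 0; a flickering history, as under wiper sweeps or splashing spray, gives a large value.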

In step 210, the temporal variation is calculated in the same way from the history of the non-attention region's luminance variance obtained in step 208, and is denoted negligible_bright_VV.

In step 211, the luminance variances of the regions obtained in steps 207 to 210 and the temporal variations of their histories are compared. If either of the following conditions 1 and 2 is satisfied, the process proceeds to step 212; otherwise it proceeds to step 213.
Condition 1: notable_bright_V_z0 < Th_ntb_v1 and negligible_bright_V_z0 > Th_ngl_v1
Condition 2: notable_bright_V_z0 < Th_ntb_v1 and negligible_bright_VV > Th_ngl_vv
   ...(6)
In equation (6), Th_ntb_v1 is a threshold for judging that the attention region's luminance variance is small, and Th_ngl_v1 is a threshold for judging that the non-attention region's luminance variance is large. Th_ngl_vv is a threshold for judging that the history of the non-attention region's luminance variance varies greatly, that is, that the variance history is unstable.

In step 212, the degree of image processing difficulty at the current sampling, diagnosis_state_z0, is set to 3, and the image processing difficulty states camera_state1_z0 to camera_state4_z0 are smoothed.
diagnosis_state_z0 = 3
camera_state1_z0 = fg1*camera_state1_z1 + 0
camera_state2_z0 = fg2*camera_state2_z1 + 0
camera_state3_z0 = fg3*camera_state3_z1 + (1 - fg3)
camera_state4_z0 = fg4*camera_state4_z1 + 0   ...(7)
In equation (7), fg1 to fg4 are positive numbers less than 1 that act as forgetting factors for each kind of difficulty.

That is, when either the attention region's luminance variance is smaller than the predetermined value and the non-attention region's luminance variance is larger than the predetermined value (condition 1), or the attention region's luminance variance is smaller than the predetermined value and the variation in the history of the non-attention region's luminance variance is larger than the predetermined value and thus unstable (condition 2), the process proceeds from step 211 to step 212, and diagnosis_state_z0, the degree of image processing difficulty at the current sampling, is set to state 3 (image processing difficult). As described in detail later, in this case it is determined that the cause of the degraded performance of the camera-image-based analysis of the situation ahead of the vehicle lies in a local part of the image, namely the attention region. This makes it possible to correctly identify why camera image processing is difficult.
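The smoothing of equation (7) (and of equation (9), which differs only in which state receives the input term) is a first-order low-pass update of the four difficulty states. The forgetting-factor values here are illustrative assumptions.

```python
# Sketch of equations (7)/(9): exponential smoothing of the four image
# processing difficulty states with forgetting factors fg1..fg4. Only the
# state diagnosed at this sampling receives the (1 - fg) input term.
def smooth_states(prev, active, fg=(0.9, 0.9, 0.9, 0.9)):
    """prev: [camera_state1_z1, ..., camera_state4_z1];
    active: 0-based index of the state diagnosed this sampling
    (equation (7) uses active=2 for state 3; equation (9) uses active=0)."""
    return [fg[k] * prev[k] + ((1.0 - fg[k]) if k == active else 0.0)
            for k in range(4)]
```

Each state thus rises toward 1 while its condition keeps being diagnosed and decays geometrically toward 0 otherwise, so transient single-frame diagnoses do not flip the overall judgment.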

In step 213, the variances of the luminance values of the regions obtained in steps 207 to 210 are compared with the temporal variation of their histories. If either of the following conditions 3 and 4 is satisfied, the process proceeds to step 214; otherwise it proceeds to step 215.
Condition 3: notable_bright_V_z0 < Th_ntb_v1 and negligible_bright_V_z0 < Th_ngl_v2
Condition 4: notable_bright_VV > Th_ntb_vv and negligible_bright_V_z0 < Th_ngl_v2
... (8)
In equation (8), Th_ngl_v2 is a threshold for judging that the variance of the luminance values in the non-attention area is small, and Th_ntb_vv is a threshold for judging that the history of the variance in the attention area varies widely, i.e., that the variance history is unstable.
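Step 213 compares two kinds of statistics per region: the current variance of the luminance values (e.g. notable_bright_V_z0) and the variation of that variance over recent samplings (e.g. notable_bright_VV). A sketch of how these could be computed is shown below; the choice of computing the history variation as the variance of the stored variances is an assumption for illustration, not taken from the text:

```python
def luminance_variance(pixels):
    """Variance of the luminance values of one region (attention or non-attention),
    e.g. notable_bright_V_z0 for the attention area."""
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def history_variation(variance_history):
    """Variation of the variance over past samplings; a large value means the
    region's variance history is unstable (e.g. notable_bright_VV)."""
    # Illustrative assumption: measure the spread of the stored variances
    # as the variance of that history.
    return luminance_variance(variance_history)
```

In each sampling the current variance would be appended to the region's history buffer, and both values fed to the threshold comparisons of steps 211, 213, 215, and 217.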

In step 214, the degree of image processing difficulty for the current sampling, diagnosis_state_z0, is set to 1, and the difficulty states camera_state1_z0 through camera_state4_z0 are smoothed:
diagnosis_state_z0 = 1
camera_state1_z0 = fg1·camera_state1_z1 + (1 − fg1)
camera_state2_z0 = fg2·camera_state2_z1 + 0
camera_state3_z0 = fg3·camera_state3_z1 + 0
camera_state4_z0 = fg4·camera_state4_z1 + 0 ... (9)

That is, when either condition 3 (the variance of the luminance values in the attention area is smaller than a predetermined value and the variance in the non-attention area is smaller than a predetermined value) or condition 4 (the variation in the history of the variance in the attention area is larger than a predetermined value, i.e., unstable, and the variance in the non-attention area is smaller than a predetermined value) holds, the process proceeds from step 213 to step 214, and diagnosis_state_z0 for the current sampling is set to difficulty state 1. As described in detail later, in this case it is determined that the cause of the degraded performance of the camera-image-based analysis of the situation ahead of the host vehicle lies in the entire image, that is, in both the attention area and the non-attention area. This makes it possible to correctly identify why camera image processing is difficult.

In step 215, the variances of the luminance values of the regions obtained in steps 207 to 210 are compared with the temporal variation of their histories. If the following condition 5 is satisfied, the process proceeds to step 216; otherwise it proceeds to step 217.
Condition 5: notable_bright_VV > Th_ntb_vv and negligible_bright_V_z0 > Th_ngl_v1
... (10)

In step 216, the degree of image processing difficulty for the current sampling, diagnosis_state_z0, is set to 2, and the difficulty states camera_state1_z0 through camera_state4_z0 are smoothed:
diagnosis_state_z0 = 2
camera_state1_z0 = fg1·camera_state1_z1 + 0
camera_state2_z0 = fg2·camera_state2_z1 + (1 − fg2)
camera_state3_z0 = fg3·camera_state3_z1 + 0
camera_state4_z0 = fg4·camera_state4_z1 + 0 ... (11)

That is, when condition 5 holds (the variation in the history of the variance of the luminance values in the attention area is larger than a predetermined value, i.e., unstable, and the variance in the non-attention area is larger than a predetermined value), the process proceeds from step 215 to step 216, and diagnosis_state_z0 for the current sampling is set to difficulty state 2. As described in detail later, in this case it is determined that the cause of the degraded performance of the camera-image-based analysis of the situation ahead of the host vehicle lies in a local part of the image, namely the attention area. This makes it possible to correctly identify why camera image processing is difficult.

In step 217, the variances of the luminance values of the regions obtained in steps 207 to 210 are compared with the temporal variation of their histories. If the following condition 6 is satisfied, the process proceeds to step 218; otherwise it proceeds to step 219.
Condition 6: notable_bright_VV > Th_ntb_vv and negligible_bright_VV > Th_ngl_vv
... (12)

In step 218, the degree of image processing difficulty for the current sampling, diagnosis_state_z0, is set to 4, and the difficulty states camera_state1_z0 through camera_state4_z0 are smoothed. The process then proceeds to step 220.
diagnosis_state_z0 = 4
camera_state1_z0 = fg1·camera_state1_z1 + 0
camera_state2_z0 = fg2·camera_state2_z1 + 0
camera_state3_z0 = fg3·camera_state3_z1 + 0
camera_state4_z0 = fg4·camera_state4_z1 + (1 − fg4) ... (13)

That is, when condition 6 holds (the history of the variance of the luminance values is unstable in both the attention area and the non-attention area), the process proceeds from step 217 to step 218, and diagnosis_state_z0 for the current sampling is set to difficulty state 4. As described in detail later, in this case it is determined that the cause of the degraded performance of the camera-image-based analysis of the situation ahead of the host vehicle lies in the entire image, that is, in both the attention area and the non-attention area. This makes it possible to correctly identify why camera image processing is difficult.

In step 219, the degree of image processing difficulty for the current sampling, diagnosis_state_z0, is set to 0, and the difficulty states camera_state1_z0 through camera_state4_z0 are smoothed. The process then proceeds to step 220.
diagnosis_state_z0 = 0
camera_state1_z0 = fg1·camera_state1_z1 + 0
camera_state2_z0 = fg2·camera_state2_z1 + 0
camera_state3_z0 = fg3·camera_state3_z1 + 0
camera_state4_z0 = fg4·camera_state4_z1 + 0 ... (14)
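Conditions 1 through 6 of steps 211, 213, 215, and 217 form a single decision tree over four statistics, mapping them to the diagnosis states 0 through 4. A sketch of that tree follows; the thresholds are passed in as parameters, and the threshold names used for conditions 1 and 2 are assumptions patterned after equations (8), (10), and (12):

```python
def diagnose(att_v, att_vv, non_v, non_vv,
             Th_ntb_v1, Th_ntb_vv, Th_ngl_v1, Th_ngl_v2, Th_ngl_vv):
    """Map the region statistics to diagnosis_state_z0 (0..4).

    att_v / att_vv : variance in the attention area and variation of its history
    non_v / non_vv : the same for the non-attention area
    """
    # Conditions 1 and 2 (step 211) -> state 3: local cause (attention area)
    if (att_v < Th_ntb_v1 and non_v > Th_ngl_v1) or \
       (att_v < Th_ntb_v1 and non_vv > Th_ngl_vv):
        return 3
    # Conditions 3 and 4 (step 213) -> state 1: whole image
    if (att_v < Th_ntb_v1 and non_v < Th_ngl_v2) or \
       (att_vv > Th_ntb_vv and non_v < Th_ngl_v2):
        return 1
    # Condition 5 (step 215) -> state 2: local cause (attention area)
    if att_vv > Th_ntb_vv and non_v > Th_ngl_v1:
        return 2
    # Condition 6 (step 217) -> state 4: whole image
    if att_vv > Th_ntb_vv and non_vv > Th_ngl_vv:
        return 4
    return 0  # step 219: no image processing difficulty
```

The returned state would then drive the smoothing updates of equations (7), (9), (11), (13), and (14).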

In step 220, the image processing difficulty states camera_state1_z0 through camera_state4_z0 are compared, and any state at or above a predetermined value is selected. If the selected state is camera_state2_z0 or camera_state3_z0, the process proceeds to step 221; if it is camera_state1_z0 or camera_state4_z0, the process proceeds to step 222; if no state is at or above the predetermined value, the process proceeds to step 223.

In step 221, it is judged that the cause of the degraded performance of the camera-image-based analysis of the situation ahead of the host vehicle lies in a local part of the camera image, namely the attention area corresponding to an object of interest in the outside world such as a preceding vehicle or a road sign. Around such an object there may be, for example, snow or spray kicked up by the rear wheels of the preceding vehicle, and this can be regarded as the reason the performance of information analysis by image processing has dropped. In this embodiment, therefore, the attention area is moved toward the top of the image and set anew. This excludes the image of the snow or spray kicked up by the rear wheels of the preceding vehicle from the attention area, so the image within the attention area becomes stable and the variance of its luminance values decreases. After the attention area is reset, the process proceeds to step 224.
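The re-setting in step 221 — shifting the attention area toward the top of the image so that spray kicked up at the rear wheels drops out of it — can be pictured as a simple rectangle move in image coordinates (y increasing downward); the rectangle representation and shift amount are illustrative assumptions:

```python
def raise_attention_area(region, shift, image_top=0):
    """Move a rectangular attention area (left, top, right, bottom) upward
    by `shift` pixels, clamped at the top edge of the image (step 221).
    In image coordinates, moving upward means decreasing y."""
    left, top, right, bottom = region
    height = bottom - top
    new_top = max(image_top, top - shift)
    # Preserve the region's size so the tracked object stays covered.
    return (left, new_top, right, new_top + height)
```

For example, a region hugging the road surface behind a preceding vehicle would be raised so that it frames the vehicle body rather than its wheels and the spray beneath them.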

In step 222, it is judged that the cause of the degraded performance of the camera-image-based analysis of the situation ahead of the host vehicle lies in the entire image, that is, in the entire outside world: rain or snowfall can be regarded as degrading the performance of information analysis by image processing. In this embodiment, therefore, the result of obstacle detection ahead of the host vehicle by the laser radar 1 and the radar processing device 2 is used instead of the situation analysis result from camera image processing. Specifically, information is sent to the downstream fusion processing system urging it to raise the reliability of the position information detected by the radar 1 above that of the preceding-vehicle position information detected by the camera 3, that is, to give greater weight to the radar 1. The process then proceeds to step 224.
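The hand-over in steps 222 and 223 — telling the downstream fusion stage how much to trust each sensor's position for the preceding vehicle — can be pictured as a confidence-weighted blend. The interface and weight values below are illustrative assumptions, not the patent's actual fusion method:

```python
def fuse_positions(radar_pos, camera_pos, camera_reliable):
    """Blend the preceding-vehicle positions from radar and camera.
    When camera image processing is judged degraded (step 222), the radar
    measurement dominates; otherwise both sensors are weighted equally
    (step 223, camera position reported as reliable)."""
    w_cam = 0.5 if camera_reliable else 0.1  # hypothetical weights
    w_rad = 1.0 - w_cam
    return tuple(w_rad * r + w_cam * c for r, c in zip(radar_pos, camera_pos))
```

This reflects the point made later in the text: the radar, mounted outside the cabin, is the sensor more exposed to rain and snow, so the reliability signal flows both ways in the integrated system.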

In step 223, on the other hand, it is judged that camera image processing is not in a difficult state and that there is no performance degradation in the camera-image-based analysis of the situation ahead of the host vehicle. In this case, a signal notifying the downstream fusion processing system that the position information of the preceding vehicle detected by the camera 3 is reliable is transmitted. The process then proceeds to step 224.

In step 224, the past values such as the variances of the luminance values and their histories are updated, and the processing ends.

In this way, it is judged whether the vehicle is in a traveling environment in which it is difficult to analyze the situation ahead of the host vehicle by processing the camera image, and when information analysis by camera image processing is difficult, it is determined whether the cause of the performance degradation is local to the captured image or affects it as a whole. When the cause is considered to lie in a local part of the image, the attention area used for tracking the preceding vehicle detected by the radar 1 is reset. As a result, even in a traveling environment where information analysis by camera image processing is difficult, the influence of that environment can be suppressed and information analysis by camera image processing remains possible. Furthermore, when the detection position from the radar and the detection position from camera image processing are integrated downstream, information on the reliability of the radar and the camera can be provided.

As described above, according to this embodiment, within the image captured by the camera 3 that images the area ahead of the host vehicle, an attention area in which an obstacle has been detected by the radar 1, which detects obstacles ahead of the host vehicle, and a non-attention area covering the rest of the image are set, and luminance information is detected in both areas. Whether the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult is then judged from the luminance information of the attention area and the non-attention area. The brightness of the preceding vehicle, road signs, road white lines, and so on in the attention area can thus be assessed against the brightness of the background in the non-attention area, so it can be judged accurately whether the vehicle is in a traveling environment in which analyzing the situation ahead by processing the camera image is difficult, and the reliability of information analysis based on camera images can be improved.

Also, according to this embodiment, the variance of the luminance values in the attention area and the non-attention area of the image captured by the camera 3 and the variation of their histories are detected, and whether the vehicle is in a traveling environment in which situation analysis ahead by image processing is difficult is judged by comparing them. The magnitude of the variance of the luminance values and of the variation of their histories therefore allows this judgment to be made quantitatively and accurately.

Furthermore, according to this embodiment, whether the vehicle is in a traveling environment in which situation analysis ahead by image processing is difficult is judged on the basis of whether the part where image processing is difficult is a local part of the image or the entire image, so the cause of the difficulty can be identified correctly.

Furthermore, according to this embodiment, when the part where image processing is difficult is found to be a local part of the image, the attention area is moved upward in the image, the variance of the luminance values of the new attention area and the non-attention area and the variation of their histories are detected, and the judgment of whether the vehicle is in a traveling environment in which situation analysis ahead by image processing is difficult is made on the basis of these new values. Thus, even if, for example, spray or snow is kicked up from the rear of a preceding vehicle, the chance of detecting obstacles ahead of the host vehicle is increased without abandoning the camera-image-based analysis of the situation ahead.

In the embodiment described above, the laser radar 1 is installed at the front of the vehicle and the camera 3 is installed at the top center of the front window inside the passenger compartment. Since the front window can be wiped by the wipers even in bad driving conditions such as rain or snowfall, there is little concern that water droplets or snow will adhere to the lens of the camera 3 and degrade its imaging performance; the radar 1, however, is installed outside the passenger compartment at the front of the vehicle, so water droplets, snow, or dirt adhere to it easily, making it the more susceptible to the traveling environment. Because this embodiment improves the reliability of information analysis based on camera images, the reliability of the integrated vehicle system combining the radar 1 and the camera 3 can be improved further.

The correspondence between the elements of the claims and the elements of the embodiment is as follows: the camera 3 constitutes the imaging means; the external environment recognition device 5 constitutes the region setting means, the luminance information detection means, and the image processing determination means; and the radar 1 constitutes the obstacle detection means. The elements are not limited to the above configuration as long as the characteristic functions of the present invention are not impaired.

The embodiment above described how to assess the difficulty of camera image processing in the integrated system of the radar 1 and the camera 3, but the method also applies to a camera-only system: when setting the attention area in step 205, it suffices to use, for example, the coordinates of an object being tracked by image processing instead of the radar information. Objects tracked by image processing include vehicles ahead of the host vehicle, road white lines, and road signs; they can be detected in the camera image by edge extraction or similar processing, and the region of the camera image containing them is taken as the attention area. This yields the same effects as the embodiment described above.

A diagram showing the configuration of an embodiment. A flowchart showing the camera detection performance degradation determination program.

Explanation of symbols

1 Laser radar
2 Radar processing device
3 Camera
4 Image processing device
5 External environment recognition device
6 Vehicle speed detection device
7 Steering angle detection device
8 Automatic brake control device
9 Negative pressure brake booster

Claims (8)

An image processing apparatus for a vehicle, comprising:
imaging means for imaging an area ahead of a host vehicle;
region setting means for setting, in an image captured by the imaging means, a region containing a preceding vehicle, a road white line, or a sign (hereinafter, the attention area) and the remaining region (hereinafter, the non-attention area);
luminance information detection means for detecting luminance information in the attention area and the non-attention area of the image captured by the imaging means; and
image processing determination means for determining, based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means, whether the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult,
wherein the luminance information detection means detects the variance of the luminance values of the attention area and the non-attention area in the image captured by the imaging means and the variation of the history of that variance, and
the image processing determination means, when the variance of the luminance values in the attention area is smaller than a predetermined value and either the variance of the luminance values in the non-attention area is larger than a predetermined value or the variation in the history of the variance of the luminance values in the non-attention area is larger than a predetermined value, takes the part where image processing is difficult to be a local part of the image and determines that the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is possible.
An image processing apparatus for a vehicle, comprising:
imaging means for imaging an area ahead of a host vehicle;
region setting means for setting, in an image captured by the imaging means, a region containing a preceding vehicle, a road white line, or a sign (hereinafter, the attention area) and the remaining region (hereinafter, the non-attention area);
luminance information detection means for detecting luminance information in the attention area and the non-attention area of the image captured by the imaging means; and
image processing determination means for determining, based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means, whether the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult,
wherein the luminance information detection means detects the variance of the luminance values of the attention area and the non-attention area in the image captured by the imaging means and the variation of the history of that variance, and
the image processing determination means, when either the variance of the luminance values in the attention area is smaller than a predetermined value or the variation in the history of the variance of the luminance values in the attention area is larger than a predetermined value, and the variance of the luminance values in the non-attention area is smaller than a predetermined value, takes the part where image processing is difficult to be the entire image and determines that the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult.
An image processing apparatus for a vehicle, comprising:
imaging means for imaging an area ahead of a host vehicle;
region setting means for setting, in an image captured by the imaging means, a region containing a preceding vehicle, a road white line, or a sign (hereinafter, the attention area) and the remaining region (hereinafter, the non-attention area);
luminance information detection means for detecting luminance information in the attention area and the non-attention area of the image captured by the imaging means; and
image processing determination means for determining, based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means, whether the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult,
wherein the luminance information detection means detects the variance of the luminance values of the attention area and the non-attention area in the image captured by the imaging means and the variation of the history of that variance, and
the image processing determination means, when the variation in the history of the variance of the luminance values in the attention area is larger than a predetermined value and the variance of the luminance values in the non-attention area is larger than a predetermined value, takes the part where image processing is difficult to be a local part of the image and determines that the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is possible.
An image processing apparatus for a vehicle, comprising:
imaging means for imaging an area ahead of a host vehicle;
region setting means for setting, in an image captured by the imaging means, a region containing a preceding vehicle, a road white line, or a sign (hereinafter, the attention area) and the remaining region (hereinafter, the non-attention area);
luminance information detection means for detecting luminance information in the attention area and the non-attention area of the image captured by the imaging means; and
image processing determination means for determining, based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means, whether the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult,
wherein the luminance information detection means detects the variance of the luminance values of the attention area and the non-attention area in the image captured by the imaging means and the variation of the history of that variance, and
the image processing determination means, when the variation in the history of the variance of the luminance values in the attention area and in the non-attention area is larger than a predetermined value, takes the part where image processing is difficult to be the entire image and determines that the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult.
An image processing apparatus for a vehicle, comprising:
obstacle detection means for detecting an obstacle ahead of a host vehicle;
imaging means for imaging an area ahead of the host vehicle;
region setting means for setting, in an image captured by the imaging means, a region in which an obstacle has been detected by the obstacle detection means (hereinafter, the attention area) and the remaining region (hereinafter, the non-attention area);
luminance information detection means for detecting luminance information in the attention area and the non-attention area of the image captured by the imaging means; and
image processing determination means for determining, based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means, whether the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is difficult,
wherein the luminance information detection means detects the variance of the luminance values of the attention area and the non-attention area in the image captured by the imaging means and the variation of the history of that variance, and
the image processing determination means, when the variance of the luminance values in the attention area is smaller than a predetermined value and either the variance of the luminance values in the non-attention area is larger than a predetermined value or the variation in the history of the variance of the luminance values in the non-attention area is larger than a predetermined value, takes the part where image processing is difficult to be a local part of the image and determines that the vehicle is in a traveling environment in which situation analysis ahead of the host vehicle by image processing is possible.
自車前方の障害物を検出する障害物検出手段と、
自車前方を撮像する撮像手段と、
前記撮像手段により撮像された画像の中で、前記障害物検出手段により障害物が検出された領域(以下、注目領域という)とそれ以外の領域(以下、非注目領域という)とを設定する領域設定手段と、
前記撮像手段により撮像された画像の中の前記注目領域と前記非注目領域における輝度情報を検出する輝度情報検出手段と、
前記輝度情報検出手段により検出された前記注目領域と前記非注目領域の輝度情報に基づいて、画像処理による自車前方の状況分析が困難な走行環境にあるか否かを判定する画像処理判定手段とを備え、
前記輝度情報検出手段は、前記撮像手段により撮像された画像の中の前記注目領域と前記非注目領域の輝度値の分散とその履歴のバラツキとを検出し、
前記画像処理判定手段は、前記注目領域における輝度値の分散が所定値より小さいか、または、前記注目領域における輝度値の分散の履歴のバラツキが所定値より大きく、かつ、前記非注目領域における輝度値の分散が所定値より小さい場合には、画像処理が困難な部分が画像全体であるとし、画像処理による自車前方の状況分析が困難な走行環境にあると判定することを特徴とする車両用画像処理装置。
Obstacle detection means for detecting an obstacle ahead of the vehicle;
Imaging means for imaging the front of the vehicle;
An area for setting an area in which an obstacle is detected by the obstacle detection means (hereinafter referred to as an attention area) and an area other than that (hereinafter referred to as a non-attention area) in the image captured by the imaging means. Setting means;
Luminance information detection means for detecting luminance information in the attention area and the non-attention area in the image captured by the imaging means;
Image processing determination means for determining whether or not the vehicle is in a driving environment where it is difficult to analyze the situation ahead of the vehicle by image processing based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means. And
The luminance information detecting means detects a dispersion of luminance values of the attention area and the non-attention area in the image picked up by the image pickup means and variations in the history thereof,
The image processing determination means has a luminance value variance in the region of interest smaller than a predetermined value, or a variation in luminance value dispersion history in the region of interest is larger than a predetermined value, and a luminance in the non-region of interest. When the variance of the values is smaller than a predetermined value , the vehicle is characterized in that the portion where image processing is difficult is the entire image, and it is determined that the vehicle is in a driving environment where it is difficult to analyze the situation ahead of the vehicle by image processing Image processing device.
自車前方の障害物を検出する障害物検出手段と、
自車前方を撮像する撮像手段と、
前記撮像手段により撮像された画像の中で、前記障害物検出手段により障害物が検出された領域(以下、注目領域という)とそれ以外の領域(以下、非注目領域という)とを設定する領域設定手段と、
前記撮像手段により撮像された画像の中の前記注目領域と前記非注目領域における輝度情報を検出する輝度情報検出手段と、
前記輝度情報検出手段により検出された前記注目領域と前記非注目領域の輝度情報に基づいて、画像処理による自車前方の状況分析が困難な走行環境にあるか否かを判定する画像処理判定手段とを備え、
前記輝度情報検出手段は、前記撮像手段により撮像された画像の中の前記注目領域と前記非注目領域の輝度値の分散とその履歴のバラツキとを検出し、
前記画像処理判定手段は、前記注目領域における輝度値の分散の履歴のバラツキが所定値より大きく、かつ、前記非注目領域における輝度値の分散が所定値より大きい場合には、画像処理が困難な部分が画像中の局所的な部分であるとし、画像処理による自車前方の状況分析が可能な走行環境にあると判定することを特徴とする車両用画像処理装置。
Obstacle detection means for detecting an obstacle ahead of the vehicle;
Imaging means for imaging the front of the vehicle;
An area for setting an area in which an obstacle is detected by the obstacle detection means (hereinafter referred to as an attention area) and an area other than that (hereinafter referred to as a non-attention area) in the image captured by the imaging means. Setting means;
Luminance information detection means for detecting luminance information in the attention area and the non-attention area in the image captured by the imaging means;
Image processing determination means for determining whether or not the vehicle is in a driving environment where it is difficult to analyze the situation ahead of the vehicle by image processing based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means. And
The luminance information detecting means detects a dispersion of luminance values of the attention area and the non-attention area in the image picked up by the image pickup means and variations in the history thereof,
The image processing determination unit is difficult to perform image processing when variation in luminance value dispersion history in the attention area is larger than a predetermined value and luminance value dispersion in the non-attention area is larger than a predetermined value. An image processing apparatus for a vehicle, characterized in that the portion is a local portion in an image and is determined to be in a driving environment in which a situation analysis in front of the vehicle can be performed by image processing.
自車前方の障害物を検出する障害物検出手段と、
自車前方を撮像する撮像手段と、
前記撮像手段により撮像された画像の中で、前記障害物検出手段により障害物が検出された領域(以下、注目領域という)とそれ以外の領域(以下、非注目領域という)とを設定する領域設定手段と、
前記撮像手段により撮像された画像の中の前記注目領域と前記非注目領域における輝度情報を検出する輝度情報検出手段と、
前記輝度情報検出手段により検出された前記注目領域と前記非注目領域の輝度情報に基づいて、画像処理による自車前方の状況分析が困難な走行環境にあるか否かを判定する画像処理判定手段とを備え、
前記輝度情報検出手段は、前記撮像手段により撮像された画像の中の前記注目領域と前記非注目領域の輝度値の分散とその履歴のバラツキとを検出し、
前記画像処理判定手段は、前記注目領域と前記非注目領域における輝度値の分散の履歴のバラツキが所定値より大きい場合には、画像処理が困難な部分が画像全体であるとし、画像処理による自車前方の状況分析が困難な走行環境にあると判定することを特徴とする車両用画像処理装置。
Obstacle detection means for detecting an obstacle ahead of the vehicle;
Imaging means for imaging the front of the vehicle;
An area for setting an area in which an obstacle is detected by the obstacle detection means (hereinafter referred to as an attention area) and an area other than that (hereinafter referred to as a non-attention area) in the image captured by the imaging means. Setting means;
Luminance information detection means for detecting luminance information in the attention area and the non-attention area in the image captured by the imaging means;
Image processing determination means for determining whether or not the vehicle is in a driving environment where it is difficult to analyze the situation ahead of the vehicle by image processing based on the luminance information of the attention area and the non-attention area detected by the luminance information detection means. And
The luminance information detecting means detects a dispersion of luminance values of the attention area and the non-attention area in the image picked up by the image pickup means and variations in the history thereof,
The image processing determination means determines that the portion where the image processing is difficult is the entire image when the variation of the luminance value distribution history in the attention area and the non-attention area is larger than a predetermined value. An image processing apparatus for a vehicle characterized by determining that the vehicle is in a driving environment where it is difficult to analyze a situation in front of the vehicle.
JP2003372959A 2003-10-31 2003-10-31 Image processing apparatus for vehicle Expired - Fee Related JP4052226B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003372959A JP4052226B2 (en) 2003-10-31 2003-10-31 Image processing apparatus for vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003372959A JP4052226B2 (en) 2003-10-31 2003-10-31 Image processing apparatus for vehicle

Publications (2)

Publication Number Publication Date
JP2005135308A JP2005135308A (en) 2005-05-26
JP4052226B2 true JP4052226B2 (en) 2008-02-27

Family

ID=34649191

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003372959A Expired - Fee Related JP4052226B2 (en) 2003-10-31 2003-10-31 Image processing apparatus for vehicle

Country Status (1)

Country Link
JP (1) JP4052226B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7400522B2 (en) * 2020-02-14 2023-12-19 株式会社デンソー Support management device, support management method, and support management program

Also Published As

Publication number Publication date
JP2005135308A (en) 2005-05-26

Similar Documents

Publication Publication Date Title
US10078789B2 (en) Vehicle parking assist system with vision-based parking space detection
JP5022609B2 (en) Imaging environment recognition device
US9690996B2 (en) On-vehicle image processor
EP2879370B1 (en) In-vehicle image recognizer
JP3925488B2 (en) Image processing apparatus for vehicle
US9836657B2 (en) System and method for periodic lane marker identification and tracking
JP3822515B2 (en) Obstacle detection device and method
JP6174975B2 (en) Ambient environment recognition device
EP2993654B1 (en) Method and system for forward collision warning
EP2889641B1 (en) Image processing apparatus, image processing method, program and image processing system
US8279280B2 (en) Lane departure warning method and system using virtual lane-dividing line
US7366325B2 (en) Moving object detection using low illumination depth capable computer vision
JP5399027B2 (en) A device having a system capable of capturing a stereoscopic image to assist driving of an automobile
RU2017109073A (en) DETECTION AND PREDICTION OF PEDESTRIAN TRAFFIC USING A RETURNED BACK CAMERA
JP3931891B2 (en) In-vehicle image processing device
US20130286205A1 (en) Approaching object detection device and method for detecting approaching objects
US20150120160A1 (en) Method and device for detecting a braking situation
US20150302257A1 (en) On-Vehicle Control Device
CN103917989B (en) For detecting the method and CCD camera assembly of the raindrop in vehicle windscreen
Mori et al. Recognition of foggy conditions by in-vehicle camera and millimeter wave radar
US9230189B2 (en) Method of raindrop detection on a vehicle windscreen and driving assistance device
JP4052226B2 (en) Image processing apparatus for vehicle
JP4033106B2 (en) Ranging performance degradation detection device for vehicles
JP6429101B2 (en) Image determination apparatus, image processing apparatus, image determination program, image determination method, moving object
JP4381394B2 (en) Obstacle detection device and method

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20051226

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20070725

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20070807

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20071005

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20071113

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20071126

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101214

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111214

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121214

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121214

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131214

Year of fee payment: 6

LAPS Cancellation because of no payment of annual fees