JP2010165142A - Device and method for recognizing road markings for a vehicle


Info

Publication number
JP2010165142A
Authority
JP
Japan
Prior art keywords
vehicle
imaging
image signal
angle
road marking
Prior art date
Legal status
Granted
Application number
JP2009006340A
Other languages
Japanese (ja)
Other versions
JP5146330B2 (en)
Inventor
Chikao Tsuchiya
千加夫 土谷
Current Assignee
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Nissan Motor Co Ltd filed Critical Nissan Motor Co Ltd
Priority to JP2009006340A
Publication of JP2010165142A
Application granted
Publication of JP5146330B2
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a vehicular road marking recognition device and method capable of acquiring recognition data on road markings that can serve as a basis for estimating the road shape with sufficient accuracy.
SOLUTION: A first variable group calculation section 105 calculates the curvature of a curve in the travel road on the basis of an image signal acquired by a telephoto camera 201. A second variable group calculation section 106 calculates, on the basis of an image signal acquired by a wide-angle camera 202, the relative position of the host vehicle in the vehicle width direction with respect to the road markings, the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, the displacement of the host vehicle in the yaw direction, and the like. A road marking recognition processing section 107 then recognizes the position of the road markings relative to the host vehicle and obtains recognition data representing that recognition, on the basis of the variable group data obtained by the first variable group calculation section 105 and the second variable group calculation section 106.
COPYRIGHT: (C)2010, JPO&INPIT

Description

The present invention relates to a vehicular road marking recognition apparatus and a vehicular road marking recognition method for recognizing road markings for vehicles, such as lane markings. The information obtained by this recognition can be provided to, for example, a lane departure warning device or a lane keeping support device for use in those devices.

A technique has already been proposed that uses a horizontally movable wide-angle camera and a telephoto camera, converts edge point coordinates based on image data of the imaging fields of view in front of the vehicle acquired by these cameras into coordinates in a common road coordinate system, and recognizes the lane (see, for example, Patent Document 1).

Patent Document 1: Japanese Patent No. 3588728

However, the technique described in Patent Document 1 does not take into account the mounting heights of the wide-angle camera and the telephoto camera, which depend on the pitch angle of the vehicle and the vertical distance of the host vehicle from the road surface. As a result, errors arise when data representing edge points in the images are converted into road coordinates, and it remains difficult to ensure the accuracy of road shape estimation.
The present invention has been made to address this situation, and its object is to provide a vehicular road marking recognition apparatus and a vehicular road marking recognition method capable of acquiring recognition data on road markings that can serve as a basis for estimating the road shape with sufficient accuracy.

To solve the above problem, in the present invention, a first variable group calculation unit calculates the curvature of a curve in the travel road on the basis of an image signal corresponding to a first imaging target area defined by a relatively narrow angle of view and a relatively long imaging distance.
In addition, a second variable group calculation unit calculates, on the basis of an image signal corresponding to a second imaging target area defined by a relatively wide angle of view and a relatively short imaging distance, the relative position of the host vehicle in the vehicle width direction with respect to the road markings, the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, the displacement of the host vehicle in the yaw direction, and the like.
Further, a vehicular road marking recognition processing unit recognizes the position of the road markings relative to the host vehicle and obtains recognition data representing that recognition, on the basis of the variable group data obtained by the first variable group calculation unit and the second variable group calculation unit.

In the present invention, the relative position of the host vehicle in the vehicle width direction with respect to the road markings, the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, and the displacement of the host vehicle in the yaw direction are all taken into account. Consequently, the recognition data obtained on the position of the road markings are highly accurate.

FIG. 1 is a functional block diagram showing a vehicular road marking recognition apparatus as a first embodiment of the present invention.
FIG. 2 is a conceptual diagram showing a state in which the vehicular road marking recognition apparatus of FIG. 1 is mounted on a vehicle (host vehicle).
FIG. 3 is a conceptual diagram showing the imaging field of view of the first imaging means (telephoto camera).
FIG. 4 is a conceptual diagram showing the imaging field of view of the second imaging means (wide-angle camera).
FIG. 5 is a flowchart showing an outline of the operation of the vehicular road marking recognition apparatus of FIG. 1.
FIG. 6 is a diagram for explaining the processing for calculating the movement vector of the host vehicle in the vehicular road marking recognition apparatus of FIG. 2.
FIG. 7 is a diagram showing the road model applied in the embodiments of the present invention.
FIG. 8 is a diagram showing the positional relationship, in a side view of the vehicle, between the host vehicle, the wide-angle camera attached to the host vehicle, and the road surface.
FIG. 9 is a diagram showing the lane shape in a top view of the vehicle.
FIG. 10 is a diagram showing the positional relationship, in a side view of the vehicle, between the host vehicle, the telephoto camera attached to the host vehicle, and the road surface.
FIG. 11 is a functional block diagram showing a vehicular road marking recognition apparatus as a second embodiment of the present invention.
FIG. 12 is a flowchart showing a vehicular road marking recognition method as a third embodiment of the present invention.

Hereinafter, the present invention will be clarified by describing embodiments of the present invention in detail with reference to the drawings.
In each of the drawings referred to below, for convenience, the main parts that are the subject of the description are exaggerated as appropriate, and parts other than the main parts are simplified or omitted as appropriate.
(Vehicular road marking recognition apparatus as a first embodiment of the present invention)
FIG. 1 is a functional block diagram showing a vehicular road marking recognition apparatus according to one embodiment of the present invention. This vehicular road marking recognition apparatus 100 includes the following units.

Namely, it includes a first image signal input unit 101 and a second image signal input unit 102 that each receive an image signal, a first image memory 103 and a second image memory 104 that each hold the received image signal, a first variable group calculation unit 105, a second variable group calculation unit 106, and a vehicular road marking recognition processing unit 107.
The first image signal input unit 101 receives an image signal corresponding to a first imaging target area defined by a predetermined first imaging angle of view and a predetermined first imaging distance.

In the present embodiment, the image signal received by the first image signal input unit 101 corresponds to the first imaging target area described above, which is defined by the first imaging angle of view, a relatively narrow angle of view, and the first imaging distance, a relatively long (distant) imaging distance.
The image signal corresponding to this first imaging target area is acquired by a telephoto camera serving as first imaging means 201, whose optical system is adjusted to the telephoto side (long focal length) or which is equipped with telephoto optics, and is supplied to the first image signal input unit 101 described above.

Meanwhile, the second image signal input unit 102 receives an image signal corresponding to a second imaging target area defined by a predetermined second imaging angle of view and a predetermined second imaging distance.
The image signal received by the second image signal input unit 102 corresponds to the second imaging target area described above, which is defined by the second imaging angle of view, a relatively wide angle of view, and the second imaging distance, a relatively short (near) imaging distance.

The image signal corresponding to this second imaging target area is acquired by a wide-angle camera 202 serving as second imaging means 202, whose optical system is adjusted to the wide-angle side (short focal length) or which is equipped with wide-angle optics, and is supplied to the second image signal input unit 102 described above.
The first image signal input unit 101 and the second image signal input unit 102 function as interface units that receive the corresponding image signals. They may be configured as connectors with conductive contacts, but may also take the form of contactless interfaces using optical signals or short-range radio signals.

The first variable group calculation unit 105 uses the information contained in the image signal received by the first image signal input unit 101 to calculate the curvature of the curve in the corresponding travel road. This calculation is described in detail later.
The second variable group calculation unit 106 uses the information contained in the image signal received by the second image signal input unit 102 to calculate the relevant interval between a plurality of road markings used for recognizing the traveling state of the host vehicle and the relative position of the host vehicle in the vehicle width direction with respect to those road markings. The second variable group calculation unit 106 also calculates the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, and the displacement of the host vehicle in the yaw direction. These calculations are described in detail later.

The vehicular road marking recognition processing unit 107 uses the information contained in the calculation results of the first variable group calculation unit 105 and of the second variable group calculation unit 106 to recognize the position of the road markings relative to the host vehicle and to obtain recognition data representing that recognition.
The first image memory 103 described above holds the image signal received by the first image signal input unit 101 so that the first variable group calculation unit 105 can access it at an arbitrary timing.

Accordingly, the first variable group calculation unit 105 uses, at an appropriate timing, the image data acquired by the telephoto camera serving as the first imaging means 201 and held in the first image memory 103, and calculates the values of the corresponding variables.
Similarly, the second image memory 104 holds the image signal received by the second image signal input unit 102 so that the second variable group calculation unit 106 can access it at an arbitrary timing.

Accordingly, the second variable group calculation unit 106 uses, at an appropriate timing, the image data acquired by the wide-angle camera serving as the second imaging means 202 and held in the second image memory 104, and calculates the values of the corresponding variables.
The vehicular road marking recognition apparatus of the present embodiment may also take a form that itself includes the telephoto camera as the first imaging means 201 and the wide-angle camera 202 as the second imaging means 202, as described above.

In order to acquire the image signal corresponding to the first imaging target area and the image signal corresponding to the second imaging target area described above, the telephoto camera 201 and the wide-angle camera 202 are mounted on the vehicle in installation forms suited to their respective purposes, as illustrated in FIG. 2.
FIG. 2 is a conceptual diagram showing a state in which the vehicular road marking recognition apparatus 100 of FIG. 1, having the telephoto camera 201 and the wide-angle camera 202, is mounted on the host vehicle 200.

The telephoto camera serving as the first imaging means 201 is installed so as to suit the purpose of acquiring the image signal corresponding to the first imaging target area, which is defined by the relatively narrow first imaging angle of view and the relatively distant first imaging distance described above.
Specifically, the telephoto camera 201 is installed so that its mounting height is as high as possible at the center of the host vehicle 200, and so that its imaging optical axis forms a shallow depression angle, slightly below the horizontal.

The wide-angle camera serving as the second imaging means 202 is installed so as to suit the purpose of acquiring the image signal corresponding to the second imaging target area, which is defined by the relatively wide second imaging angle of view and the comparatively near second imaging distance described above.
Specifically, the wide-angle camera 202 is installed, for example, at the central portion of the front of the host vehicle 200 so that its imaging optical axis forms a comparatively deep depression angle, allowing a wide range of lane markings as close as possible to the host vehicle 200 to fall within its imaging field of view.

The installation forms of the first imaging means (telephoto camera) 201 and the second imaging means (wide-angle camera) 202 described above consequently result in the following.
That is, the telephoto camera 201 is installed so that its mounting height is higher than the mounting height of the wide-angle camera 202, and so that the depression angle of its imaging optical axis is shallower than the depression angle of the imaging optical axis of the wide-angle camera 202.

Viewing this relative relationship with the wide-angle camera 202 as the reference, the wide-angle camera 202 is installed so that its mounting height is lower than the mounting height of the telephoto camera 201, and so that the depression angle of its imaging optical axis is deeper than the depression angle of the imaging optical axis of the telephoto camera 201.
In the embodiments of the present invention, the first imaging means (telephoto camera) 201 and the second imaging means (wide-angle camera) 202 perform imaging operations synchronized with each other with respect to imaging timing and period, for example by using an appropriate common synchronization signal (its supply system is not shown).

Accordingly, simultaneity of image acquisition by the first imaging means (telephoto camera) 201 and the second imaging means (wide-angle camera) 202 is ensured.
Therefore, the accuracy of the integrated processing executed by the vehicular road marking recognition processing unit 107 using the calculation result of the first variable group calculation unit 105 and the calculation result of the second variable group calculation unit 106 can be maintained at a high level.

FIG. 3 is a conceptual diagram showing the imaging field of view of the first imaging means (telephoto camera) 201, and FIG. 4 is a conceptual diagram showing the imaging field of view of the second imaging means (wide-angle camera) 202.
As is clear from comparing FIG. 3 and FIG. 4, when a preceding vehicle 300 is close to the host vehicle, the imaging field of view of the telephoto camera 201 is obstructed as shown in FIG. 3, so the extension direction of the lane markings 301 and 302 cannot be seen and the corresponding imaging signal cannot be acquired.

In contrast, with the wide-angle camera 202, even when the preceding vehicle 300 is as close as in FIG. 3, most of the imaging field of view is not obstructed, as shown in FIG. 4, so the extension direction of the lane markings 301 and 302 can be seen and the corresponding imaging signal can be acquired.
That is, with the wide-angle camera 202, acquisition of the imaging signal along the extension direction of the lane markings 301 and 302 can be maintained unless the preceding vehicle 300 approaches abnormally closely.

In practice, in a lane departure warning device (LDP: Lane Departure Prevention) or a lane keeping support device (LKS: Lane Keeping Support) that is assumed to use the recognition data of the vehicular road marking recognition apparatus 100, the vehicle speed is at or above a predetermined value, so the distance to the preceding vehicle is naturally kept to some extent.
For this reason, there is almost no risk that it becomes difficult even for the wide-angle camera 202 to acquire the imaging signal along the extension direction of the lane markings 301 and 302.

FIG. 5 is a flowchart showing an outline of the operation of the vehicular road marking recognition apparatus 100 of FIG. 1.
First, image signals that are the imaging outputs of the telephoto camera 201 and the wide-angle camera 202 are acquired from the first image signal input unit 101 and the second image signal input unit 102 and are held separately in the first image memory 103 and the second image memory 104 (S501).

The image signals (image data) acquired and held in step S501 may be image data of luminance (monochrome) images consisting only of a luminance component, without color signal components.
Next, differential images corresponding to the images of the image signals acquired in step S501 and held separately in the first image memory 103 and the second image memory 104 are generated (S502).
In the differential image generation processing in step S502, Sobel operator kernels such as those shown in FIG. 6 are applied, taking the sum of products of the weight matrix and the luminance values.

That is, the horizontal differentiation kernel of FIG. 6(a) is applied from left to right along the main scanning direction to obtain the differential image output in the horizontal direction, and the vertical differentiation kernel of FIG. 6(b) is applied from top to bottom along the sub-scanning direction to obtain the differential image output in the vertical direction.
In the horizontal differential image, edges extending mainly in the vertical direction (up-down direction) of the image are detected, and in the vertical differential image, edges extending mainly in the horizontal direction (left-right direction) of the image are detected.
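As a concrete illustration of step S502, the following is a minimal sketch of the horizontal and vertical differentiation. The patent only states that the kernels of FIG. 6 are applied, so the standard 3x3 Sobel weights below are an assumption, and the function and variable names are hypothetical.

import numpy as np
from scipy.ndimage import correlate

# Standard 3x3 Sobel kernels (assumed; the publication only refers to the kernels of FIG. 6).
SOBEL_H = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)   # horizontal differentiation
SOBEL_V = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float32)  # vertical differentiation

def differential_images(gray):
    """Step S502: horizontal and vertical differential images of a monochrome image."""
    gray = gray.astype(np.float32)
    # Correlation (no kernel flip) keeps the sign convention of the text:
    # a low-to-high luminance transition gives a positive differential value.
    dx = correlate(gray, SOBEL_H, mode="nearest")  # responds to edges extending vertically
    dy = correlate(gray, SOBEL_V, mode="nearest")  # responds to edges extending horizontally
    return dx, dy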

In either case, the differential value is positive at edge portions where the luminance changes from low to high, negative at edge portions where the luminance changes from high to low, and approximately zero in portions where the luminance change is small.
In the present embodiment, a binary image is obtained using the data of the horizontal differential image among the above (S503).
In the processing in step S503, a binary image (hereinafter referred to as the positive edge image) is generated from the differential image generated in step S502 by setting pixels whose differential value is equal to or greater than a predetermined threshold TH_BIN_H to 1 and all other pixels to 0.
In addition, a binary image (hereinafter referred to as the negative edge image) is generated by setting pixels whose differential value is less than a predetermined threshold TH_BIN_L to 1 and all other pixels to 0.
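A minimal sketch of the binarization of step S503, continuing the previous sketch (names hypothetical; TH_BIN_H and TH_BIN_L are the experimentally tuned thresholds described next):

def edge_binarization(dx, th_bin_h, th_bin_l):
    """Step S503: split the horizontal differential image into positive and negative edge images."""
    positive_edge = (dx >= th_bin_h).astype("uint8")  # 1 at left-side edge points of lane markings
    negative_edge = (dx < th_bin_l).astype("uint8")   # 1 at right-side edge points of lane markings
    return positive_edge, negative_edge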

Here, the threshold TH_BIN_H is set through experiments so that, in the positive edge image, the edge points on the left side of the lane markings become 1 and the road surface becomes 0.
Similarly, the threshold TH_BIN_L is set so that, in the negative edge image, the edge points on the right side of the lane markings become 1 and the road surface becomes 0.
Next, for the image data acquired by the wide-angle camera 202, the degree of fit between the point sequences on the image expressed by the road model equations described later and the positive and negative edge images generated in step S503 is calculated, and the road model parameters that maximize this degree of fit are determined (S504).

The processing in step S504 is described in detail below with reference to the drawings.
FIG. 7 is a diagram showing the road model applied in the present embodiment.
In FIG. 7, a coordinate system (X-Y-Z) centered on the camera is defined. The Z axis is defined as the vehicle traveling direction, which also coincides with the direction of the camera's optical axis projected onto the horizontal plane.
The Y axis is defined as vertically upward, and the X axis as pointing to the left with respect to the vehicle traveling direction. FIG. 7 shows the host vehicle 700 and the left and right lane markings 701 and 702 in a top view of the vehicle.
Assuming that the host vehicle 700 is positioned at an angle of φ [rad] counterclockwise with respect to the lane, and that the lane can be approximated by straight lines in the region near the host vehicle imaged by the wide-angle camera 202, the equations of the left and right lane markings 701 and 702 can be expressed by Equation 1 below.

[Equation 1: reproduced as an image in the original publication]

Here, ys is the lateral displacement of the host vehicle from the right lane marking 702, and W is the interval between the left and right lane markings 701 and 702, that is, the lane width; W is set in advance to an average constant value.
FIG. 8 is a diagram showing the positional relationship, in a side view of the vehicle, between the host vehicle 700, the wide-angle camera 202 attached to the host vehicle, and the road surface 800. Assuming that the optical axis of the camera 202 is mounted at an angle of η2 [rad] with respect to the road surface 800, the equation of the road surface can be expressed by Equation 2.

[Equation 2: reproduced as an image in the original publication]

Here, h2 represents the height of the camera 202 above the road surface (the height of the imaging viewpoint for acquiring the second image signal above the road surface).
Now consider the two-dimensional coordinate system x-y on the image acquired by the camera 202. The relationship between the three-dimensional coordinate system X-Y-Z and the two-dimensional coordinate system x-y depends on the intrinsic parameters of the camera, determined by the characteristics of the optical system of the camera 202.
Denoting the camera-specific function by f, the left and right lane markings are expressed on the image as in Equation 3.

[Equation 3: reproduced as an image in the original publication]

When the road model defined by Equations 1, 2 and 3 is applied, the parameters of this road model are the four quantities: yaw angle φ, lateral displacement ys, pitch displacement η2, and camera height h2.
This corresponds to setting the road curvature ρ to 0 in the road model shown in the applicant's Japanese Patent No. 3733875.
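Equations 1 to 3 appear only as images in this publication. A plausible reconstruction from the surrounding definitions, under small-angle and ideal pinhole-projection assumptions and therefore not necessarily the exact form of the original, is:

Equation 1 (plan view, straight-lane approximation near the vehicle):
  X = φZ - ys          (right lane marking 702)
  X = φZ - ys + W      (left lane marking 701)
Equation 2 (side view, road surface for the wide-angle camera):
  Y = η2·Z - h2
Equation 3 (projection onto the image through the camera-specific function f):
  (x, y) = f(X, Y, Z), for example x = fu·X/Z, y = fv·Y/Z for an ideal pinhole model,

where the signs of the φ and ys terms depend on the adopted axis conventions.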

In the processing in step S504, the parameters of the road model equations described above are optimized by an optimization method, for example simulated annealing (SA).
At that time, the evaluation value of the degree of fit of the parameters is the sum of the number of points at which the point sequence of the lane marking on the left side of the host vehicle according to Equation 3 overlaps pixels with value 1 in the negative edge image, and the number of points at which the point sequence of the lane marking on the right side of the host vehicle according to Equation 3 overlaps pixels with value 1 in the positive edge image, within a predetermined region of the image.
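The following is a minimal sketch of this fit evaluation as it might be called inside the optimization loop of step S504. The function names are hypothetical, and the projection of the lane markings through Equation 3 is abstracted into the callables project_left and project_right, which would map the parameter set (φ, ys, η2, h2) to image point sequences.

def fitness(params, negative_edge, positive_edge, project_left, project_right):
    """S504 evaluation value: count projected model points landing on edge pixels with value 1."""
    h, w = negative_edge.shape
    score = 0
    # Point sequence of the lane marking on the left of the vehicle, matched against the negative edge image.
    for u, v in project_left(params):
        if 0 <= u < w and 0 <= v < h and negative_edge[v, u] == 1:
            score += 1
    # Point sequence of the lane marking on the right of the vehicle, matched against the positive edge image.
    for u, v in project_right(params):
        if 0 <= u < w and 0 <= v < h and positive_edge[v, u] == 1:
            score += 1
    return score

An optimizer such as simulated annealing would then search over params = (phi, ys, eta2, h2) to maximize this score.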

Next, a known preceding-vehicle recognition algorithm, for example the technique of the applicant's Japanese Patent No. 3146809, is applied to the image acquired by the telephoto camera 201 to determine whether or not a preceding vehicle is present (S505).
When a preceding vehicle is present (S505: Yes), the imaging field of view of the telephoto camera 201 is obstructed by the preceding vehicle and the lane markings are hidden, as described with reference to FIG. 3, so correct lane recognition cannot be expected.

Accordingly, in this case, the four road model parameters calculated in the processing of step S504 are output as the recognition result, and the processing ends.
When no preceding vehicle is present (S505: No), the road curvature ρ is calculated from the image acquired by the telephoto camera 201 on the basis of the four road model parameters calculated in the processing of step S504 (S506), the five parameters including ρ are output as the recognition result, and the processing ends.

Next, the method of calculating the road curvature ρ is described with reference to FIG. 9 and FIG. 10.
FIG. 9 is a diagram showing the lane shape in a top view of the vehicle.
FIG. 10 is a diagram showing the positional relationship, in a side view of the vehicle, between the host vehicle 700, the telephoto camera 201 attached to the host vehicle 700, and the road surface 800.
The camera-centered coordinate system and the yaw angle, pitch displacement, camera height and lateral displacement of the vehicle are defined as in step S504.
However, as shown in FIG. 2 and FIG. 10, the wide-angle camera 202 and the telephoto camera 201 differ in mounting position and attitude.

Accordingly, the pitch displacement η1 of the telephoto camera 201 and its camera height h1 (the height of the imaging viewpoint for acquiring the first image signal above the road surface) differ from the pitch displacement η2 and the camera height h2 of the wide-angle camera 202.
However, since the positional relationship between the wide-angle camera 202 and the telephoto camera 201 is fixed, the pitch displacement η1 and camera height h1 of the telephoto camera 201 can each be calculated relative to the pitch displacement η2 and camera height h2 of the wide-angle camera 202.
Assuming that the lane shape is a circular arc as shown in FIG. 9 and denoting its curvature by ρ, the equations of the left and right lane markings in a top view of the vehicle and the equation of the road surface in a side view of the vehicle can be expressed by Equation 4.

[Equation 4: reproduced as an image in the original publication]
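Equation 4 is likewise reproduced only as an image. A plausible form, extending the straight-lane sketch above with a quadratic term for a circular arc of curvature ρ (a common small-curvature approximation) and again subject to the same sign-convention caveats, is:

  X = (ρ/2)Z² + φZ - ys          (right lane marking)
  X = (ρ/2)Z² + φZ - ys + W      (left lane marking)
  Y = η1·Z - h1                   (road surface as seen from the telephoto camera)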
Here, the lane width W is a constant, as in the example described above, and the values calculated in step S504 are used for the lateral displacement ys, the yaw angle φ and the pitch displacement η.
The camera height h1 is the height of the telephoto camera 201 calculated relative to the height h2 of the wide-angle camera 202 obtained in step S504.
That is, among the road model parameters used here, the only unknown parameter is the road curvature ρ.
Therefore, a method simpler than the optimization method SA used in step S504, for example the golden section method, can be used to optimize the road curvature ρ, and the required amount of computation is greatly reduced compared with the optimization in step S504.
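A minimal sketch of a one-dimensional golden-section search over ρ that could be used in step S506 follows. The names are hypothetical, the fit evaluation on the telephoto image is abstracted as a callable, and the search interval in the usage comment is only an assumed example.

def golden_section_maximize(score, lo, hi, tol=1e-6):
    """Golden-section search for the value in [lo, hi] that maximizes score (used here for the curvature rho)."""
    inv_phi = (5 ** 0.5 - 1) / 2  # reciprocal of the golden ratio, about 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if score(c) >= score(d):   # keep the sub-interval that contains the better point
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example use (assumed curvature interval in 1/m; telephoto_fit_score is a hypothetical evaluation):
# rho = golden_section_maximize(lambda r: telephoto_fit_score(r, fixed_params), -0.01, 0.01)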

(Operations and effects of the first embodiment)
(1-1) The apparatus includes a first image signal input unit 101 that receives the image signal acquired by the camera (telephoto camera) 201 whose optical system is adjusted relatively to the telephoto side, and a second image signal input unit 102 that receives the image signal acquired by the camera (wide-angle camera) 202 whose optical system is adjusted relatively to the wide-angle side.
It also includes a first image memory 103 that holds the image signal received by the first image signal input unit 101, and a second image memory 104 that holds the image signal received by the second image signal input unit 102.
Therefore, the image signal (image data) from the telephoto camera 201 held in the first image memory 103 and the image signal (image data) from the wide-angle camera 202 held in the second image memory 104 can each be used effectively for their respective purposes, making it possible to recognize road markings accurately.

(1-2) The apparatus includes a first variable group calculation unit 105 that calculates the curvature of the curve in the corresponding travel road using the image data from the telephoto camera 201 received by the first image signal input unit 101 and held in the first image memory 103.
Therefore, when the imaging field of view of the telephoto camera 201 is secured without being obstructed by a preceding vehicle or the like, the curvature of the curve in the corresponding travel road can be calculated quickly and with high accuracy.

(1-3) The apparatus includes a second variable group calculation unit 106 that uses the image data from the wide-angle camera 202 received by the second image signal input unit 102 and held in the second image memory 104 to calculate the interval between the lane markings and the relative position of the host vehicle with respect to them, the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, and the displacement of the host vehicle in the yaw direction.
Therefore, using the image data from the wide-angle camera 202, whose imaging field of view is not easily obstructed by a preceding vehicle or the like, the above relative position, the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, and the displacement of the host vehicle in the yaw direction can be calculated accurately.

(1-4) The apparatus includes a vehicular road marking recognition processing unit 107 that uses the information contained in the calculation results of the first variable group calculation unit 105 and the second variable group calculation unit 106 to recognize the position of the road markings relative to the host vehicle and to obtain recognition data representing that recognition.
Therefore, in obtaining the recognition data on the position of the road markings, the relative position of the host vehicle in the vehicle width direction with respect to the road markings, the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, and the displacement of the host vehicle in the yaw direction are taken into account, so the recognition data become highly accurate.

(1-5) The apparatus includes the camera (telephoto camera) 201, whose optical system is adjusted relatively to the telephoto side and which supplies the acquired image signal to the first image signal input unit 101, and the camera (wide-angle camera) 202, whose optical system is adjusted relatively to the wide-angle side and which supplies the acquired image signal to the second image signal input unit 102.
Therefore, image data of the far-looking imaging field of view from the telephoto camera 201 and image data of the wide imaging field of view near the host vehicle from the wide-angle camera 202, which is not easily obstructed by a preceding vehicle, are both obtained, and each can be used effectively for its respective purpose, making it possible to recognize road markings accurately.

(1-6) The first imaging means (telephoto camera) 201 and the second imaging means (wide-angle camera) 202 are configured to perform imaging operations synchronized with each other.
Therefore, simultaneity of image acquisition by the two cameras 201 and 202 is ensured, and the accuracy of the integrated processing executed by the vehicular road marking recognition processing unit 107 using the calculation result of the first variable group calculation unit 105 and the calculation result of the second variable group calculation unit 106 can be maintained at a high level.

(1-7) The telephoto camera 201 is installed so that its mounting height is higher than the mounting height of the wide-angle camera 202, and so that the depression angle of its imaging optical axis is shallower than the depression angle of the imaging optical axis of the wide-angle camera 202.
Therefore, the telephoto camera 201 occupies a mounting position suited to acquiring image data of a far-looking imaging field of view, and the wide-angle camera 202 occupies a mounting position suited to acquiring image data of a wide imaging field of view near the host vehicle that is not easily obstructed by a preceding vehicle.

(Second Embodiment)
FIG. 11 is a conceptual diagram showing a second embodiment of the present invention. In the example of FIG. 11, the vehicular road marking recognition apparatus 100 of FIG. 1, having a telephoto camera 201 and a wide-angle camera 202a, is mounted on the host vehicle 200.
The telephoto camera serving as the first imaging means 201 is installed so as to suit the purpose of acquiring the image signal corresponding to the first imaging target area, which is defined by the relatively narrow first imaging angle of view and the relatively distant first imaging distance described above.
That is, as in the example described with reference to FIG. 2, the telephoto camera 201 is installed so that its mounting height is as high as possible at the center of the host vehicle 200, and so that its imaging optical axis forms a shallow depression angle, slightly below the horizontal.

The wide-angle camera serving as the second imaging means 202a is, as in the example described with reference to FIG. 2, installed so as to suit the purpose of acquiring the image signal corresponding to the second imaging target area, which is defined by the relatively wide second imaging angle of view and the comparatively near second imaging distance described above.
In this second embodiment, however, the wide-angle camera 202a is installed facing rearward, for example at the central portion of the rear of the host vehicle 200, so that its imaging optical axis forms a comparatively deep depression angle and a wide range of lane markings as close as possible to the host vehicle 200 falls within its rearward imaging field of view.

The installation forms of the first imaging means (telephoto camera) 201 and the second imaging means (wide-angle camera) 202a described above consequently result in the following.
That is, the telephoto camera 201 is installed so that its mounting height is higher than the mounting height of the wide-angle camera 202a, and so that the depression angle of its imaging optical axis is shallower than the depression angle of the imaging optical axis of the wide-angle camera 202a.

Viewing this relative relationship with the wide-angle camera 202a as the reference, the wide-angle camera 202a is installed so that its mounting height is lower than the mounting height of the telephoto camera 201, and so that the depression angle of its imaging optical axis is deeper than the depression angle of the imaging optical axis of the telephoto camera 201.
In this second embodiment as well, the first imaging means (telephoto camera) 201 and the second imaging means (wide-angle camera) 202a perform imaging operations synchronized with each other with respect to imaging timing and period, for example by using an appropriate common synchronization signal (its supply system is not shown).

Accordingly, simultaneity of image acquisition by the first imaging means (telephoto camera) 201 and the second imaging means (wide-angle camera) 202a is ensured.
Therefore, the accuracy of the integrated processing executed by the vehicular road marking recognition processing unit 107 using the calculation result of the first variable group calculation unit 105 and the calculation result of the second variable group calculation unit 106 can be maintained at a high level.

The preferred mounting position described above for the second imaging means (wide-angle camera) 202a substantially coincides with the mounting position of rear-view monitoring cameras that are already in widespread use on vehicles.
Therefore, if the image data acquired by such an existing rear-view monitoring camera are supplied to the second image signal input unit 102 of FIG. 1 and further stored in the second image memory 104, the subsequent road marking recognition processing is the same as in the first embodiment described above.

(Operations and effects of the second embodiment)
(2-1) The wide-angle camera 202a is installed facing rearward at the central portion of the rear of the host vehicle 200 so that its mounting height is lower than the mounting height of the telephoto camera 201 and so that the depression angle of its imaging optical axis is deeper than the depression angle of the imaging optical axis of the telephoto camera 201.
With this configuration, a vehicle rear-view monitoring camera that is already in widespread use can be employed as the second imaging means (wide-angle camera) 202a.

(Vehicular road marking recognition method as a third embodiment of the present invention)
A vehicular road marking recognition method as a third embodiment of the present invention is described with reference to FIG. 12, which is its flowchart.
The vehicular road marking recognition method as an embodiment of the present invention includes a first image signal receiving step (S1201), a second image signal receiving step (S1202), a first variable group calculating step (S1203), a second variable group calculating step (S1204), and a vehicular road marking recognition step (S1205).
In the first image signal receiving step (S1201), an image signal corresponding to a first imaging target area defined by a predetermined first imaging angle of view and a predetermined first imaging distance is received.

In the second image signal receiving step (S1202), an image signal corresponding to a second imaging target area defined by a second imaging angle of view wider than the first imaging angle of view and a second imaging distance shorter than the first imaging distance is received.
In the first variable group calculating step (S1203), the curvature of the curve in the corresponding travel road is calculated using the information contained in the image signal received in the first image signal receiving step (S1201).

In the second variable group calculating step (S1204), the information contained in the image signal received in the second image signal receiving step (S1202) is used to calculate the relevant interval between a plurality of road markings used for recognizing the traveling state of the host vehicle, the relative position of the host vehicle in the vehicle width direction with respect to those road markings, the vertical distance of the host vehicle from the road surface, the displacement of the host vehicle in the pitching direction, and the displacement of the host vehicle in the yaw direction.

In the vehicular road marking recognition step (S1205), the position of the road markings relative to the host vehicle is recognized and recognition data representing that recognition are obtained, using the information contained in the calculation result of the first variable group calculating step (S1203) and the calculation result of the second variable group calculating step (S1204).
In the above, the execution order of the first image signal receiving step (S1201) and the second image signal receiving step (S1202) is not fixed; their order may therefore be reversed from that described above, or they may be executed in parallel.
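A minimal sketch of how the steps S1201 to S1205 might be composed into a processing pipeline is given below. All function and variable names are hypothetical and only illustrate the data flow described above; the actual calculations would be those of steps S502 to S506.

def recognize_road_markings(first_signal, second_signal,
                            first_variable_group, second_variable_group):
    """Illustrative composition of steps S1201 to S1205 (hypothetical names)."""
    # S1202 -> S1204: variables of the host vehicle relative to the road markings (wide-angle image).
    lane_width, lateral_position, height, pitch, yaw = second_variable_group(second_signal)
    # S1201 -> S1203: curvature of the curve in the travel road (telephoto image),
    # reusing the variables obtained from the wide-angle image, as in step S506.
    curvature = first_variable_group(first_signal, lateral_position, height, pitch, yaw)
    # S1205: recognition data representing the position of the road markings relative to the host vehicle.
    return {"curvature": curvature, "lane_width": lane_width,
            "lateral_position": lateral_position, "camera_height": height,
            "pitch": pitch, "yaw": yaw}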

Reference Signs List
100 vehicular road marking recognition apparatus
101 first image signal input unit
102 second image signal input unit
103 first image memory
104 second image memory
105 first variable group calculation unit
106 second variable group calculation unit
107 vehicular road marking recognition processing unit
200 vehicle (host vehicle)
201 first imaging means (telephoto camera)
202 second imaging means (wide-angle camera)
202a second imaging means (wide-angle camera)
300 preceding vehicle
301, 302 lane markings
700 host vehicle
701, 702 lane markings
800 road surface

Claims (8)

1. A vehicular road marking recognition apparatus comprising:
a first image signal input unit that receives an image signal corresponding to a first imaging target area defined by a predetermined first imaging angle of view and a predetermined first imaging distance;
a second image signal input unit that receives an image signal corresponding to a second imaging target area defined by a second imaging angle of view wider than the first imaging angle of view and a second imaging distance shorter than the first imaging distance;
a first variable group calculation unit that calculates the curvature of a curve in a travel road using information contained in the image signal received by the first image signal input unit;
a second variable group calculation unit that calculates, using information contained in the image signal received by the second image signal input unit, an interval between a plurality of road markings, a relative position of the host vehicle in the vehicle width direction with respect to the plurality of road markings, a vertical distance of the host vehicle from the road surface, a displacement of the host vehicle in the pitching direction, and a displacement of the host vehicle in the yaw direction; and
a vehicular road marking recognition processing unit that recognizes a position of the road markings relative to the host vehicle and obtains recognition data representing that recognition, using information contained in the calculation result of the first variable group calculation unit and the calculation result of the second variable group calculation unit.
前記第一撮像対象領域に対応する画像信号を取得して前記第一画像信号入力部に供給する第一撮像手段と、前記第二撮像対象領域に対応する画像信号を取得して前記第二画像信号入力部に供給する第二撮像手段と、を更に備えていることを特徴とする請求項1に記載の車両用道路標示認識装置。   A first imaging means for acquiring an image signal corresponding to the first imaging target area and supplying the image signal to the first image signal input unit; and an image signal corresponding to the second imaging target area for acquiring the second image The vehicular road marking recognition apparatus according to claim 1, further comprising: a second imaging unit that supplies the signal input unit. 前記第一撮像手段および前記第二撮像手段は相互に同期した撮像動作を行うことを特徴とする請求項2に記載の車両用道路標示認識装置。   The vehicular road marking recognition apparatus according to claim 2, wherein the first imaging unit and the second imaging unit perform an imaging operation synchronized with each other. 前記第一撮像手段を、その取り付け水準位置が前記第二撮像手段の取り付け水準位置よりも高所となるように、且つ、その撮像光軸の俯角が前記第二撮像手段の撮像光軸の俯角よりも浅い角度をなすように設置することを特徴とする請求項2に記載の車両用道路標示認識装置。   The first imaging means is arranged such that its attachment level position is higher than the attachment level position of the second imaging means, and the depression angle of the imaging optical axis is the depression angle of the imaging optical axis of the second imaging means. The vehicle road marking recognition apparatus according to claim 2, wherein the vehicle road marking recognition apparatus is installed so as to form a shallower angle. 前記第二撮像手段を、その取り付け水準位置が前記第一撮像手段の取り付け水準位置よりも低所となるように、且つ、その撮像光軸の俯角が前記第一撮像手段の撮像光軸の俯角よりも深い角度をなすように設置することを特徴とする請求項2に記載の車両用道路標示認識装置。   The second imaging means is arranged such that its attachment level position is lower than the attachment level position of the first imaging means, and the depression angle of the imaging optical axis is the depression angle of the imaging optical axis of the first imaging means. The vehicular road marking recognition apparatus according to claim 2, wherein the vehicular road marking recognition apparatus is installed so as to form a deeper angle. 前記第二撮像手段を、自車の後部に設けることを特徴とする請求項2、3、4、および、5の何れか一項に記載の車両用道路標示認識装置。   The vehicular road marking recognition apparatus according to any one of claims 2, 3, 4, and 5, wherein the second imaging means is provided at a rear portion of the own vehicle. 前記第一画像信号入力部で受けた画像信号を前記第一変数群算出部からアクセス可能に保持する第一画像メモリと、前記第二画像信号入力部で受けた画像信号を前記第二変数群算出部からアクセス可能に保持する第二画像メモリと、を更に備えていることを特徴とする請求項1に記載の車両用道路標示認識装置。   A first image memory that holds the image signal received by the first image signal input unit so as to be accessible from the first variable group calculation unit, and the image signal received by the second image signal input unit is the second variable group. The vehicular road marking recognition apparatus according to claim 1, further comprising a second image memory that is held accessible from the calculation unit. 所定の第一の撮像画角と所定の第一の撮像距離とで規定する第一撮像対象領域に対応する画像信号を受ける第一画像信号受信ステップと、
前記第一の撮像画角よりも広い第二の撮像画角と第一の撮像距離よりも短い第二の撮像距離とで規定する第一撮像対象領域に対応する画像信号を受ける第二画像信号受信ステップと、
前記第一画像信号受信ステップで受けた画像信号に含まれる情報を利用して、該当する走行路におけるカーブの曲率を算出する第一変数群算出ステップと、
前記第二画像信号受信ステップで受けた画像信号に含まれる情報を利用して、自車の走行状態を認識するために利用する複数の車両用道路標示間の該当する間隔、該複数の車両用道路標示に対する自車の車幅方向の相対位置、走行路面からの自車の上下方向の距離、自車のピッチング方向の変位量、および、自車のヨー方向の変位量を算出する第二変数群算出ステップと、
前記第一変数群算出ステップでの算出結果と前記第二変数群算出ステップでの算出結果とに含まれる情報を利用して、自車に対する当該車両用道路標示の位置を認識し該認識を表す認識データを得る車両用道路標示認識ステップと、
を備えていることを特徴とする車両用道路標示認識方法。
A first image signal receiving step for receiving an image signal corresponding to a first imaging target area defined by a predetermined first imaging angle of view and a predetermined first imaging distance;
A second image signal for receiving an image signal corresponding to a first imaging target area defined by a second imaging field angle wider than the first imaging field angle and a second imaging distance shorter than the first imaging distance. Receiving step;
A first variable group calculating step for calculating a curvature of a curve on a corresponding travel path using information included in the image signal received in the first image signal receiving step;
Corresponding intervals between a plurality of vehicle road markings used for recognizing the traveling state of the own vehicle using the information included in the image signal received in the second image signal receiving step, for the plurality of vehicles A second variable that calculates the relative position of the vehicle in the vehicle width direction relative to the road marking, the distance in the vertical direction of the vehicle from the road surface, the amount of displacement in the pitching direction of the vehicle, and the amount of displacement in the yaw direction of the vehicle A group calculation step;
Using the information included in the calculation result in the first variable group calculation step and the calculation result in the second variable group calculation step, the position of the vehicle road marking with respect to the host vehicle is recognized to represent the recognition. A vehicle road sign recognition step for obtaining recognition data;
A vehicle road marking recognition method comprising:
JP2009006340A 2009-01-15 2009-01-15 Vehicle road sign recognition device Active JP5146330B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009006340A JP5146330B2 (en) 2009-01-15 2009-01-15 Vehicle road sign recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009006340A JP5146330B2 (en) 2009-01-15 2009-01-15 Vehicle road sign recognition device

Publications (2)

Publication Number Publication Date
JP2010165142A true JP2010165142A (en) 2010-07-29
JP5146330B2 JP5146330B2 (en) 2013-02-20

Family

ID=42581255

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009006340A Active JP5146330B2 (en) 2009-01-15 2009-01-15 Vehicle road sign recognition device

Country Status (1)

Country Link
JP (1) JP5146330B2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013022153A1 (en) * 2011-08-05 2013-02-14 Lg Electronics Inc. Apparatus and method for detecting lane
JP2013061771A (en) * 2011-09-13 2013-04-04 Glory Ltd Image feature extraction method and image feature extraction device and image collation method and image collation device using the same
JP2013070823A (en) * 2011-09-28 2013-04-22 Ge Medical Systems Global Technology Co Llc Image data processor, magnetic resonance apparatus, image data processing method, and program
JP2013117811A (en) * 2011-12-02 2013-06-13 Fujitsu Ltd Road shape estimation device and program
WO2014065159A1 (en) * 2012-10-22 2014-05-01 ヤマハ発動機株式会社 Distance measurement device and vehicle using same
CN104185588A (en) * 2012-03-28 2014-12-03 金泰克斯公司 Vehicular imaging system and method for determining roadway width
JP2015032082A (en) * 2013-08-01 2015-02-16 日産自動車株式会社 Vehicle detection device
JP2019522287A (en) * 2016-07-12 2019-08-08 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd Method and system for processing images acquired by a moving body
WO2020021949A1 (en) * 2018-07-24 2020-01-30 株式会社東芝 Imaging system for railway vehicle
JP2021051736A (en) * 2019-09-24 2021-04-01 ▲広▼州大学 Vehicle travel route planning method, apparatus, system, medium and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002092796A (en) * 2000-09-18 2002-03-29 Toyota Motor Corp Lane recognizing device
JP2005165972A (en) * 2003-12-05 2005-06-23 Nissan Motor Co Ltd Device for preventing lane deviation
JP2007099124A (en) * 2005-10-05 2007-04-19 Nissan Motor Co Ltd Lane deviation preventing device and method
JP2008250904A (en) * 2007-03-30 2008-10-16 Toyota Motor Corp Traffic lane division line information detecting device, travel traffic lane maintaining device, and traffic lane division line recognizing method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002092796A (en) * 2000-09-18 2002-03-29 Toyota Motor Corp Lane recognizing device
JP2005165972A (en) * 2003-12-05 2005-06-23 Nissan Motor Co Ltd Device for preventing lane deviation
JP2007099124A (en) * 2005-10-05 2007-04-19 Nissan Motor Co Ltd Lane deviation preventing device and method
JP2008250904A (en) * 2007-03-30 2008-10-16 Toyota Motor Corp Traffic lane division line information detecting device, travel traffic lane maintaining device, and traffic lane division line recognizing method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013022153A1 (en) * 2011-08-05 2013-02-14 Lg Electronics Inc. Apparatus and method for detecting lane
JP2013061771A (en) * 2011-09-13 2013-04-04 Glory Ltd Image feature extraction method and image feature extraction device and image collation method and image collation device using the same
JP2013070823A (en) * 2011-09-28 2013-04-22 Ge Medical Systems Global Technology Co Llc Image data processor, magnetic resonance apparatus, image data processing method, and program
JP2013117811A (en) * 2011-12-02 2013-06-13 Fujitsu Ltd Road shape estimation device and program
CN104185588B (en) * 2012-03-28 2022-03-15 万都移动***股份公司 Vehicle-mounted imaging system and method for determining road width
CN104185588A (en) * 2012-03-28 2014-12-03 金泰克斯公司 Vehicular imaging system and method for determining roadway width
US9955136B2 (en) 2012-10-22 2018-04-24 Yamaha Hatsudoki Kabushiki Kaisha Distance measuring device and vehicle provided therewith
JP5955404B2 (en) * 2012-10-22 2016-07-20 ヤマハ発動機株式会社 Distance measuring device and vehicle using the same
WO2014065159A1 (en) * 2012-10-22 2014-05-01 ヤマハ発動機株式会社 Distance measurement device and vehicle using same
JP2015032082A (en) * 2013-08-01 2015-02-16 日産自動車株式会社 Vehicle detection device
JP2019522287A (en) * 2016-07-12 2019-08-08 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd Method and system for processing images acquired by a moving body
US10789722B2 (en) 2016-07-12 2020-09-29 SZ DJI Technology Co., Ltd. Processing images to obtain environmental information
US11288824B2 (en) 2016-07-12 2022-03-29 SZ DJI Technology Co., Ltd. Processing images to obtain environmental information
WO2020021949A1 (en) * 2018-07-24 2020-01-30 株式会社東芝 Imaging system for railway vehicle
JP2020017824A (en) * 2018-07-24 2020-01-30 株式会社東芝 Railway vehicle imaging system
JP7150508B2 (en) 2018-07-24 2022-10-11 株式会社東芝 Imaging system for railway vehicles
JP2021051736A (en) * 2019-09-24 2021-04-01 ▲広▼州大学 Vehicle travel route planning method, apparatus, system, medium and device

Also Published As

Publication number Publication date
JP5146330B2 (en) 2013-02-20

Similar Documents

Publication Publication Date Title
JP5146330B2 (en) Vehicle road sign recognition device
JP4676373B2 (en) Peripheral recognition device, peripheral recognition method, and program
JP4654163B2 (en) Vehicle surrounding environment recognition device and system
JP6202367B2 (en) Image processing device, distance measurement device, mobile device control system, mobile device, and image processing program
US10169667B2 (en) External environment recognizing device for vehicle and vehicle behavior control device
JP6733225B2 (en) Image processing device, imaging device, mobile device control system, image processing method, and program
JP5011049B2 (en) Image processing system
JP4893212B2 (en) Perimeter monitoring device
WO2015125298A1 (en) Local location computation device and local location computation method
JP5959073B2 (en) Detection device, detection method, and program
JP2014115978A (en) Mobile object recognition device, notification apparatus using the device, mobile object recognition program for use in the mobile object recognition device, and mobile object with the mobile object recognition device
JP6743882B2 (en) Image processing device, device control system, imaging device, image processing method, and program
KR20080024772A (en) Method and apparatus for recognizing parking slot marking by using bird&#39;s eye view and parking assist system using same
JP6687039B2 (en) Object detection device, device control system, imaging device, object detection method, and program
JP6237874B2 (en) Self-position calculation device and self-position calculation method
JP6816401B2 (en) Image processing device, imaging device, mobile device control system, image processing method, and program
US9832444B2 (en) Three-dimensional object detection device
US11030761B2 (en) Information processing device, imaging device, apparatus control system, movable body, information processing method, and computer program product
JP2010256995A (en) Object recognition apparatus
WO2015125299A1 (en) Local location computation device and local location computation method
WO2014017521A1 (en) Three-dimensional object detection device
JP2017004176A (en) Road surface marking detection device and road surface marking detection method
JP2011100174A (en) Apparatus and method for detecting vehicle on lane
US11054245B2 (en) Image processing apparatus, device control system, imaging apparatus, image processing method, and recording medium
CN109522779B (en) Image processing apparatus and method

Legal Events

Date Code Title Description
RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20100917

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20111128

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120726

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120807

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20121003

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20121030

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20121112

R150 Certificate of patent or registration of utility model

Ref document number: 5146330

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151207

Year of fee payment: 3