JP2006171849A - Image processor - Google Patents

Image processor

Info

Publication number
JP2006171849A
JP2006171849A (application JP2004359778A)
Authority
JP
Japan
Prior art keywords
virtual
video
image
image processing
real
Prior art date
Legal status
Granted
Application number
JP2004359778A
Other languages
Japanese (ja)
Other versions
JP4569285B2 (en)
Inventor
Hiroyoshi Yanagi
柳  拓良
Current Assignee
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Nissan Motor Co Ltd
Priority to JP2004359778A
Publication of JP2006171849A
Application granted
Publication of JP4569285B2
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

PROBLEM TO BE SOLVED: To enable the driver to quickly understand the surroundings of the host vehicle, from the video shown after switching, when the video of the vehicle's surroundings is switched.

SOLUTION: The image processor includes real imaging means mounted on the host vehicle for imaging the vehicle's surroundings and outputting real video, image processing means for converting the real video from the real imaging means into virtual video as seen by virtual imaging means set at an arbitrary position and orientation, and display means for displaying the real video of the real imaging means or the virtual video of the image processing means. While the video on the display means is being switched from a first real or virtual video to a second real or virtual video, the image processing means generates an interpolated virtual video viewed from a position and orientation intermediate between the position and orientation from which the first video was captured and those from which the second video was captured, and displays it on the display means.

COPYRIGHT: (C) 2006, JPO & NCIPI

Description

The present invention relates to an image processing apparatus that processes video captured by on-vehicle cameras and converts it into video as seen by a virtual camera.

An apparatus is known that supports reverse parking by processing video captured by an on-vehicle camera, converting it into video as seen by a virtual camera, and superimposing icons indicating the vehicle position and the parking position (see, for example, Patent Document 1).

Prior art documents related to the invention of this application include the following:
JP 2003-118522 A

In the conventional apparatus, however, when the display is switched abruptly between real-camera video and virtual-camera video, it is difficult for the driver to immediately recognize how the objects around the host vehicle in the new video correspond, in identity and position, to the objects in the previous video or to the actual objects around the vehicle.

The image processing apparatus of the invention comprises real imaging means mounted on the host vehicle for imaging the vehicle's surroundings and outputting real video, image processing means for converting the real video from the real imaging means into virtual video as seen by virtual imaging means set at an arbitrary position and orientation, and display means for displaying the real video of the real imaging means or the virtual video of the image processing means. While the video shown on the display means is being switched from a first real or virtual video to a second real or virtual video, the image processing means generates an interpolated virtual video viewed from a position and orientation intermediate between the position and orientation from which the first video was captured and those from which the second video was captured, and displays it on the display means.

According to the present invention, when the video showing the surroundings of the host vehicle is switched, the driver can immediately understand the surroundings from the video shown after switching.

<< First Embodiment of the Invention >>
FIG. 1 shows the configuration of an embodiment. In this embodiment, four cameras 1 to 4 are mounted on the host vehicle and capture real video of the vehicle's surroundings. FIG. 2 shows the installation positions (viewpoints) of the four cameras 1 to 4 and their orientations, that is, the directions of the optical axes of their imaging lenses (line-of-sight directions), as vectors. The front camera 1 is installed at the front end of the fender at the lateral center of the vehicle and captures video looking down at the road surface in front of the host vehicle, as shown in FIG. 3. The right-side camera 2 is installed on the right fender and captures video looking down at the road surface to the right of the host vehicle, as shown in FIG. 4. The left-side camera 3 is installed on the left fender and captures video looking down at the road surface to the left of the host vehicle, as shown in FIG. 5. The rear camera 4 is installed at the rear end of the roof or of the trunk lid at the lateral center of the vehicle and captures video looking down at the road surface behind the host vehicle, as shown in FIG. 6. In this embodiment, cameras equipped with wide-angle lenses having a horizontal angle of view of about 180 degrees are used as an example.

The image processing device 5 comprises a CPU and peripheral components such as memory and an A/D converter, and performs processing such as distortion correction, enlargement, reduction, rotation, viewpoint conversion, and compositing on the video captured by the cameras 1 to 4. Since these image processing techniques are well known, their description is omitted. The viewpoint conversion could be computed in real time, but this embodiment instead uses precomputed conversion tables and assigns the color of each input-video pixel to the corresponding output-video pixel by table lookup. A conversion table is prepared in advance for each type of viewpoint conversion and image compositing. The display 6 shows the video generated by the image processing device 5.
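
As a concrete illustration of the table-driven conversion described above, the following Python sketch shows how a precomputed table could map every output pixel to a source pixel in one of the camera images. The table layout, array names, and function are hypothetical assumptions; the patent only states that pixel colors of the input video are assigned to pixels of the output video by referring to a precomputed conversion table.

    import numpy as np

    def apply_conversion_table(camera_images, table):
        # camera_images: dict {camera_id: H x W x 3 uint8 array} from cameras 1-4
        # table: H_out x W_out x 3 int array; each entry is (camera_id, src_row, src_col)
        h_out, w_out = table.shape[:2]
        out = np.zeros((h_out, w_out, 3), dtype=np.uint8)
        for row in range(h_out):
            for col in range(w_out):
                cam_id, src_row, src_col = table[row, col]
                out[row, col] = camera_images[cam_id][src_row, src_col]
        return out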

The image processing device 5 applies viewpoint conversion, compositing, and other processing to the real video of the vehicle's surroundings from the cameras 1 to 4 mounted on the host vehicle, and converts it into virtual video as seen by a virtual camera set at an arbitrary position and orientation. FIG. 7 shows the installation positions (viewpoints) of the virtual cameras VA, VB, VC, VD, VE, and VF of this embodiment and their orientations, that is, the optical-axis directions of their imaging lenses (line-of-sight directions), as vectors. The virtual camera VA is placed so as to look down at the host vehicle 10 from above and in front of it, and the virtual camera VB is placed so as to view the host vehicle 10 roughly horizontally from in front of it. The virtual camera VC is placed so as to view the area behind the host vehicle from the left side of the vehicle fender, and the virtual camera VD so as to view the area behind the host vehicle from the right side of the fender. Furthermore, the virtual camera VE is placed so as to view the area behind the host vehicle from the lateral center position on the vehicle fender, and the virtual camera VF so as to look down at the host vehicle 10 from directly above the lateral center of the fender. Although the view behind the vehicle from the position of VE would be blocked by the host vehicle 10 itself, the video that would be obtained if the host vehicle 10 were absent is produced by compositing.

The first embodiment shows the image processing used when switching from the bird's-eye video of the virtual camera VA, which looks down at the host vehicle from above and in front of it, to the video of the virtual camera VB, which views the host vehicle roughly horizontally from in front of it. FIG. 8 shows a video example of VA and FIG. 9 a video example of VB. These images are obtained, while backing into a parking space marked with white lines, by viewpoint-converting the video captured by the four on-vehicle cameras 1 to 4 and compositing the results. If the bird's-eye video of VA in FIG. 8 were switched abruptly to the VB video in FIG. 9, the driver could be confused, because the correspondence and positional relationship between the objects in the new video of FIG. 9 and the objects in the original video of FIG. 8, or the actual objects, cannot be grasped immediately. For example, the other vehicle behind the host vehicle does not appear in the original video of FIG. 8 but does appear in the video of FIG. 9 after switching.

FIG. 10 is a top view and a side view showing the positional relationship between the host vehicle and the virtual camera, and FIG. 11 shows how the position (viewpoint) and orientation (optical-axis direction of the imaging lens, i.e. line-of-sight direction) of the virtual camera change during the video switch. In this specification the position of the virtual camera is expressed by X, Y, and Z: the lateral direction of the host vehicle is the X axis, with X = 0 at the lateral center, X > 0 to the left, and X < 0 to the right; the longitudinal direction of the host vehicle is the Y axis, with Y = 0 at the front-wheel position, Y > 0 toward the rear of the vehicle, and Y < 0 toward the front; and the height direction of the host vehicle is the Z axis, with Z = 0 at the ground contact point of the front wheels, Z > 0 above it, and Z < 0 below it. The orientation of the virtual camera is expressed by yaw, pitch, and roll, where yaw is rotation about an axis through the virtual camera parallel to the Z axis, pitch is rotation about an axis through the virtual camera parallel to the X axis, and roll is rotation about an axis through the virtual camera parallel to the Y axis. FIG. 11 lists the position (X, Y, Z) and orientation (yaw, pitch, roll) of the virtual camera for each frame.
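
The sketch below builds an orientation matrix for a virtual camera from its yaw, pitch, and roll angles under the axis conventions just defined (yaw about the Z-parallel axis, pitch about the X-parallel axis, roll about the Y-parallel axis). The composition order Rz·Rx·Ry is an assumption for illustration; the patent does not specify one.

    import numpy as np

    def rotation_from_yaw_pitch_roll(yaw_deg, pitch_deg, roll_deg):
        y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
        rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                       [np.sin(y),  np.cos(y), 0.0],
                       [0.0, 0.0, 1.0]])                  # yaw: about the Z-parallel axis
        rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(p), -np.sin(p)],
                       [0.0, np.sin(p),  np.cos(p)]])     # pitch: about the X-parallel axis
        ry = np.array([[ np.cos(r), 0.0, np.sin(r)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(r), 0.0, np.cos(r)]])     # roll: about the Y-parallel axis
        return rz @ rx @ ry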

In this embodiment, as shown in FIG. 11, the virtual camera position changes frame by frame through VI1 to VI9 from the position of the virtual camera VA above and in front of the host vehicle to the position of the virtual camera VB in front of the host vehicle. Along the way from VA to VB, the distance Y to the host vehicle 10 decreases gradually and the height Z decreases gradually, while the lateral position X stays constant. As for the orientation (line-of-sight direction), the pitch gradually rotates upward from its downward orientation along the way from VA to VB, while the yaw and the roll stay constant.

If the pitch of the virtual camera VA is −35 degrees and the pitch of the virtual camera VB is −5 degrees, the pitch rises by 3 degrees per frame, and if the frame is refreshed at 1/30-second intervals the virtual camera can be switched from the VA position to the VB position in about 1/3 second. That is, the virtual videos of the virtual cameras VA, VI1, VI2, ..., VI9, and VB are displayed in sequence at 1/30-second intervals.
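
A minimal sketch of the frame-by-frame interpolation just described: the virtual camera pose is stepped linearly from VA to VB through the intermediate cameras VI1 to VI9, one pose per 1/30-second frame. The pitch values (−35 and −5 degrees) are taken from the text; the remaining VA and VB coordinates are illustrative placeholders.

    def interpolate_poses(pose_a, pose_b, n_intermediate=9):
        # pose: dict with keys x, y, z (position) and yaw, pitch, roll (orientation, degrees)
        steps = n_intermediate + 1
        return [{k: pose_a[k] + (i / steps) * (pose_b[k] - pose_a[k]) for k in pose_a}
                for i in range(1, steps)]          # poses of VI1 .. VI9

    pose_va = {"x": 0.0, "y": -6.0, "z": 5.0, "yaw": 0.0, "pitch": -35.0, "roll": 0.0}
    pose_vb = {"x": 0.0, "y": -3.0, "z": 1.0, "yaw": 0.0, "pitch": -5.0, "roll": 0.0}
    intermediate_poses = interpolate_poses(pose_va, pose_vb)  # pitch rises 3 degrees per frame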

FIG. 12 shows an example of the switching display of the first embodiment. FIG. 12(a) shows a video example of the virtual camera VA similar to FIG. 8, and FIG. 12(e) shows a video example of the virtual camera VB similar to FIG. 9. FIGS. 12(b), (c), and (d) show examples of the interpolated virtual video of the interpolating virtual cameras VI2, VI5, and VI8 located between VA and VB.

As described above, if the display switched abruptly from the VA video to the VB video, it would take the driver time to understand the VB video after the switch. By sequentially displaying the interpolated virtual videos of the virtual cameras VI1 to VI9 between the virtual video of VA and that of VB, however, the display changes gradually from the VA video to the VB video over about 1/3 second, and the driver can immediately understand the VB video after the switch.

In the embodiment described above, the position (X, Y, Z) and orientation (yaw, pitch, roll) of the virtual camera are changed gradually by interpolation, but parameters such as the degree of image distortion due to the optical characteristics of the imaging lens and the horizontal and vertical angles of view of the image may also be varied. For example, suppose the distortion characteristic of the lens is given by the approximate expression

h = a * sin(b * θ)   (1)

with a = b = 1 for the virtual camera VA and a = b = 2 for the virtual camera VB. Then videos with different distortion characteristics can be joined without any sense of incongruity by varying the parameters a and b continuously. In expression (1), θ is the angle of incidence, h is the distance from the center of the imaging surface of light incident at angle θ, and a and b are constants representing the distortion characteristic of the imaging lens.

<< Second Embodiment of the Invention >>
The second embodiment shows the image processing used when switching from the composite virtual video of the virtual cameras VC and VD shown in FIG. 13 to the video of the virtual camera VA shown in FIG. 8. In the composite virtual video of FIG. 13, the right half of the screen shows the virtual video of VC and the left half shows the virtual video of VD.

The configuration of the second embodiment is the same as that shown in FIG. 1, so its description is omitted. FIG. 14 is a top view and a side view showing the positional relationship between the host vehicle and the virtual cameras, and FIG. 15 shows how the position (viewpoint) and orientation (line-of-sight direction) of the virtual camera change during the video switch. The definitions of the virtual camera position (X, Y, Z) and orientation (yaw, pitch, roll) are the same as in FIGS. 10 and 11. In the second embodiment, the positions (X) of the virtual cameras VC and VD change gradually so that both cameras move toward the lateral center of the host vehicle while, at the same time, their orientations (yaw) change gradually so that both cameras turn toward straight back. After they merge into the virtual camera VE, which views the area behind the host vehicle from the lateral center position on the vehicle fender, the camera position (Y, Z) and orientation (pitch) change gradually until they reach those of the virtual camera VA, which looks down at the host vehicle from above and in front of it. From the positions of VC and VD to the position of VE, the position (X) and orientation (yaw) of the virtual camera are taken as the average of the values of the two virtual cameras VC and VD.

Between the composite virtual video of VC and VD and the virtual video of VE, the virtual videos of the virtual cameras (VC11, VD11) to (VC14, VD14) are displayed frame by frame as interpolation, and then between the virtual video of VE and the virtual video of VA the virtual videos of the virtual cameras VE1 to VE4 are displayed frame by frame as interpolation.

FIG. 16 shows an example of the switching display of the second embodiment. FIG. 16(a) shows a composite virtual video example of the virtual cameras VC and VD similar to FIG. 13, and FIG. 16(c) shows a virtual video example of the virtual camera VE. FIG. 16(b) shows a composite virtual video example of the interpolating virtual cameras (VC11, VD11) between VC, VD and VE. FIG. 16(e) shows a virtual video example of the virtual camera VA similar to FIG. 8, and FIG. 16(d) shows a virtual video example of the interpolating virtual camera VE2 between VE and VA.

Thus, if the display switched abruptly from the composite virtual video of VC and VD to the virtual video of VA, it would take the driver time to understand the VA video after the switch. By displaying the virtual videos of the virtual cameras (VC11, VD11) to (VC14, VD14), VE, and VE1 to VE4 in sequence every 1/30 second as interpolation between the composite virtual video of VC and VD and the virtual video of VA, however, the display changes gradually frame by frame over the roughly 1/3 second from the composite video of VC and VD to the video of VA, and the driver can immediately understand the VA video after the switch.

<< Third Embodiment of the Invention >>
In the second embodiment described above, the composite virtual video of the virtual cameras VC and VD was first switched to the virtual video of the virtual camera VE and then to the virtual video of the virtual camera VA. The third embodiment shows the image processing used when switching directly from the composite virtual video of VC and VD (see FIG. 13) to the virtual video of VA (see FIG. 8). The configuration of the third embodiment is the same as that shown in FIG. 1, so its description is omitted.

FIG. 17 is a top view and a side view showing the positional relationship between the host vehicle and the virtual cameras, and FIG. 18 shows how the position (viewpoint) and orientation (line-of-sight direction) of the virtual camera change during the video switch. The definitions of the virtual camera position (X, Y, Z) and orientation (yaw, pitch, roll) are the same as in FIGS. 10 and 11. In the third embodiment, the positions (X) of the virtual cameras VC and VD change gradually so that both cameras move toward the lateral center of the host vehicle, while their orientations (yaw) change gradually so that both cameras turn toward straight back. Simultaneously with these changes in position (X) and orientation (yaw), the positions (Y, Z) of both virtual cameras change gradually toward the position (Y, Z) of the virtual camera VA, and their orientations (pitch) change gradually toward the orientation (pitch) of VA. From the positions of VC and VD to the position of VA, the position (X) and orientation (yaw) of the virtual camera are taken as the average of the values of the two virtual cameras VC and VD.

In the third embodiment, the composite virtual videos of the virtual cameras (VC11, VD11) to (VC19, VD19) are displayed frame by frame as interpolation between the composite virtual video of VC and VD and the virtual video of VA. FIG. 19 shows an example of the switching display of the third embodiment. FIG. 19(a) shows a composite virtual video example of VC and VD similar to FIG. 13, and FIG. 19(e) shows a virtual video example of VA similar to FIG. 8. FIGS. 19(b), (c), and (d) show composite virtual video examples of the interpolating virtual cameras (VC12, VD12), (VC15, VD15), and (VC18, VD18) between VC, VD and VA.

Thus, if the display switched abruptly from the composite virtual video of VC and VD to the virtual video of VA, it would take the driver time to understand the VA video after the switch. By displaying the virtual videos of the virtual cameras (VC11, VD11) to (VC19, VD19) in sequence, one per 1/30-second frame, as interpolation between the composite virtual video of VC and VD and the virtual video of VA, however, the display changes gradually frame by frame over the roughly 1/3 second from the composite video of VC and VD to the video of VA, and the driver can immediately understand the VA video after the switch.

<< Fourth Embodiment of the Invention >>
In the fourth embodiment, the composite virtual video of the virtual cameras VC and VD shown in FIG. 13 is first switched to the virtual video of the virtual camera VF and then to the virtual video of the virtual camera VA. The configuration of the fourth embodiment is the same as that shown in FIG. 1, so its description is omitted.

FIG. 20 is a top view and a side view showing the positional relationship between the host vehicle and the virtual cameras, and FIG. 21 shows how the position (viewpoint) and orientation (line-of-sight direction) of the virtual camera change during the video switch. The definitions of the virtual camera position (X, Y, Z) and orientation (yaw, pitch, roll) are the same as in FIGS. 10 and 11. In the fourth embodiment, the positions (X) of the virtual cameras VC and VD change gradually so that both cameras move toward the lateral center of the host vehicle, while their orientations (yaw) change gradually so that both cameras turn toward straight back. Simultaneously with these changes in position (X) and orientation (yaw), the positions (Y, Z) of both virtual cameras change gradually toward the position (Y, Z) of the virtual camera VF, and their orientations (pitch) change gradually toward the orientation (pitch) of VF. After VC and VD have merged into the virtual camera VF, the camera position (Y, Z) and orientation (pitch) then approach those of the virtual camera VA.

From the positions of VC and VD to the position of VF, the position (X) and orientation (yaw) of the virtual camera are taken as the average of the values of the two virtual cameras VC and VD. When the positions of the virtual cameras VC, VD, VA, and VF are (X1, Y1), (X2, Y2), (X3, Y3), and (X4, Y4), respectively, the X coordinate X4 and the Y coordinate Y4 of the virtual camera VF are set to

X4 = (X1 + X2 + X3) / 3,
Y4 = (Y1 + Y3) / 2 = (Y2 + Y3) / 2.

In the fourth embodiment, the composite virtual videos of the virtual cameras (VC11, VD11) to (VC14, VD14) are displayed frame by frame as interpolation between the composite virtual video of VC and VD and the virtual video of VF. FIG. 22 shows an example of the switching display of the fourth embodiment. FIG. 22(a) shows a composite virtual video example of VC and VD similar to FIG. 13, and FIG. 22(e) shows a virtual video example of VA similar to FIG. 8. FIG. 22(b) shows a composite virtual video example of the interpolating virtual cameras (VC12, VD12) between VC, VD and VF, and FIGS. 22(c) and (d) show virtual video examples of the interpolating virtual cameras VF1 and VF3 between VF and VA.

Thus, if the display switched abruptly from the composite virtual video of VC and VD to the virtual video of VA, it would take the driver time to understand the VA video after the switch. By displaying the virtual videos of the virtual cameras (VC11, VD11) to (VC14, VD14), VF, and VF1 to VF4 in sequence, one per 1/30-second frame, as interpolation between the composite video of VC and VD and the virtual video of VA, however, the display changes gradually over the roughly 1/3 second from the composite video of VC and VD to the video of VA, and the driver can immediately understand the VA video after the switch.

<< Fifth Embodiment of the Invention >>
In the fifth embodiment, the composite virtual video of the virtual cameras VC and VD shown in FIG. 13 is first switched to the virtual video of the virtual camera VB and then to the virtual video of the virtual camera VA. The configuration of the fifth embodiment is the same as that shown in FIG. 1, so its description is omitted.

FIG. 23 is a top view and a side view showing the positional relationship between the host vehicle and the virtual cameras, and FIG. 24 shows how the position (viewpoint) and orientation (line-of-sight direction) of the virtual camera change during the video switch. The definitions of the virtual camera position (X, Y, Z) and orientation (yaw, pitch, roll) are the same as in FIGS. 10 and 11. In the fifth embodiment, the positions (X) of the virtual cameras VC and VD change gradually so that both cameras move toward the lateral center of the host vehicle, while their orientations (yaw) change gradually so that both cameras turn toward straight back. Simultaneously with these changes in position (X) and orientation (yaw), the positions (Y) of both virtual cameras change gradually toward the position (Y) of the virtual camera VB. From the positions of VC and VD to the position of VB, the position (X) and orientation (yaw) of the virtual camera are taken as the average of the values of the two virtual cameras VC and VD. After VC and VD have merged into the virtual camera VB, the camera position (Y, Z) and orientation (pitch) then approach those of the virtual camera VA.

In the fifth embodiment, the composite virtual videos of the virtual cameras (VC11, VD11) to (VC14, VD14) are displayed frame by frame as interpolation between the composite virtual video of VC and VD and the virtual video of VB. FIG. 25 shows an example of the switching display of the fifth embodiment. FIG. 25(a) shows a composite virtual video example of VC and VD similar to FIG. 13, and FIG. 25(e) shows a virtual video example of VA similar to FIG. 8. FIG. 25(b) shows a composite virtual video example of the interpolating virtual cameras (VC12, VD12) between VC, VD and VB, and FIGS. 25(c) and (d) show virtual video examples of the interpolating virtual cameras VB1 and VB3 between VB and VA.

Thus, if the display switched abruptly from the composite virtual video of VC and VD to the virtual video of VA, it would take the driver time to understand the VA video after the switch. By displaying the virtual videos of the virtual cameras (VC11, VD11) to (VC14, VD14), VB, and VB1 to VB4 in sequence, one per 1/30-second frame, as interpolation between the composite virtual video of VC and VD and the virtual video of VA, however, the display changes gradually over the roughly 1/3 second from the composite video of VC and VD to the video of VA, and the driver can immediately understand the VA video after the switch.

<< Sixth Embodiment of the Invention >>
An example in which a wire frame of the host vehicle is superimposed on the video of the vehicle's surroundings is described. FIGS. 26(a) to (e) show an example in which the wire frame of the host vehicle is superimposed on each video when switching from the virtual video of the virtual camera VA to the virtual video of the virtual camera VB. By superimposing the wire frame of the host vehicle on the virtual video of its surroundings in this way, the positional relationship between the host vehicle and the parking space or other vehicles in the video can be grasped easily, and the video of the surroundings that is finally displayed can be understood immediately.

<< Seventh Embodiment of the Invention >>
An example in which grid lines (a lattice) are superimposed on the video of the vehicle's surroundings is described. FIGS. 27(a) to (e) show an example in which grid lines are superimposed on each video when switching from the virtual video of the virtual camera VA to the virtual video of the virtual camera VB. Grid lines laid out at equal intervals on the ground (on a planar road map) are viewpoint-converted into the grid lines seen from the position (viewpoint) and orientation (line-of-sight direction) of the virtual camera and superimposed on the video of the vehicle's surroundings, as sketched below. This makes it possible to grasp intuitively the distances to the parking space and to other vehicles in the video, and the video of the surroundings that is finally displayed can be understood immediately.

<< Eighth Embodiment of the Invention >>
In the embodiments described above, interpolated video is displayed while switching between videos of the vehicle's surroundings so that the finally displayed video is easy to understand; a wire-frame image of the host vehicle may be displayed instead of the interpolated video. For example, as shown in FIG. 28, when switching from the virtual video (a) of the virtual camera VA to the virtual video (e) of the virtual camera VB, no interpolated virtual video is displayed during the switch; instead, wire-frame images of the host vehicle are displayed as shown in (b) to (d). In these images the wire frame of the host vehicle is converted into the view seen from the position (viewpoint) and orientation (line-of-sight direction) of the virtual camera that would capture the interpolated virtual video. This makes it easy to follow how the position and orientation of the virtual camera, that is, the viewpoint and line-of-sight direction of the surrounding-view video, change, and the finally displayed video of the surroundings can be understood immediately.

<< Ninth Embodiment of the Invention >>
Immediately after the driver activates the image processing apparatus, the composite virtual video of the virtual cameras VC and VD shown in FIG. 13 is displayed. If this composite virtual video were shown abruptly right at start-up, however, the driver would find it hard to understand from which position (viewpoint) and in which direction (line-of-sight direction) it is viewed. Therefore, as shown in FIG. 29(a), the video of the virtual camera VF or VA, that is, a bird's-eye video looking down at the host vehicle from above its front, is displayed first; then, as shown in (b) to (d), the position (viewpoint) and orientation (line-of-sight direction) of the virtual camera are changed gradually until they reach the positions of the virtual cameras VC and VD; and finally the composite virtual video of VC and VD is displayed as shown in (e).

In the embodiments described above, examples of switching from the virtual video of one virtual camera to the virtual video of another virtual camera were shown, but the present invention can also be applied when switching from the real video of a real camera to the virtual video of a virtual camera, or conversely from the virtual video of a virtual camera to the real video of a real camera.

As described above, according to the embodiment, the apparatus comprises the real cameras 1 to 4 mounted on the host vehicle, which capture the vehicle's surroundings and output real video, the image processing device 5, which converts the real video from the real cameras 1 to 4 into virtual video as seen by the virtual cameras VA, VB, VC, VD, VE, and VF set at arbitrary positions and orientations, and the display 6, which displays the real video of the real cameras 1 to 4 or the virtual video of the image processing device 5. While the video shown on the display 6 is being switched from a first real or virtual video to a second real or virtual video, the image processing device 5 generates an interpolated virtual video viewed from a position and orientation intermediate between the position and orientation from which the first video was captured and those from which the second video was captured, and displays it on the display 6. As a result, when the video showing the surroundings of the host vehicle is switched, the driver can immediately recognize how the objects around the vehicle in the new video correspond, in identity and position, to the objects in the previous video or to the actual objects around the vehicle, and can understand the surroundings immediately from the video after switching.

Also, according to the embodiment, while the video shown on the display 6 is being switched from the first real or virtual video to the second real or virtual video, a plurality of interpolated virtual videos viewed from a plurality of intermediate positions and orientations, from the position and orientation of the first video to those of the second video, are generated and displayed on the display 6 in sequence. The driver can therefore follow how the video changes and can understand the surroundings of the host vehicle immediately from the video after switching.

Furthermore, according to the embodiment, the image processing device 5 continuously varies the optical characteristics of the imaging lenses of the real cameras 1 to 4 and of the virtual cameras VA, VB, VC, VD, VE, and VF while the video shown on the display 6 is being switched from the first real or virtual video to the second real or virtual video, so the virtual video is easier to view and the switching is easy to follow.

According to the embodiment, the image processing device 5 generates real or virtual video by compositing the videos of a plurality of the real cameras 1 to 4 or of the virtual cameras VA, VB, VC, VD, VE, and VF and displays it on the display 6, so the environment around the host vehicle can be grasped efficiently from the video on a single display 6.

According to the embodiment, while switching from the composite video of a plurality of real or virtual cameras to the virtual video of a single virtual camera A set above the host vehicle and oriented toward it, the image processing device 5 generates an interpolated virtual video of a single virtual camera B set closer to the host vehicle than the virtual camera A and oriented toward it, and displays it on the display 6. This reduces the proportion of composite virtual video among the interpolated virtual videos, makes the interpolated virtual video easier to view, and makes the switching easy to follow.

According to the embodiment, when switching from the composite video of a plurality of real or virtual cameras to the video of a single real or virtual camera, the position and orientation of the single camera are taken as the averages of the positions and orientations of the plural real or virtual cameras that capture the composite video, so the switching is easy to follow and the video after switching can be understood immediately.

According to the embodiment, when the image processing device 5 displays the virtual video of a virtual camera C set close to the host vehicle and oriented toward it, it first displays the virtual video of a virtual camera D set farther from the host vehicle than the virtual camera C and oriented toward it, then displays the interpolated virtual video of a virtual camera E set at a position and orientation intermediate between the virtual cameras D and C, and then displays the virtual video of the virtual camera C. Even when a virtual video viewing the host vehicle from a nearby position is displayed immediately after the image processing apparatus is started, the surroundings of the host vehicle in that virtual video can therefore be understood immediately.

According to the embodiment, the image processing device 5 superimposes on the interpolated virtual video a schematic view (wire frame) of the host vehicle as seen from the position and orientation of the virtual camera that captures the interpolated virtual video and displays it on the display 6, so the position of the host vehicle in the interpolated virtual video and the progress of the switch are easy to follow, and the video after switching can be understood immediately.

According to the embodiment, the image processing device 5 converts grid lines (lattice lines) laid out at equal intervals on the road surface into the grid lines seen from the position and orientation of the virtual camera that captures the interpolated virtual video, superimposes them on the interpolated virtual video, and displays the result on the display 6. The distances to the parking space and to other vehicles in the video of the surroundings can therefore be grasped intuitively, and the finally displayed video of the surroundings can be understood immediately.

According to the embodiment, the image processing device 5 displays on the display 6, in place of the interpolated virtual video, a schematic view of the host vehicle as seen from the position and orientation of the virtual camera that captures the interpolated virtual video, so the position of the host vehicle and the progress of the switch are easy to follow, and the video after switching can be understood immediately.

Brief Description of the Drawings
FIG. 1 is a diagram showing the configuration of an embodiment.
FIG. 2 is a diagram showing the installation positions of the on-vehicle cameras.
FIG. 3 is a diagram showing video captured by the front camera.
FIG. 4 is a diagram showing video captured by the right-side camera.
FIG. 5 is a diagram showing video captured by the left-side camera.
FIG. 6 is a diagram showing video captured by the rear camera.
FIG. 7 is a diagram showing the positions and orientations of the virtual cameras.
FIG. 8 is a diagram showing a video example of the virtual camera VA.
FIG. 9 is a diagram showing a video example of the virtual camera VB.
FIG. 10 is a diagram showing the positional relationship between the host vehicle and the virtual camera in the first embodiment.
FIG. 11 is a diagram showing the change in position and orientation of the virtual camera during video switching in the first embodiment.
FIG. 12 is a diagram showing a video switching display example of the first embodiment.
FIG. 13 is a diagram showing the composite video of the virtual cameras VC and VD.
FIG. 14 is a diagram showing the positional relationship between the host vehicle and the virtual cameras in the second embodiment.
FIG. 15 is a diagram showing the change in position and orientation of the virtual camera during video switching in the second embodiment.
FIG. 16 is a diagram showing a video switching display example of the second embodiment.
FIG. 17 is a diagram showing the positional relationship between the host vehicle and the virtual cameras in the third embodiment.
FIG. 18 is a diagram showing the change in position and orientation of the virtual camera during video switching in the third embodiment.
FIG. 19 is a diagram showing a video switching display example of the third embodiment.
FIG. 20 is a diagram showing the positional relationship between the host vehicle and the virtual cameras in the fourth embodiment.
FIG. 21 is a diagram showing the change in position and orientation of the virtual camera during video switching in the fourth embodiment.
FIG. 22 is a diagram showing a video switching display example of the fourth embodiment.
FIG. 23 is a diagram showing the positional relationship between the host vehicle and the virtual cameras in the fifth embodiment.
FIG. 24 is a diagram showing the change in position and orientation of the virtual camera during video switching in the fifth embodiment.
FIG. 25 is a diagram showing a video switching display example of the fifth embodiment.
FIG. 26 is a diagram showing a video switching display example of the sixth embodiment.
FIG. 27 is a diagram showing a video switching display example of the seventh embodiment.
FIG. 28 is a diagram showing a video switching display example of the eighth embodiment.
FIG. 29 is a diagram showing a video switching display example of the ninth embodiment.

Explanation of Symbols

1 Front camera
2 Right-side camera
3 Left-side camera
4 Rear camera
5 Image processing device
6 Display

Claims (10)

1. An image processing apparatus comprising:
real imaging means mounted on a host vehicle, for imaging the surroundings of the host vehicle and outputting a real video;
image processing means for converting the real video from the real imaging means into a virtual video as seen by virtual imaging means set at an arbitrary position and orientation; and
display means for displaying the real video from the real imaging means or the virtual video from the image processing means,
wherein, while switching the video shown on the display means from a first real or virtual video to a second real or virtual video, the image processing means generates an interpolated virtual video viewed from a position and orientation intermediate between the position and orientation from which the first real or virtual video was captured and the position and orientation from which the second real or virtual video was captured, and displays the interpolated virtual video on the display means.
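
As a purely illustrative sketch of the kind of intermediate viewpoint described in claim 1 (the pose representation, function names, and NumPy usage are assumptions made here, not part of the disclosure), the interpolating virtual camera could be placed by blending the two capture positions linearly and the two viewing directions spherically:

```python
import numpy as np

def slerp(d0, d1, t):
    """Spherical linear interpolation between two unit viewing directions."""
    d0, d1 = d0 / np.linalg.norm(d0), d1 / np.linalg.norm(d1)
    theta = np.arccos(np.clip(np.dot(d0, d1), -1.0, 1.0))
    if theta < 1e-6:                       # nearly parallel: plain lerp is fine
        return (1 - t) * d0 + t * d1
    return (np.sin((1 - t) * theta) * d0 + np.sin(t * theta) * d1) / np.sin(theta)

def intermediate_pose(pos_a, dir_a, pos_b, dir_b, t=0.5):
    """Position and viewing direction of the interpolating virtual camera.

    pos_a/pos_b: 3-vector capture positions of the first and second views.
    dir_a/dir_b: unit viewing directions of the first and second views.
    t=0.5 yields the midway pose used for a single interpolated frame.
    """
    pos = (1 - t) * np.asarray(pos_a, float) + t * np.asarray(pos_b, float)
    direction = slerp(np.asarray(dir_a, float), np.asarray(dir_b, float), t)
    return pos, direction
```

Rendering a frame from this pose with the same viewpoint-conversion step used for the ordinary virtual videos would then yield the interpolated virtual video.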
2. The image processing apparatus according to claim 1,
wherein, while switching the video shown on the display means from the first real or virtual video to the second real or virtual video, the image processing means generates a plurality of interpolated virtual videos viewed from a plurality of intermediate positions and orientations along the path from the position and orientation from which the first real or virtual video was captured to the position and orientation from which the second real or virtual video was captured, and displays them on the display means in sequence.
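
Continuing the same sketch (and reusing the hypothetical intermediate_pose from the example under claim 1), a whole transition could be animated by stepping the blend factor and rendering one interpolated frame per step; render_virtual_view and display below are stand-ins assumed only for illustration:

```python
def transition_poses(pos_a, dir_a, pos_b, dir_b, n_steps=10):
    """Yield intermediate virtual-camera poses from view A toward view B."""
    for i in range(1, n_steps):
        t = i / n_steps
        yield intermediate_pose(pos_a, dir_a, pos_b, dir_b, t)

# Hypothetical usage, one displayed frame per intermediate pose:
# for pos, direction in transition_poses(pos_a, dir_a, pos_b, dir_b):
#     display(render_virtual_view(pos, direction))
```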
3. The image processing apparatus according to claim 1 or 2,
wherein, while switching the video shown on the display means from the first real or virtual video to the second real or virtual video, the image processing means continuously changes the optical characteristics of the imaging lenses of the real imaging means and the virtual imaging means.
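
One plausible reading of "continuously changing the optical characteristics" is to blend the field of view (equivalently, the focal length of the virtual pinhole model) with the same factor used for the pose; the angles and function names below are example values assumed only for illustration:

```python
import math

def blended_fov_deg(fov_a_deg, fov_b_deg, t):
    """Field of view of the interpolating virtual camera at blend factor t."""
    return (1 - t) * fov_a_deg + t * fov_b_deg

def focal_length_px(fov_deg, image_width_px):
    """Pinhole focal length in pixels for a given horizontal field of view."""
    return (image_width_px / 2) / math.tan(math.radians(fov_deg) / 2)

# Example: morphing from a 140-degree wide-angle source view toward a
# 60-degree virtual view; at t = 0.5 the frame is rendered with 100 degrees.
```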
4. The image processing apparatus according to any one of claims 1 to 3,
wherein the image processing means combines the videos of a plurality of the real imaging means or the virtual imaging means to generate a composite real or virtual video and displays it on the display means.
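
A heavily reduced sketch of the compositing step: if each camera's frame has already been converted to a top-down (ground-plane) view, the composite is just the four strips pasted around a common canvas. The canvas and strip sizes are assumptions; an actual implementation would warp each frame using the camera calibration and blend the overlapping corners.

```python
import numpy as np

def compose_surround_view(front, rear, left, right, canvas_hw=(600, 600)):
    """Paste four pre-warped top-down views onto one ground-plane canvas.

    Assumes front/rear are 150x600x3 and left/right are 600x150x3 uint8
    images; corner overlaps are simply overwritten by the later strips.
    """
    h, w = canvas_hw
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    canvas[:150, :, :] = front       # strip across the top of the canvas
    canvas[h - 150:, :, :] = rear    # strip across the bottom
    canvas[:, :150, :] = left        # strip down the left edge
    canvas[:, w - 150:, :] = right   # strip down the right edge
    return canvas
```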
5. The image processing apparatus according to claim 4,
wherein, while switching from the composite video of the plurality of real imaging means or virtual imaging means to the virtual video of a single virtual imaging means A set above the host vehicle and oriented toward the host vehicle, the image processing means generates an interpolated virtual video of a single virtual imaging means B set at a position closer to the host vehicle than the virtual imaging means A and oriented toward the host vehicle, and displays it on the display means.
6. The image processing apparatus according to claim 4 or 5,
wherein, when switching from the composite video of the plurality of real imaging means or virtual imaging means to the video of a single real imaging means or virtual imaging means, the position and orientation of the single real or virtual imaging means are set to the average of the positions and orientations of the plurality of real or virtual imaging means that capture the composite video.
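
The "average position and orientation" of this claim could be computed as below; treating the viewing directions as unit vectors, averaging them, and renormalizing is one simple interpretation assumed here for illustration:

```python
import numpy as np

def average_pose(positions, directions):
    """Average the poses of the cameras that produced the composite view.

    positions:  list of 3-vector camera locations.
    directions: list of unit 3-vector viewing directions (assumed not to
                cancel each other out, otherwise the result is undefined).
    """
    mean_pos = np.mean(np.asarray(positions, dtype=float), axis=0)
    mean_dir = np.mean(np.asarray(directions, dtype=float), axis=0)
    mean_dir /= np.linalg.norm(mean_dir)   # renormalize the averaged direction
    return mean_pos, mean_dir
```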
7. The image processing apparatus according to any one of claims 1 to 6,
wherein, when displaying the virtual video of virtual imaging means C set at a position close to the host vehicle and oriented toward the host vehicle, the image processing means first displays the virtual video of virtual imaging means D set at a position farther from the host vehicle than the virtual imaging means C and oriented toward the host vehicle, then displays the interpolated virtual video of virtual imaging means E set at a position and orientation intermediate between the virtual imaging means D and the virtual imaging means C, and thereafter displays the virtual video of the virtual imaging means C.
8. The image processing apparatus according to any one of claims 1 to 7,
wherein the image processing means superimposes, on the interpolated virtual video, a schematic diagram of the host vehicle as seen from the position and orientation of the virtual imaging means used to capture the interpolated virtual video, and displays the result on the display means.
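
The superimposed vehicle schematic can be thought of as an alpha-blended overlay drawn from the interpolating camera's viewpoint. The sketch below, an assumption for illustration only, shows just the blending step; producing the schematic for a given viewpoint (e.g., from a small 3D model or a set of pre-rendered icons) is assumed to happen elsewhere.

```python
import numpy as np

def overlay_vehicle_schematic(frame, icon_rgba, top_left):
    """Alpha-blend a pre-rendered own-vehicle schematic onto a video frame.

    frame:     HxWx3 uint8 interpolated virtual video frame (modified in place).
    icon_rgba: hxwx4 uint8 schematic with an alpha channel.
    top_left:  (row, col) of the schematic's upper-left corner in the frame.
    """
    y, x = top_left
    h, w = icon_rgba.shape[:2]
    alpha = icon_rgba[:, :, 3:4].astype(float) / 255.0
    region = frame[y:y + h, x:x + w, :].astype(float)
    blended = alpha * icon_rgba[:, :, :3] + (1.0 - alpha) * region
    frame[y:y + h, x:x + w, :] = blended.astype(np.uint8)
    return frame
```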
9. The image processing apparatus according to any one of claims 1 to 7,
wherein the image processing means converts grid lines set at equal intervals on the road surface into grid lines as seen from the position and orientation of the virtual imaging means used to capture the interpolated virtual video, superimposes them on the interpolated virtual video, and displays the result on the display means.
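
The grid-line overlay amounts to projecting points on the road plane through the interpolating camera's pinhole model. A minimal sketch, assuming a simple look-at camera in a z-up world, no lens distortion, and a viewing direction that is not parallel to the up vector; all names and parameters are assumptions for illustration:

```python
import numpy as np

def look_at_rotation(cam_dir, up=(0.0, 0.0, 1.0)):
    """World-to-camera rotation whose third row is the viewing direction."""
    z = np.asarray(cam_dir, float); z /= np.linalg.norm(z)
    x = np.cross(np.asarray(up, float), z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

def project_road_point(p_world, cam_pos, cam_dir, f_px, cx, cy):
    """Project a road-plane point (X, Y, 0) into the virtual camera image."""
    R = look_at_rotation(cam_dir)
    p_cam = R @ (np.asarray(p_world, float) - np.asarray(cam_pos, float))
    if p_cam[2] <= 0:                      # behind the camera: not drawable
        return None
    u = cx + f_px * p_cam[0] / p_cam[2]    # image column
    v = cy + f_px * p_cam[1] / p_cam[2]    # image row
    return u, v

# Hypothetical usage: sample each grid line (e.g. every 0.5 m on the road),
# project the samples, and connect the surviving points to draw the overlay.
```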
10. The image processing apparatus according to any one of claims 1 to 7,
wherein the image processing means displays, on the display means, a schematic diagram of the host vehicle as seen from the position and orientation of the virtual imaging means used to capture the interpolated virtual video, in place of the interpolated virtual video.
JP2004359778A 2004-12-13 2004-12-13 Image processing device Active JP4569285B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004359778A JP4569285B2 (en) 2004-12-13 2004-12-13 Image processing device

Publications (2)

Publication Number Publication Date
JP2006171849A true JP2006171849A (en) 2006-06-29
JP4569285B2 JP4569285B2 (en) 2010-10-27

Family

ID=36672566

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2004359778A Active JP4569285B2 (en) 2004-12-13 2004-12-13 Image processing device

Country Status (1)

Country Link
JP (1) JP4569285B2 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002314991A (en) * 2001-02-09 2002-10-25 Matsushita Electric Ind Co Ltd Image synthesizer
JP2003204547A (en) * 2001-10-15 2003-07-18 Matsushita Electric Ind Co Ltd Vehicle surrounding monitoring system and method for adjusting the same
JP2003118522A (en) * 2001-10-18 2003-04-23 Clarion Co Ltd Parking support device
JP2004032464A (en) * 2002-06-27 2004-01-29 Clarion Co Ltd Method for displaying image of periphery of vehicle, signal processing unit used therefor and vehicle periphery monitoring device with the same processing unit mounted

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008148059A (en) * 2006-12-11 2008-06-26 Denso Corp Vehicle-surroundings monitor apparatus
US9840199B2 (en) 2008-05-29 2017-12-12 Fujitsu Limited Vehicle image processing apparatus and vehicle image processing method
WO2009144994A1 (en) * 2008-05-29 2009-12-03 Fujitsu Ltd Vehicle image processor, and vehicle image processing system
US9403483B2 (en) 2008-05-29 2016-08-02 Fujitsu Limited Vehicle image processing apparatus and vehicle image processing method
JP5397373B2 (en) * 2008-05-29 2014-01-22 Fujitsu Ltd VEHICLE IMAGE PROCESSING DEVICE AND VEHICLE IMAGE PROCESSING METHOD
US9475430B2 (en) 2008-05-29 2016-10-25 Fujitsu Limited Vehicle image processing apparatus and vehicle image processing method
JP2010016571A (en) * 2008-07-02 2010-01-21 Honda Motor Co Ltd Driving support device
JP2010081245A (en) * 2008-09-25 2010-04-08 Nissan Motor Co Ltd Display device for vehicle, and display method
JP2010199835A (en) * 2009-02-24 2010-09-09 Nissan Motor Co Ltd Image processor
JP2010231276A (en) * 2009-03-25 2010-10-14 Fujitsu Ltd Method and apparatus for processing image
EP2234399B1 (en) 2009-03-25 2016-08-17 Fujitsu Limited Image processing method and image processing apparatus
US8576285B2 (en) 2009-03-25 2013-11-05 Fujitsu Limited In-vehicle image processing method and image processing apparatus
WO2011001642A1 (en) * 2009-06-29 2011-01-06 Panasonic Corp Vehicle-mounted video display device
JP5064601B2 (en) * 2009-06-29 2012-10-31 Panasonic Corp In-vehicle video display
JP2011091527A (en) * 2009-10-21 2011-05-06 Panasonic Corp Video conversion device and imaging apparatus
CN102577374A (en) * 2009-10-21 2012-07-11 Panasonic Corp Video image conversion device and image capture device
WO2011048716A1 (en) * 2009-10-21 2011-04-28 Panasonic Corp Video image conversion device and image capture device
JP2012080555A (en) * 2011-11-09 2012-04-19 Toshiba Corp Video reproducing device and video reproducing method
KR20140114373A (en) * 2012-01-19 2014-09-26 Robert Bosch GmbH Method and device for visualizing the surroundings of a vehicle
KR102034189B1 (en) * 2012-01-19 2019-10-18 Robert Bosch GmbH Method and device for visualizing the surroundings of a vehicle
JP2014068308A (en) * 2012-09-27 2014-04-17 Fujitsu Ten Ltd Image generation device, image display system, and image generation method
US9479740B2 (en) 2012-09-27 2016-10-25 Fujitsu Ten Limited Image generating apparatus
US10647255B2 (en) 2015-04-24 2020-05-12 Denso Ten Limited Image processing device, image processing method, and on-vehicle apparatus
JP2016208368A (en) * 2015-04-24 2016-12-08 富士通テン株式会社 Image processing device, image processing method, and on-vehicle device
JP2020074503A (en) * 2015-04-24 2020-05-14 株式会社デンソーテン Image processing device, image processing method and on-vehicle device
US10328856B2 (en) 2015-04-24 2019-06-25 Denso Ten Limited Image processing device, image processing method, and on-vehicle apparatus
JP2016042704A (en) * 2015-09-17 2016-03-31 富士通テン株式会社 Image display system, image processing device, and image display method
WO2017057006A1 (en) * 2015-09-30 2017-04-06 Aisin Seiki Co Ltd Periphery monitoring device
US10632915B2 (en) 2015-09-30 2020-04-28 Aisin Seiki Kabushiki Kaisha Surroundings monitoring apparatus
JP2017069739A (en) * 2015-09-30 2017-04-06 アイシン精機株式会社 Periphery monitoring device
JP2020135206A (en) * 2019-02-15 2020-08-31 Panasonic Intellectual Property Management Co Ltd Image processing device, on-vehicle camera system, and image processing method
JP2021046096A (en) * 2019-09-18 2021-03-25 Subaru Corp Vehicle exterior monitor device
JP7353110B2 2023-09-29 Subaru Corp External monitor device

Also Published As

Publication number Publication date
JP4569285B2 (en) 2010-10-27

Similar Documents

Publication Publication Date Title
JP4569285B2 (en) Image processing device
EP2437494B1 (en) Device for monitoring area around vehicle
JP5194679B2 (en) Vehicle periphery monitoring device and video display method
JP4934308B2 (en) Driving support system
US20190028651A1 (en) Imaging device, imaging system, and imaging method
JP5321711B2 (en) Vehicle periphery monitoring device and video display method
JP5003395B2 (en) Vehicle periphery image processing apparatus and vehicle periphery state presentation method
WO2015002031A1 (en) Video display system, video compositing device, and video compositing method
JP4902368B2 (en) Image processing apparatus and image processing method
JP2008077628A (en) Image processor and vehicle surrounding visual field support device and method
JP2010166196A (en) Vehicle periphery monitoring device
KR20190047027A (en) How to provide a rearview mirror view of the vehicle's surroundings in the vehicle
JP2008017311A (en) Display apparatus for vehicle and method for displaying circumference video image of vehicle
JP2012257107A (en) Image generating device
CN110651295A (en) Image processing apparatus, image processing method, and program
JP5168186B2 (en) Image processing device
JP2006254318A (en) Vehicle-mounted camera, vehicle-mounted monitor and forward road area imaging method
JP2008034964A (en) Image display apparatus
JP2007266703A5 (en)
JP2007266703A (en) Display control apparatus
JP4945315B2 (en) Driving support system and vehicle
CN115883985A (en) Image processing system, moving object, image processing method, and storage medium
JP6274936B2 (en) Driving assistance device
JP2006224927A (en) Device for visual recognition around vehicle
JP6252756B2 (en) Image processing apparatus, driving support apparatus, navigation apparatus, and camera apparatus

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20071029

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20091126

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100209

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100402

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100518

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100624

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100713

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100726

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130820

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Ref document number: 4569285

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140820

Year of fee payment: 4