WO2007145257A1 - Display device, display method, display program and computer readable recording medium - Google Patents


Info

Publication number
WO2007145257A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
display
peripheral
virtual
image
Prior art date
Application number
PCT/JP2007/061926
Other languages
French (fr)
Japanese (ja)
Inventor
Kenichiro Yano
Tatsuru Komori
Takeshi Sato
Original Assignee
Pioneer Corporation
Priority date
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Publication of WO2007145257A1

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems
    • G08G1/161: Decentralised systems, e.g. inter-vehicle communication
    • G08G1/163: Decentralised systems, e.g. inter-vehicle communication involving continuous checking
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • Display device, display method, display program, and computer-readable recording medium
  • The present invention relates to a display device that displays video, a display method, a display program, and a recording medium that can be read by a computer.
  • a display device that displays video
  • a display method that displays video
  • a display program that controls the display
  • a recording medium that can be read by a computer.
  • Use of the present invention is not limited to the display device, display method, display program, and computer-readable recording medium described above.
  • Conventionally, an increasing number of vehicles have been equipped with an in-vehicle monitor device that displays various video, such as television broadcast images from received television broadcast radio waves and navigation images from a navigation device.
  • Furthermore, some vehicles equipped with an in-vehicle monitor device also include a monitor camera that captures video around the vehicle, and they display the surrounding video captured by the monitor camera on the in-vehicle monitor device.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2004-291867
  • In the above prior art, however, the driver sets the parking space by operating an operation switch, so setting the parking space while parking distracts the driver's attention from the environment around the vehicle and hinders safe driving.
  • On the other hand, when the driver parks by viewing the video around the vehicle without setting a parking space, large distortion in the video can cause the driver to misjudge the environment around the vehicle.
  • A display device according to the invention of claim 1 includes: imaging means for capturing a peripheral video around a moving body; conversion means for converting the peripheral video captured by the imaging means into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; detection means for detecting, by analyzing the peripheral video or the virtual video, the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video; determination means for determining, based on the positional relationship of the predetermined feature detected by the detection means, whether to switch the displayed video from the peripheral video to the virtual video; switching means for switching the displayed video from the peripheral video to the virtual video when the determination means determines that it should be switched; and display means for displaying the displayed video switched by the switching means.
  • A display device according to the invention of claim 5 includes: imaging means for capturing a peripheral video around a moving body; detection means for detecting, by analyzing the peripheral video, the positional relationship between the moving body and a predetermined feature present in the peripheral video; conversion means for converting, based on the result detected by the detection means, the peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; and display means for displaying the virtual video converted by the conversion means.
  • A display method according to the invention of claim 6 includes: an imaging step of capturing a peripheral video around a moving body; a conversion step of converting the captured peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; a detection step of detecting, by analyzing the peripheral video, the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video; a determination step of determining, based on the positional relationship of the predetermined feature detected in the detection step, whether to switch the displayed video from the peripheral video to the virtual video; a switching step of switching the displayed video from the peripheral video to the virtual video when it is determined that it should be switched; and a display step of displaying the switched video.
  • A display method according to the invention of claim 7 includes: an imaging step of capturing a peripheral video around a moving body; a detection step of detecting, by analyzing the peripheral video, the positional relationship between the moving body and a predetermined feature present in the peripheral video; a conversion step of converting, based on the result detected in the detection step, the peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; and a display step of displaying the virtual video converted in the conversion step.
  • a display program according to the invention of claim 8 causes a computer to execute the display method of claim 6 or 7.
  • a computer-readable recording medium records the display program according to claim 8.
  • FIG. 1 is a block diagram showing an example of the functional configuration of a display device according to this embodiment.
  • FIG. 2 is a flowchart showing the contents of the processing of the display device according to this embodiment.
  • FIG. 3 is a block diagram showing an example of the hardware configuration of a navigation device according to this example.
  • FIG. 4 is an explanatory diagram showing an overview of parking of a vehicle according to this example.
  • FIG. 5 is a flowchart showing the contents of the processing in the navigation device according to this example.
  • FIG. 1 is a block diagram showing an example of the functional configuration of the display device according to this embodiment.
  • In FIG. 1, the display device 100 includes an imaging unit 101, a conversion unit 102, a detection unit 103, a determination unit 104, a switching unit 105, and a display unit 106.
  • The imaging unit 101 captures a peripheral video around the moving body.
  • The peripheral video is, for example, video that allows safety to be confirmed when the moving body moves: when the moving body moves backward, it may be video showing features, passersby, and other moving bodies behind it, or video showing the space in which the moving body will stop.
  • The conversion unit 102 converts the peripheral video captured by the imaging unit 101 into a corrected peripheral video in which distortion due to optical characteristics has been corrected, and also converts the peripheral video into a virtual video as if captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured.
  • The virtual video is, for example, video from a different viewpoint, as if looking down on the ground from directly above or obliquely above, obtained by viewpoint conversion of the peripheral video captured by the imaging unit 101 so that it appears to come from a virtually installed camera. The corrected peripheral video may be converted into the virtual video.
  • Specifically, for example, the conversion unit 102 may be configured to project the peripheral video captured by the imaging unit 101 onto a virtual screen and to image the projected video with a virtual camera placed at the virtual viewpoint, thereby converting the peripheral video into the virtual video (a sketch of one such conversion follows below).
  • The conversion unit 102 may also convert the peripheral video captured by the imaging unit 101 into virtual videos captured from a plurality of virtual viewpoints different from the viewpoint from which the peripheral video was captured.
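  • As an illustration only, the following is a minimal sketch of how such a conversion could be realized in software, assuming OpenCV, a planar ground surface behind the vehicle, and a known camera calibration; the numeric calibration values, point correspondences, and function names are hypothetical and are not taken from the patent.

```python
# Sketch of the conversion unit 102: correct lens distortion, then re-project
# the rear-camera frame onto a virtual, top-down viewpoint (the "virtual
# screen" / virtual camera described above). All numeric values are
# illustrative placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 640.0],              # camera matrix (assumed)
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
DIST = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])   # wide-angle distortion coefficients (assumed)

# Four ground-plane points in the corrected image and where they should land
# in the bird's-eye image, e.g. corners of a known rectangle on the ground.
SRC_PTS = np.float32([[420, 700], [860, 700], [1000, 400], [280, 400]])
DST_PTS = np.float32([[300, 700], [500, 700], [500, 100], [300, 100]])
H = cv2.getPerspectiveTransform(SRC_PTS, DST_PTS)  # homography to the virtual viewpoint

def correct_distortion(frame: np.ndarray) -> np.ndarray:
    """Corrected peripheral video: remove optical (lens) distortion."""
    return cv2.undistort(frame, K, DIST)

def to_virtual_view(corrected: np.ndarray, size=(800, 800)) -> np.ndarray:
    """Virtual video: the corrected frame as if seen from a camera looking
    straight down at the ground (inverse perspective mapping)."""
    return cv2.warpPerspective(corrected, H, size)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)      # rear camera; the device index is an assumption
    ok, frame = cap.read()
    if ok:
        corrected = correct_distortion(frame)
        cv2.imwrite("corrected.png", corrected)
        cv2.imwrite("birds_eye.png", to_virtual_view(corrected))
```

  • Generating several homographies, one per virtual viewpoint, would correspond to the multiple-viewpoint variant mentioned above.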
  • The detection unit 103 analyzes the peripheral video (including the corrected peripheral video) to detect the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video.
  • Specifically, for example, the detection unit 103 may detect the distance between the moving body and a feature that could interfere with the moving body. More specifically, for example, the interfering feature may be an object that delimits the space in which the moving body will stop.
  • The detection unit 103 may also detect the positional relationship between the moving body and a predetermined feature present in the peripheral video by analyzing at least one of the peripheral video and the virtual video; if there are a plurality of virtual videos, at least one of the peripheral video and the plurality of virtual videos may be analyzed.
  • Based on the positional relationship of the predetermined feature detected by the detection unit 103, the determination unit 104 determines whether to switch the displayed video from the peripheral video to the virtual video. Specifically, for example, the determination unit 104 may decide to switch the displayed video when the distance between the moving body and the feature detected by the detection unit 103 is shorter than a predetermined distance.
  • When the determination unit 104 determines that the displayed video should be switched from the peripheral video to the virtual video, the switching unit 105 switches the displayed video from the peripheral video to the virtual video. The display unit 106 then displays the video switched by the switching unit 105.
  • When the detection unit 103 detects the positional relationship between the moving body and a predetermined feature present in the peripheral video by analyzing the peripheral video, the conversion unit 102 may, based on the detection result, convert the peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured. In that case, the display unit 106 may be configured to display the virtual video converted by the conversion unit 102.
  • FIG. 2 is a flowchart showing the contents of the processing of the display device according to this embodiment.
  • In the flowchart of FIG. 2, it is first determined whether the imaging unit 101 has started capturing the peripheral video of the moving body (step S201).
  • If capturing of the peripheral video has not started (step S201: No), step S201 is repeated. If it is determined that capturing has started (step S201: Yes), the conversion unit 102 corrects the distortion of the captured peripheral video to obtain the corrected peripheral video, and converts the captured peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured (step S202). The virtual video in step S202 may instead be generated from the corrected peripheral video.
  • The detection unit 103 then analyzes at least one of the corrected peripheral video obtained in step S202 and the converted virtual video to detect the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video (step S203).
  • Next, based on the positional relationship of the predetermined feature detected in step S203, the determination unit 104 determines whether to switch the displayed video from the peripheral video (including the corrected peripheral video) to the virtual video (step S204).
  • If it is determined in step S204 that the displayed video should be switched to the virtual video, the switching unit 105 switches the displayed video from the peripheral video to the virtual video (step S205). The display unit 106 then displays the video switched in step S205 (step S206), and the series of processes ends.
  • Although omitted from the flowchart of FIG. 2, the positional relationship between the moving body and a predetermined feature present in the peripheral video may instead be detected by analyzing the peripheral video (step S203) before the peripheral video is converted into the virtual video (step S202), and the peripheral video may then be converted, based on the detection result, into a virtual video captured from a virtual viewpoint different from the viewpoint from which it was captured.
  • As described above, according to this embodiment, the displayed video can be switched when video from a different viewpoint is needed, depending on the positional relationship between the moving body and a feature, so the environment around the moving body can be grasped appropriately. The user can also operate the moving body safely without performing any operation to switch the video. A sketch of this switching loop follows below.
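  • The following is a minimal sketch, for illustration only, of the per-frame switching loop of FIG. 1 and FIG. 2 (steps S201 to S206). The detection function is left as a stub because the patent does not fix a particular image-analysis method, and the threshold value and all names are assumptions.

```python
# Sketch of the switching logic: capture, convert, detect the distance to the
# nearest relevant feature, and switch the displayed video when that distance
# falls below a predetermined threshold.
from dataclasses import dataclass
from typing import Optional
import numpy as np

SWITCH_DISTANCE_M = 2.0   # predetermined distance (assumed; could be user-set)

@dataclass
class Detection:
    feature: str           # e.g. "white_line", "wheel_stop", "other_vehicle"
    distance_m: float      # estimated distance from the moving body

def detect_feature_distance(video: np.ndarray) -> Optional[Detection]:
    """Detection unit 103 (stub): analyze the peripheral or virtual video and
    return the nearest feature that could interfere, if any."""
    return None  # a real implementation would run image analysis here

def should_switch(det: Optional[Detection]) -> bool:
    """Determination unit 104 (step S204): switch when the detected feature is
    closer than the predetermined distance."""
    return det is not None and det.distance_m < SWITCH_DISTANCE_M

def display_loop(camera, convert, show):
    """One pass of steps S201-S206 per captured frame; camera is an iterable of
    frames, convert() returns (corrected, virtual), show() displays an image."""
    for frame in camera:                          # S201: imaging has started
        corrected, virtual = convert(frame)       # S202: correct + viewpoint-convert
        det = detect_feature_distance(corrected)  # S203: detect positional relation
        show(virtual if should_switch(det) else corrected)  # S204-S206
```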
  • In the following example, the display device of the present invention is implemented by a navigation device mounted on a moving body such as a vehicle (including four-wheeled and two-wheeled vehicles).
  • FIG. 3 is a block diagram showing an example of the hardware configuration of the navigation device according to this example.
  • In FIG. 3, the navigation device 300 is mounted on a moving body such as a vehicle and includes a CPU 301, a ROM 302, a RAM 303, a magnetic disk drive 304, a magnetic disk 305, an optical disk drive 306, an optical disk 307, an audio I/F (interface) 308, a speaker 309, an input device 310, a video I/F 311, a display 312, a camera 313, a communication I/F 314, a GPS unit 315, and various sensors 316. The components 301 to 316 are connected to one another by a bus 320.
  • the CPU 301 governs overall control of the navigation device 300.
  • the ROM 302 stores various programs such as a boot program, a current location calculation program, a route search program, a route guidance program, a voice generation program, an image recognition program, a display program, and a communication program.
  • the RAM 303 is used as a work area for the CPU 301.
  • the current location calculation program calculates the current location of the vehicle (the current location of the navigation device 300) based on output information from a GPS unit 315 and various sensors 316 described later.
  • the route search program searches for an optimum route from the departure point to the destination point using map information or the like recorded on a magnetic disk 305 to be described later.
  • the optimum route is the shortest (or fastest) route to the destination or the route that best meets the conditions specified by the user. Also, not only the destination point but also a route to a stop point or a rest point may be searched.
  • The guidance route found by executing the route search program is output to the audio I/F 308 and the video I/F 311 via the CPU 301.
  • The route guidance program generates real-time route guidance information based on the guidance route information found by executing the route search program, the current location information of the vehicle calculated by executing the current location calculation program, and the map information read from the magnetic disk 305 described later.
  • The route guidance information generated by executing the route guidance program is output to the audio I/F 308 and the video I/F 311 via the CPU 301.
  • The voice generation program generates tone and voice information corresponding to a pattern. That is, based on the route guidance information generated by executing the route guidance program, it sets a virtual sound source corresponding to a guidance point and generates voice guidance information. The generated voice guidance information is output to the audio I/F 308 via the CPU 301.
  • The image recognition program analyzes at least one of a camera image captured by the camera 313 described later and a viewpoint-converted image whose viewpoint has been converted by the display program described later, thereby recognizing the positional relationship between the vehicle and a feature that could interfere with the vehicle.
  • Specifically, for example, recognizing this positional relationship may include recognizing the type of the feature and the distance to the feature, and the display image used by the display program may be switched when the vehicle and the interfering feature are recognized to be in a predetermined positional relationship.
  • More specifically, for example, the vehicle may be recognized as approaching a parking space by analyzing the distances and angles to the white lines delimiting the parking space, the ends of the white lines, the wheel stops, and other vehicles.
  • The display program controls the video I/F 311 to switch the display image shown on the display 312. Specifically, for example, to display map information, it causes the video I/F 311 to determine the display format of the map information shown on the display 312 and displays the map information on the display 312 in the determined format.
  • To display a television broadcast, the display program may also cause the video I/F 311 to determine the display format according to the broadcast radio waves received by the communication I/F 314 and display the television image on the display 312 as the display image in the determined format.
  • Likewise, to display a camera image captured by the camera 313, the display program may cause the video I/F 311 to determine the display format for the display 312 and display the captured camera image on the display 312 in the determined format.
  • The display program may also switch the displayed image, as necessary, to a viewpoint-converted image obtained by performing viewpoint conversion on the camera image behind the vehicle.
  • Switching to the viewpoint-converted image may be performed, for example, when the image recognition program described above recognizes that the vehicle is approaching a parking space, or when the outputs of the various sensors 316 described later indicate that the vehicle has approached the parking space.
  • The camera image behind the vehicle captured by the camera 313 using a wide-angle lens is suitable for grasping the entire outline of the parking space.
  • A viewpoint-converted image, whose viewpoint has been converted to one looking down on the ground from directly above or obliquely above, is suitable, once the parking space has been approached, for parking accurately and for grasping possible interference with other features.
  • The other features may be, for example, other vehicles that are already parked, the white lines that delimit the parking space, or the wheel stops.
  • Switching to the viewpoint-converted image may, for example, be configured so that the traveling direction of the vehicle is detected by a gyro sensor or the like based on the outputs of the various sensors 316 and switching is performed according to the angle, or so that switching is performed according to the output of a sensor that detects the distance to a given feature. Switching may also be performed when, for example, image recognition determines that the white line and the vehicle are parallel, or that the wheel stop or the end of the white line appears in a predetermined part of the screen.
  • If the viewpoint conversion is configured to generate viewpoint-converted images from a plurality of viewpoints, such as an obliquely overhead viewpoint and a directly overhead viewpoint, the viewpoint-converted images may be switched in finer steps as parking progresses, allowing the driver to grasp the situation outside the vehicle in more detail.
  • The switching to the viewpoint-converted image can also be configured by the driver.
  • By matching the distance to the interfering feature to the driver's preference, switching to the viewpoint-converted image can be made suitable for each driver's driving skill. A sketch of one possible switching rule follows below.
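  • For illustration only, the following is a minimal sketch of a view-selection rule based on the quantities mentioned above (the angle of the white line relative to the vehicle and the distance to the wheel stop or white-line end). The thresholds, the three-stage selection, and the driver-configurable switching distance are assumptions, not values fixed by the patent.

```python
# Sketch of a switching rule: choose the displayed view from the white-line
# angle relative to the vehicle and the distance to the wheel stop.
def choose_view(white_line_angle_deg: float,
                distance_to_stop_m: float,
                switch_distance_m: float = 2.5,   # driver-configurable (assumed)
                parallel_tol_deg: float = 10.0) -> str:
    """Return which image to display: the wide-angle camera image, an
    obliquely-overhead virtual view, or a directly-overhead virtual view."""
    parallel = abs(white_line_angle_deg) < parallel_tol_deg
    if distance_to_stop_m > switch_distance_m:
        return "camera"        # far from the space: the whole outline is useful
    if parallel and distance_to_stop_m < 0.5 * switch_distance_m:
        return "top_down"      # final alignment: look straight down
    return "oblique"           # approach stage: oblique overhead view

# Example: almost parallel to the white line, 1.0 m from the wheel stop
print(choose_view(white_line_angle_deg=4.0, distance_to_stop_m=1.0))  # -> "top_down"
```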
  • the magnetic disk drive 304 controls reading and writing of data with respect to the magnetic disk 305 according to the control of the CPU 301.
  • the magnetic disk 305 records data written under the control of the magnetic disk drive 304.
  • As the magnetic disk 305, for example, an HD (hard disk) or an FD (flexible disk) can be used.
  • map information recorded on the magnetic disk 305 is map information used for route search and route guidance.
  • The map information has background data representing features such as buildings, rivers, and the ground surface, and road shape data representing the shape of roads, and is drawn in 2D or 3D on the display screen of the display 312.
  • the map information is recorded on the magnetic disk 305.
  • the map information may be recorded on an optical disk 307 described later.
  • The map information need not be provided integrally with the hardware of the navigation device 300 and may be recorded outside the navigation device 300.
  • In that case, the navigation device 300 acquires the map information via a network through, for example, the communication I/F 314.
  • the acquired map information is stored in the RAM 303 or the like.
  • the optical disk drive 306 controls data reading / writing to the optical disk 307 according to the control of the CPU 301.
  • the optical disk 307 is a detachable recording medium from which data is read according to the control of the optical disk drive 306.
  • a writable recording medium can be used as the optical disk 307.
  • Besides the optical disk 307, the removable recording medium may be an MO, a memory card, or the like.
  • The audio I/F 308 is connected to a speaker 309 for audio output. Audio is output from the speaker 309. A plurality of speakers 309 may be installed in the vehicle.
  • Examples of the input device 310 include a remote controller having a plurality of keys for inputting characters, numerical values, various instructions, and the like, a keyboard, a mouse, and a touch panel.
  • The video I/F 311 is connected to the display 312 and the camera 313.
  • The video I/F 311 includes, for example, a graphic controller that controls the display 312 as a whole, buffer memory such as VRAM (Video RAM) that temporarily stores image information that can be displayed immediately, and a control IC that controls display on the display 312 based on image data output from the graphic controller.
  • the display 312 displays icons, cursors, menus, windows, or various data such as characters and images.
  • As the display 312, for example, a CRT, a TFT liquid crystal display, or a plasma display can be adopted.
  • a plurality of displays 312 may be provided in the vehicle, for example, for the driver and for a passenger seated in the rear seat.
  • the camera 313 images the outside of the vehicle as a camera image.
  • The camera 313 may be attached to, for example, the front, the rear, or the ceiling of the vehicle, and may be movable. For example, when the camera 313 captures a camera image behind the vehicle, safety behind the vehicle can be confirmed.
  • The camera image captured by the camera 313 may be displayed on the display 312 under the control of the video I/F 311.
  • The communication I/F 314 is connected to a network via radio and functions as an interface between the navigation device 300 and the CPU 301.
  • The communication I/F 314 is also connected to a communication network such as the Internet via wireless communication and functions as an interface between the communication network and the CPU 301.
  • Communication networks include LANs, WANs, public line networks, mobile phone networks, and the like.
  • The communication I/F 314 is composed of, for example, an FM tuner, a VICS (Vehicle Information and Communication System)/beacon receiver, a wireless navigation device, or another navigation device, and acquires road traffic information such as traffic regulations. VICS is a registered trademark.
  • the GPS unit 315 receives radio waves from GPS satellites and outputs information indicating the current location of the vehicle.
  • the output information of the GPS unit 315 is used when the CPU 301 calculates the current location of the vehicle together with output values of various sensors 316 described later.
  • The information indicating the current location is information that identifies one point on the map information, such as latitude, longitude, and altitude.
  • Various sensors 316 include a vehicle speed sensor, an acceleration sensor, an angular velocity sensor, and the like, and output information that can determine the position and behavior of the vehicle.
  • the output values of the various sensors 316 are used for the calculation of the current position of the vehicle by the CPU 301 and the measurement of the change in speed and direction.
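  • For illustration only, the following is a minimal dead-reckoning sketch of how vehicle-speed and angular-velocity outputs like those of the various sensors 316 could be used to track the change in position and direction between GPS fixes; the patent does not specify the actual algorithm used by the CPU 301.

```python
# Sketch of dead reckoning from speed and yaw-rate samples.
import math

def dead_reckon(x: float, y: float, heading_rad: float,
                speed_mps: float, yaw_rate_rps: float, dt: float):
    """Advance the estimated pose (x, y, heading) by one sensor sample of duration dt."""
    heading_rad += yaw_rate_rps * dt
    x += speed_mps * math.cos(heading_rad) * dt
    y += speed_mps * math.sin(heading_rad) * dt
    return x, y, heading_rad

# Example: about 10 km/h straight ahead for one 100 ms sample
print(dead_reckon(0.0, 0.0, 0.0, speed_mps=2.78, yaw_rate_rps=0.0, dt=0.1))
```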
  • The imaging unit 101 shown in FIG. 1 is realized by the camera 313; the conversion unit 102, the detection unit 103, and the determination unit 104 are realized by the CPU 301 and the various sensors 316; the switching unit 105 is realized by the CPU 301 and the video I/F 311; and the display unit 106 is realized by the display 312.
  • FIG. 4 is an explanatory diagram showing an overview of parking of a vehicle according to this example.
  • The parking lot 400 includes a plurality of parking spaces 410, white lines 411 that separate the parking spaces 410, wheel stops 412, and the like.
  • The vehicle 401 is equipped with the navigation device 300 shown in FIG. 3, and it moves continuously along the illustrated arrows until it reaches the parked state of the vehicle 402 indicated by the broken line.
  • a camera image behind the vehicle 401 captured by the camera 313 is displayed.
  • This camera image is an image for grasping the overall outline of the parking space 410, for example.
  • The camera image may be an image in which distortion due to the optical characteristics of the camera has been corrected.
  • When the positional relationship between the vehicle 401 and the white line 411 or the wheel stop 412 reaches a predetermined positional relationship, that is, when the vehicle 401 approaches the parking space 410 and nears the parked state of the vehicle 402 indicated by the broken line, the display image is switched to a viewpoint-converted image based on a viewpoint looking down from directly above.
  • The viewpoint-converted image is, for example, a virtual image that appears to have been captured from a virtual viewpoint different from the viewpoint of the camera 313 that captured the camera image, that is, an image as seen from a virtually installed camera whose viewpoint differs from that of the camera 313.
  • Specifically, for example, the camera image is projected onto a virtual screen and the projected image is imaged by a virtual camera serving as the virtual viewpoint, thereby converting the camera image into the viewpoint-converted image.
  • The viewpoint conversion processing may be performed on all camera images captured by the camera 313, or it may be performed only when the display is switched as described above.
  • FIG. 5 is a flowchart showing the contents of the processing in the navigation device according to this example. In the flowchart of FIG. 5, it is first determined whether capturing of the camera image by the camera 313 has started (step S501). The camera image may be captured, for example, when the vehicle 401 starts to reverse for parking.
  • Instead of determining in step S501 whether capturing of the camera image has started, it may be determined whether the displayed image needs to be switched to the camera image.
  • If capturing of the camera image has not started (step S501: No), step S501 is repeated. If it is determined that capturing has started (step S501: Yes), processing for correcting distortion due to the optical characteristics of the captured camera image is performed, and the corrected camera image is displayed on the display 312 (step S502).
  • The CPU 301 also generates a viewpoint-converted image from the camera image captured in step S501 (step S503). Then, feature points in the image are extracted from at least one of the corrected camera image and the viewpoint-converted image (step S504).
  • The feature points in the image may be, for example, the white line 411 or the wheel stop 412 delimiting the parking space 410 in which the vehicle is parking, or a potentially interfering feature such as another vehicle.
  • Next, it is determined whether the positional relationship between the vehicle and the interfering feature is a predetermined positional relationship (step S505).
  • If the positional relationship between the vehicle and the interfering feature is not the predetermined positional relationship (step S505: No), the process returns to step S502 and is repeated.
  • If the positional relationship between the vehicle and the interfering feature is the predetermined positional relationship (step S505: Yes), the CPU 301 switches the display image to the viewpoint-converted image generated in step S503 (step S506).
  • The viewpoint-converted image switched to in step S506 is then displayed on the display 312 (step S507), and the series of processing ends.
  • In addition, various notifications regarding parking may be given. Specifically, for example, when there is no wheel stop, notifying the driver that there is no wheel stop can keep the vehicle from being backed up too far. If the vehicle is not parallel to the white line, notifying the driver of this can encourage proper parking.
  • These notifications may be displayed as text information on the display, or the white lines that are the subject of the notification may be highlighted. A sketch of this notification logic follows below.
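  • For illustration only, the following is a minimal sketch of the notification logic described above. The detection inputs, the tolerance value, and the message strings are assumptions; the patent leaves the concrete checks and wording open.

```python
# Sketch of the parking notifications: warn when no wheel stop is detected and
# prompt realignment when the vehicle is not parallel to the white line.
from typing import List, Optional

def parking_notifications(wheel_stop_detected: bool,
                          white_line_angle_deg: Optional[float],
                          parallel_tol_deg: float = 10.0) -> List[str]:
    messages = []
    if not wheel_stop_detected:
        messages.append("No wheel stop detected: take care when reversing.")
    if white_line_angle_deg is not None and abs(white_line_angle_deg) > parallel_tol_deg:
        messages.append("Vehicle is not parallel to the white line: adjust the approach.")
    return messages

# Example: no wheel stop detected, 15 degrees off the white line
for msg in parking_notifications(False, 15.0):
    print(msg)  # each message would be shown as text on the display,
                # or the relevant white line could be highlighted instead
```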
  • In this example, the display image is switched from the camera image to the viewpoint-converted image, but both the camera image and the viewpoint-converted image may be displayed.
  • In that case, each image becomes smaller, so instead of dividing the display screen equally, the camera image may be displayed at a larger size when the camera image is needed and the viewpoint-converted image may be displayed at a larger size when the viewpoint-converted image is needed.
  • As described above, with the navigation device of this example, the display image can be automatically switched from the camera image to the viewpoint-converted image, so the driver can accurately grasp the situation around the vehicle.
  • the driver can accurately grasp the situation outside the vehicle and park appropriately.
  • the driver can concentrate on driving without switching the display image during driving and can achieve safe driving.
  • Furthermore, by adjusting the distance to the interfering feature to the driver's preference, switching suitable for each driver's driving skill can be achieved.
  • the display method described in the present embodiment can be realized by executing a prepared program on a computer such as a personal computer or a workstation.
  • This program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, and a DVD, and is executed by being read by the computer.
  • the program may be a transmission medium that can be distributed via a network such as the Internet.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Navigation (AREA)

Abstract

An imaging section (101) captures peripheral images around a mobile object; a converting section (102) converts the peripheral images captured by the imaging section (101) into virtual images captured from virtual viewpoints different from the viewpoints from which the peripheral images were captured; and a detecting section (103) detects the positional relationship between the mobile object and other objects present in the peripheral images by analyzing the peripheral images or the virtual images. A judging section (104) then judges whether to switch the displayed image from the peripheral image to the virtual image, based on the positional relationship between the mobile object and a prescribed object detected by the detecting section (103); a switching section (105) switches the displayed image from the peripheral image to the virtual image when the judging section (104) judges that it should be switched; and a displaying section (106) displays the image switched by the switching section (105).

Description

Specification

Display device, display method, display program, and computer-readable recording medium

Technical field

[0001] The present invention relates to a display device that displays video, a display method, a display program, and a recording medium that can be read by a computer. However, use of the present invention is not limited to the display device, display method, display program, and computer-readable recording medium described above.

Background art

[0002] Conventionally, an increasing number of vehicles have been equipped with an in-vehicle monitor device that displays various video, such as television broadcast images from received television broadcast radio waves and navigation images from a navigation device. Furthermore, some vehicles equipped with an in-vehicle monitor device also include a monitor camera that captures video around the vehicle, and they display the surrounding video captured by the monitor camera on the in-vehicle monitor device.

[0003] In recent years, it has been proposed to use the video around the vehicle displayed by the in-vehicle monitor device to set a parking space when the vehicle is parked and to move the vehicle along a route to the parking space (see, for example, Patent Document 1 below).

[0004] Patent Document 1: Japanese Patent Application Laid-Open No. 2004-291867

Disclosure of the invention

Problems to be solved by the invention

[0005] In the above prior art, however, the driver sets the parking space by operating an operation switch, so setting the parking space while parking distracts the driver's attention from the environment around the vehicle and hinders safe driving. On the other hand, when the driver parks by viewing the video around the vehicle without setting a parking space, large distortion in the video can cause the driver to misjudge the environment around the vehicle.
Means for solving the problem

[0006] In order to solve the above problems and achieve the object, a display device according to the invention of claim 1 includes: imaging means for capturing a peripheral video around a moving body; conversion means for converting the peripheral video captured by the imaging means into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; detection means for detecting, by analyzing the peripheral video or the virtual video, the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video; determination means for determining, based on the positional relationship of the predetermined feature detected by the detection means, whether to switch the displayed video from the peripheral video to the virtual video; switching means for switching the displayed video from the peripheral video to the virtual video when the determination means determines that it should be switched; and display means for displaying the displayed video switched by the switching means.

[0007] A display device according to the invention of claim 5 includes: imaging means for capturing a peripheral video around a moving body; detection means for detecting, by analyzing the peripheral video, the positional relationship between the moving body and a predetermined feature present in the peripheral video; conversion means for converting, based on the result detected by the detection means, the peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; and display means for displaying the virtual video converted by the conversion means.

[0008] A display method according to the invention of claim 6 includes: an imaging step of capturing a peripheral video around a moving body; a conversion step of converting the captured peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; a detection step of detecting, by analyzing the peripheral video, the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video; a determination step of determining, based on the positional relationship of the predetermined feature detected in the detection step, whether to switch the displayed video from the peripheral video to the virtual video; a switching step of switching the displayed video from the peripheral video to the virtual video when it is determined that it should be switched; and a display step of displaying the switched video.

[0009] A display method according to the invention of claim 7 includes: an imaging step of capturing a peripheral video around a moving body; a detection step of detecting, by analyzing the peripheral video, the positional relationship between the moving body and a predetermined feature present in the peripheral video; a conversion step of converting, based on the result detected in the detection step, the peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; and a display step of displaying the virtual video converted in the conversion step.

[0010] A display program according to the invention of claim 8 causes a computer to execute the display method of claim 6 or 7.

[0011] A computer-readable recording medium according to the invention of claim 9 records the display program according to claim 8.
Brief description of drawings

[0012] FIG. 1 is a block diagram showing an example of the functional configuration of a display device according to this embodiment.

FIG. 2 is a flowchart showing the contents of the processing of the display device according to this embodiment.

FIG. 3 is a block diagram showing an example of the hardware configuration of a navigation device according to this example.

FIG. 4 is an explanatory diagram showing an overview of parking of a vehicle according to this example.

FIG. 5 is a flowchart showing the contents of the processing in the navigation device according to this example.
Explanation of symbols

[0013] 100 Display device
101 Imaging unit
102 Conversion unit
103 Detection unit
104 Determination unit
105 Switching unit
106 Display unit

BEST MODE FOR CARRYING OUT THE INVENTION
[0014] Preferred embodiments of a display device, a display method, a display program, and a computer-readable recording medium according to the present invention will be described below in detail with reference to the accompanying drawings.

(Embodiment)

(Functional configuration of the display device)

The functional configuration of the display device according to this embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing an example of the functional configuration of the display device according to this embodiment.

[0016] In FIG. 1, the display device 100 includes an imaging unit 101, a conversion unit 102, a detection unit 103, a determination unit 104, a switching unit 105, and a display unit 106.

[0017] The imaging unit 101 captures a peripheral video around the moving body. The peripheral video is, for example, video that allows safety to be confirmed when the moving body moves: when the moving body moves backward, it may be video showing features, passersby, and other moving bodies behind it, or video showing the space in which the moving body will stop.

[0018] The conversion unit 102 converts the peripheral video captured by the imaging unit 101 into a corrected peripheral video in which distortion due to optical characteristics has been corrected, and also converts the peripheral video into a virtual video as if captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured. The virtual video is, for example, video from a different viewpoint, as if looking down on the ground from directly above or obliquely above, obtained by viewpoint conversion of the peripheral video captured by the imaging unit 101 so that it appears to come from a virtually installed camera. The corrected peripheral video may be converted into the virtual video.

[0019] Specifically, for example, the conversion unit 102 may be configured to project the peripheral video captured by the imaging unit 101 onto a virtual screen and image the projected video with a virtual camera placed at the virtual viewpoint, thereby converting the peripheral video into the virtual video. The conversion unit 102 may also convert the peripheral video captured by the imaging unit 101 into virtual videos captured from a plurality of virtual viewpoints different from the viewpoint from which the peripheral video was captured.

[0020] The detection unit 103 analyzes the peripheral video (including the corrected peripheral video) to detect the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video. Specifically, for example, the detection unit 103 may detect the distance between the moving body and a feature that could interfere with the moving body. More specifically, for example, the interfering feature may be an object that delimits the space in which the moving body will stop.

[0021] The detection unit 103 may also detect the positional relationship between the moving body and a predetermined feature present in the peripheral video by analyzing at least one of the peripheral video and the virtual video; if there are a plurality of virtual videos, at least one of the peripheral video and the plurality of virtual videos may be analyzed.

[0022] Based on the positional relationship of the predetermined feature detected by the detection unit 103, the determination unit 104 determines whether to switch the displayed video from the peripheral video to the virtual video. Specifically, for example, the determination unit 104 may decide to switch the displayed video when the distance between the moving body and the feature detected by the detection unit 103 is shorter than a predetermined distance.

[0023] When the determination unit 104 determines that the displayed video should be switched from the peripheral video to the virtual video, the switching unit 105 switches the displayed video from the peripheral video to the virtual video. The display unit 106 then displays the video switched by the switching unit 105.

[0024] When the detection unit 103 detects the positional relationship between the moving body and a predetermined feature present in the peripheral video by analyzing the peripheral video, the conversion unit 102 may, based on the detection result, convert the peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured. In that case, the display unit 106 may be configured to display the virtual video converted by the conversion unit 102.

[0025] (Processing content of the display device 100)

Next, the contents of the processing of the display device 100 according to this embodiment will be described with reference to FIG. 2. FIG. 2 is a flowchart showing the contents of the processing of the display device according to this embodiment. In the flowchart of FIG. 2, it is first determined whether the imaging unit 101 has started capturing the peripheral video of the moving body (step S201).

[0026] If capturing of the peripheral video has not started (step S201: No), step S201 is repeated. If it is determined that capturing has started (step S201: Yes), the conversion unit 102 corrects the distortion of the captured peripheral video to obtain the corrected peripheral video, and converts the captured peripheral video into a virtual video captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured (step S202). The virtual video in step S202 may instead be generated from the corrected peripheral video.

[0027] The detection unit 103 then analyzes at least one of the corrected peripheral video obtained in step S202 and the converted virtual video to detect the positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video (step S203).

[0028] Next, based on the positional relationship of the predetermined feature detected in step S203, the determination unit 104 determines whether to switch the displayed video from the peripheral video (including the corrected peripheral video) to the virtual video (step S204).

[0029] If it is determined in step S204 that the displayed video should be switched from the peripheral video to the virtual video, the switching unit 105 switches the displayed video from the peripheral video to the virtual video (step S205). The display unit 106 then displays the video switched in step S205 (step S206), and the series of processes ends.

[0030] Although omitted from the flowchart of FIG. 2, the positional relationship between the moving body and a predetermined feature present in the peripheral video may instead be detected by analyzing the peripheral video (step S203) before the peripheral video is converted into the virtual video (step S202), and the peripheral video may then be converted, based on the detection result, into a virtual video captured from a virtual viewpoint different from the viewpoint from which it was captured.

[0031] As described above, according to this embodiment, the displayed video can be switched when video from a different viewpoint is needed, depending on the positional relationship between the moving body and a feature, so the environment around the moving body can be grasped appropriately. The user can also operate the moving body safely without performing any operation to switch the video.
実施例  Example
[0032] 以下に、本発明の実施例について説明する。本実施例では、たとえば、車両(四輪 車、二輪車を含む)などの移動体に搭載されるナビゲーシヨン装置によって、本発明 の表示装置を実施した場合の一例について説明する。 [0032] Examples of the present invention will be described below. In this embodiment, for example, a navigation device mounted on a moving body such as a vehicle (including a four-wheeled vehicle and a two-wheeled vehicle) is used to An example in the case of implementing this display device will be described.
[0033] (ナビゲーシヨン装置のハードウェア構成)  [0033] (Hardware configuration of navigation device)
まず、図 3を用いて、本実施例に力かるナビゲーシヨン装置のハードウェア構成に ついて説明する。図 3は、本実施例に力かるナビゲーシヨン装置のハードウェア構成 の一例を示すブロック図である。  First, the hardware configuration of the navigation apparatus that is useful in this embodiment will be described with reference to FIG. FIG. 3 is a block diagram showing an example of a hardware configuration of a navigation apparatus that works on the present embodiment.
[0034] 図 3において、ナビゲーシヨン装置 300は、車両などの移動体に搭載されており、 C PU301と、 ROM302と、 RAM303と、磁気ディスクドライブ 304と、磁気ディスク 30 5と、光ディスクドライブ 306と、光ディスク 307と、音声 IZF (インターフェース) 308と 、スピーカ 309と、入力デバイス 310と、映像 IZF311と、ディスプレイ 312と、カメラ 3 13と、通信 I/F314と、 GPSユニット 315と、各種センサ 316と、を備えて ヽる。また、 各構成部 301〜316はバス 320によってそれぞれ接続されている。  In FIG. 3, a navigation device 300 is mounted on a moving body such as a vehicle, and includes a CPU 301, a ROM 302, a RAM 303, a magnetic disk drive 304, a magnetic disk 305, and an optical disk drive 306. , Optical disk 307, audio IZF (interface) 308, speaker 309, input device 310, video IZF 311, display 312, camera 3 13, communication I / F 314, GPS unit 315, and various sensors 316 , Prepare for. Each component 301 to 316 is connected by a bus 320.
[0035] まず、 CPU301は、ナビゲーシヨン装置 300の全体の制御を司る。 ROM302は、 ブートプログラム、現在地点算出プログラム、経路探索プログラム、経路誘導プロダラ ム、音声生成プログラム、画像認識プログラム、表示プログラム、通信プログラムなど の各種プログラムを記録している。また、 RAM303は、 CPU301のワークエリアとし て使用される。  First, the CPU 301 governs overall control of the navigation device 300. The ROM 302 stores various programs such as a boot program, a current location calculation program, a route search program, a route guidance program, a voice generation program, an image recognition program, a display program, and a communication program. The RAM 303 is used as a work area for the CPU 301.
[0036] 現在地点算出プログラムは、たとえば、後述する GPSユニット 315および各種セン サ 316の出力情報に基づいて、車両の現在地点(ナビゲーシヨン装置 300の現在地 点)を算出させる。  [0036] The current location calculation program, for example, calculates the current location of the vehicle (the current location of the navigation device 300) based on output information from a GPS unit 315 and various sensors 316 described later.
[0037] 経路探索プログラムは、後述する磁気ディスク 305に記録されている地図情報など を利用して、出発地点から目的地点までの最適な経路を探索させる。ここで、最適な 経路とは、 目的地点までの最短 (あるいは最速)経路や利用者が指定した条件に最 も合致する経路などである。また、 目的地点のみならず、立ち寄り地点や休憩地点ま での経路を探索してもよい。経路探索プログラムを実行することによって探索された 誘導経路は、 CPU301を介して音声 IZF308や映像 IZF311へ出力される。  [0037] The route search program searches for an optimum route from the departure point to the destination point using map information or the like recorded on a magnetic disk 305 to be described later. Here, the optimum route is the shortest (or fastest) route to the destination or the route that best meets the conditions specified by the user. Also, not only the destination point but also a route to a stop point or a rest point may be searched. The guidance route searched by executing the route search program is output to the audio IZF 308 and the video IZF 311 via the CPU 301.
[0038] The route guidance program generates route guidance information in real time based on the guidance route information found by executing the route search program, the current position information of the vehicle calculated by executing the current-position calculation program, and the map information read from the magnetic disk 305 described later. The route guidance information generated by executing the route guidance program is output to the audio I/F 308 and the video I/F 311 via the CPU 301.
[0039] The voice generation program generates tone and voice information corresponding to patterns. That is, based on the route guidance information generated by executing the route guidance program, it sets a virtual sound source corresponding to a guidance point and generates voice guidance information. The generated voice guidance information is output to the audio I/F 308 via the CPU 301.
[0040] The image recognition program recognizes the positional relationship between the vehicle and a feature with which the vehicle could interfere by analyzing at least one of a camera image captured by the camera 313 described later and a viewpoint-converted image produced by the display program described later.
[0041] Specifically, for example, the recognition of the positional relationship between the vehicle and a feature with which the vehicle could interfere may be configured to recognize the type of the feature and the distance to the feature, and the display image used by the display program may be switched when the vehicle and such a feature are recognized to be in a predetermined positional relationship. More specifically, the positional relationship may be recognized by analyzing the white lines of a parking space, the ends of those white lines, the wheel stops, and the distances and angles to other vehicles, thereby recognizing that the vehicle is approaching the parking space.
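The paragraph above leaves the concrete recognition method open. Purely as an illustration, one conventional way to analyze white lines and their distance and angle in a rear-camera frame is sketched below with OpenCV; the thresholds and the pixel-to-metre scale are assumed values, not figures from the patent.

```python
import cv2
import numpy as np

def detect_white_lines(camera_image, metres_per_pixel=0.01):
    """Rough sketch: find white-line segments in a rear-camera frame and
    return (distance in metres to the nearest segment, its angle in degrees).
    Assumes the image has already been distortion-corrected."""
    gray = cv2.cvtColor(camera_image, cv2.COLOR_BGR2GRAY)
    # Keep only bright (white-ish) pixels, then extract edges.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(bright, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=10)
    if lines is None:
        return None
    frame_height = camera_image.shape[0]
    best = None
    for x1, y1, x2, y2 in lines[:, 0]:
        # The distance is approximated from how far the segment sits above the
        # bottom edge of the frame, which is closest to the vehicle's rear bumper.
        pixel_gap = frame_height - max(y1, y2)
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if best is None or pixel_gap < best[0]:
            best = (pixel_gap, angle)
    return best[0] * metres_per_pixel, best[1]
```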
[0042] The display program controls the video I/F 311 to switch the display image shown on the display 312. Specifically, for example, to display map information, it causes the video I/F 311 to determine the display format of the map information to be shown on the display 312 and then displays the map information on the display 312 in the determined format.
[0043] To display a television broadcast, the display program may cause the video I/F 311 to determine the display format according to the broadcast wave received by the communication I/F 314 and display the television image on the display 312 in the determined format.
[0044] To display a camera image captured by the camera 313, the display program may cause the video I/F 311 to determine the display format for the display 312 and display the captured camera image on the display 312 in the determined format. In addition, when the vehicle is being parked, the display program may, under the control of the CPU 301, switch the camera image of the area behind the vehicle to a viewpoint-converted image as needed and display it.
[0045] Switching to the viewpoint-converted image may be performed, for example, when the image recognition program described above recognizes that the vehicle is approaching a parking space, or when the vehicle is judged to have approached a parking space based on the output of the various sensors 316 described later.
[0046] The viewpoint-converted image is described in detail later with reference to FIGS. 4 and 5. Switching between the camera image and the viewpoint-converted image may exploit the fact that each of the two images is suited to a different stage of the maneuver.
[0047] Specifically, with regard to suitability during parking, the camera image of the area behind the vehicle captured by the camera 313 through a wide-angle lens is suited to grasping the overall layout of the parking space.
[0048] On the other hand, the viewpoint-converted image, which converts the view into one that appears to look down on the ground from directly above or obliquely above, is suited to parking precisely in the parking space and to checking for interference with other features once the vehicle has come close to the space. Here, the other features may be, for example, other vehicles that are already parked, the white lines that delimit the parking space, or wheel stops.
[0049] Switching to the viewpoint-converted image may also be performed, for example, by detecting the traveling direction of the vehicle with a gyro sensor among the various sensors 316 and switching according to that angle, or by switching according to the output of a sensor that detects the distance to a surrounding feature with which the vehicle could interfere. The switch may also be triggered by image recognition, for example when the white lines become parallel to the direction of the vehicle, or when a wheel stop or the end of a white line is recognized as lying within a predetermined part of the screen.
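As a hedged illustration of the triggers just listed, the switching decision could be expressed as a simple predicate; every threshold and parameter name here is a hypothetical choice rather than something the patent specifies.

```python
def should_switch_to_virtual_view(gyro_angle_deg, obstacle_distance_m,
                                  line_angle_deg=None, wheel_stop_y=None,
                                  frame_height=480):
    """Return True when any of the example triggers for showing the
    viewpoint-converted image fires."""
    # Trigger 1: the heading reported by the gyro sensor is nearly aligned
    # with the parking space.
    if abs(gyro_angle_deg) < 10.0:
        return True
    # Trigger 2: a distance sensor reports a nearby feature to avoid.
    if obstacle_distance_m is not None and obstacle_distance_m < 2.0:
        return True
    # Trigger 3: image recognition says the white line is parallel to the car.
    if line_angle_deg is not None and abs(line_angle_deg) < 5.0:
        return True
    # Trigger 4: the wheel stop (or white-line end) has entered the lower
    # part of the frame, i.e. a predetermined screen region.
    if wheel_stop_y is not None and wheel_stop_y > 0.7 * frame_height:
        return True
    return False
```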
[0050] By switching the display image in accordance with the stage of parking in this way, the driver can accurately grasp the situation outside the vehicle and park appropriately. Moreover, because the optimum display image is selected automatically during the parking maneuver, the driver can concentrate on driving without performing any switching operation, which contributes to safe driving.
[0051] If the viewpoint conversion is configured to generate viewpoint-converted images from a plurality of viewpoints, for example an obliquely overhead viewpoint and a directly overhead viewpoint, the parking maneuver can be divided into finer stages and the viewpoint-converted image switched accordingly, so that the driver can grasp the situation outside the vehicle in greater detail.
[0052] Specifically, for example, while the feature with which the vehicle could interfere is still far away, a slightly oblique overhead view covering a wide area can be shown, and once the vehicle is close to the parking space, a view from directly above can be shown, so that the driver can accurately judge the distance to the parking space.
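A minimal sketch of that idea, assuming the distance to the interfering feature is already known and that the virtual camera is described only by its height and pitch (the numeric values are illustrative, not taken from the patent):

```python
def choose_virtual_viewpoint(distance_to_feature_m):
    """Pick a virtual camera pose for the viewpoint-converted image:
    far away -> lower, oblique view covering a wide area;
    close    -> high, straight-down view for precise distance judgement."""
    if distance_to_feature_m > 5.0:
        return {"height_m": 3.0, "pitch_deg": 45.0}   # oblique overhead view
    elif distance_to_feature_m > 2.0:
        return {"height_m": 4.0, "pitch_deg": 70.0}   # steeper intermediate view
    else:
        return {"height_m": 5.0, "pitch_deg": 90.0}   # directly overhead
```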
[0053] Switching to the viewpoint-converted image may also be made configurable by the driver. Specifically, for example, by letting the driver set the switching distance to the feature of interest according to personal preference, the switching can be adapted to each driver's driving skill.
[0054] The magnetic disk drive 304 controls the reading and writing of data to and from the magnetic disk 305 under the control of the CPU 301. The magnetic disk 305 records the data written under the control of the magnetic disk drive 304. As the magnetic disk 305, for example, an HD (hard disk) or an FD (flexible disk) can be used.
[0055] One example of the information recorded on the magnetic disk 305 is the map information used for route search and route guidance. The map information includes background data representing features such as buildings, rivers, and the ground surface, and road shape data representing the shapes of roads, and it is rendered in two or three dimensions on the screen of the display 312.
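For illustration only, the two kinds of map data mentioned above could be held in structures such as the following; the field names and types are assumptions made for this sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BackgroundFeature:
    kind: str                              # e.g. "building", "river", "ground"
    outline: List[Tuple[float, float]]     # polygon vertices (longitude, latitude)

@dataclass
class RoadSegment:
    shape: List[Tuple[float, float]]       # polyline of the road centreline
    width_m: float

@dataclass
class MapTile:
    background: List[BackgroundFeature]    # drawn behind everything else
    roads: List[RoadSegment]               # drawn on top, in 2D or 3D
```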
[0056] In this example the map information is recorded on the magnetic disk 305, but it may instead be recorded on the optical disk 307 described later. The map information is not limited to being provided integrally with the hardware of the navigation device 300; it may be provided outside the navigation device 300. In that case, the navigation device 300 acquires the map information over a network through, for example, the communication I/F 314. The acquired map information is stored in the RAM 303 or the like.
[0057] The optical disk drive 306 controls the reading and writing of data to and from the optical disk 307 under the control of the CPU 301. The optical disk 307 is a removable recording medium from which data is read under the control of the optical disk drive 306. A writable recording medium may also be used as the optical disk 307. Besides the optical disk 307, the removable recording medium may be an MO, a memory card, or the like.
[0058] The audio I/F 308 is connected to the speaker 309 for audio output, from which sound is output. A plurality of speakers 309 may be installed in the vehicle.
[0059] Examples of the input device 310 include a remote controller provided with a plurality of keys for entering characters, numerical values, and various instructions, a keyboard, a mouse, and a touch panel.
[0060] The video I/F 311 is connected to the display 312 and the camera 313. Specifically, the video I/F 311 comprises, for example, a graphics controller that controls the entire display 312, a buffer memory such as a VRAM (Video RAM) that temporarily stores image information that can be displayed immediately, and a control IC that controls the display 312 based on image data output from the graphics controller.
[0061] The display 312 shows icons, a cursor, menus, windows, and various data such as characters and images. As the display 312, for example, a CRT, a TFT liquid crystal display, or a plasma display can be adopted. A plurality of displays 312 may be provided in the vehicle, for example one for the driver and one for passengers seated in the rear seats.
[0062] The camera 313 captures images of the outside of the vehicle. The camera 313 may be movable and may be attached, for example, to the front, the rear, or the roof of the vehicle. When the camera 313 captures an image of the area behind the vehicle, for example, safety behind the vehicle can be confirmed. The camera image captured by the camera 313 may be displayed on the display 312 under the control of the video I/F 311.
[0063] The communication I/F 314 is connected wirelessly to a network and functions as an interface between the navigation device 300 and the CPU 301. The communication I/F 314 is also connected wirelessly to a communication network such as the Internet and functions as an interface between that communication network and the CPU 301.
[0064] The communication network may be a LAN, a WAN, a public line network, a mobile phone network, or the like. Specifically, the communication I/F 314 comprises, for example, an FM tuner, a VICS (Vehicle Information and Communication System)/beacon receiver, a wireless navigation device, and other navigation devices, and acquires road traffic information, such as congestion and traffic regulations, distributed from the VICS center. VICS is a registered trademark.
[0065] The GPS unit 315 receives radio waves from GPS satellites and outputs information indicating the current position of the vehicle. The output of the GPS unit 315 is used, together with the output values of the various sensors 316 described later, when the CPU 301 calculates the current position of the vehicle. The information indicating the current position identifies a single point on the map information, for example by latitude, longitude, and altitude.
[0066] The various sensors 316 include a vehicle speed sensor, an acceleration sensor, an angular velocity sensor, and the like, and output information from which the position and behavior of the vehicle can be determined. The output values of the various sensors 316 are used by the CPU 301 to calculate the current position of the vehicle and to measure changes in speed and heading.
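As one possible illustration of how the CPU 301 could combine these sensor outputs between GPS fixes, a simple dead-reckoning update is sketched below; the time step and variable names are assumptions, and a real implementation would also fuse in GPS fixes to correct the accumulated drift.

```python
import math

def dead_reckoning_step(x, y, heading_rad, speed_mps, yaw_rate_rps, dt=0.1):
    """Advance an estimated position (x, y in metres, heading in radians)
    by one time step using the vehicle speed and angular velocity sensors."""
    heading_rad += yaw_rate_rps * dt
    x += speed_mps * math.cos(heading_rad) * dt
    y += speed_mps * math.sin(heading_rad) * dt
    return x, y, heading_rad
```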
[0067] Of the functional configuration of the display device 100 according to the embodiment, the imaging unit 101 is realized by the camera 313; the conversion unit 102, the detection unit 103, and the determination unit 104 by the CPU 301 and the various sensors 316; the switching unit 105 by the CPU 301 and the video I/F 311; and the display unit 106 by the display 312.
[0068] (Outline of parking)
Next, an outline of parking the vehicle according to this example will be described with reference to FIG. 4. FIG. 4 is an explanatory diagram showing an outline of parking the vehicle according to this example.
[0069] In FIG. 4, a parking lot 400 comprises a plurality of parking spaces 410, white lines 411 that delimit the individual parking spaces 410, and wheel stops 412. The vehicle 401 is equipped with the navigation device 300 shown in FIG. 3 and moves continuously along the illustrated arrow until it reaches the parked state indicated by the broken-line vehicle 402.
[0070] Here, the content shown on the display 312 when the vehicle 401 parks along the arrow will be described with reference to FIG. 4. First, when the vehicle 401 begins the parking maneuver, the display 312 shows the camera image of the area behind the vehicle 401 captured by the camera 313. This camera image is, for example, an image for grasping the overall layout of the parking space 410. The camera image may also be an image captured by the camera in which distortion due to its optical characteristics has been corrected.
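One common way to perform the optical-distortion correction mentioned above is shown below; the intrinsic matrix and distortion coefficients are placeholders that would normally come from calibrating the actual wide-angle rear camera.

```python
import cv2
import numpy as np

# Hypothetical calibration results for the rear wide-angle camera.
CAMERA_MATRIX = np.array([[400.0,   0.0, 320.0],
                          [  0.0, 400.0, 240.0],
                          [  0.0,   0.0,   1.0]])
DIST_COEFFS = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])   # strong barrel distortion

def correct_distortion(camera_image):
    """Undo lens distortion so that straight features such as white lines
    appear straight before any further analysis or viewpoint conversion."""
    return cv2.undistort(camera_image, CAMERA_MATRIX, DIST_COEFFS)
```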
[0071] Then, as image recognition determines that the positional relationship between the vehicle 401 and the white lines 411 or the wheel stops 412 has become the predetermined positional relationship, and the vehicle 401 approaches the parking space 410 toward the parked state indicated by the broken-line vehicle 402, the display image is switched to a viewpoint-converted image seen from a viewpoint directly above.
[0072] The viewpoint-converted image is a virtual image that appears to have been captured from a virtual viewpoint different from the viewpoint of the camera 313 that captured the camera image. Specifically, for example, the viewpoint-converted image is an image, produced from the camera image by a virtual camera whose viewpoint differs from that of the camera 313, that appears to look down on the ground from directly above or obliquely above.
[0073] The viewpoint conversion process turns the camera image into the viewpoint-converted image by, for example, projecting the camera image onto a virtual screen and capturing the projected image with a virtual camera serving as the virtual viewpoint. The viewpoint conversion process may be configured to process every camera image captured by the camera 313, or to run only when the display switching described above is required.
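The projection onto a virtual screen followed by recapture from a virtual camera is often implemented, for the ground plane, as a homography (inverse perspective mapping). The sketch below assumes that the image positions of four ground points and their desired positions in the top-down output are known from calibration; the coordinates used here are placeholders.

```python
import cv2
import numpy as np

# Pixel positions of four markers on the ground as seen by the rear camera
# (hypothetical calibration values) ...
SRC_POINTS = np.float32([[180, 300], [460, 300], [620, 470], [20, 470]])
# ... and where those same markers should land in the bird's-eye image.
DST_POINTS = np.float32([[120, 0], [520, 0], [520, 480], [120, 480]])

# Homography that maps the camera's view of the ground onto the virtual,
# straight-down viewpoint.
GROUND_HOMOGRAPHY = cv2.getPerspectiveTransform(SRC_POINTS, DST_POINTS)

def to_viewpoint_converted_image(camera_image, out_size=(640, 480)):
    """Warp the (distortion-corrected) camera image so that it looks as if
    it had been captured by a virtual camera directly above the ground."""
    return cv2.warpPerspective(camera_image, GROUND_HOMOGRAPHY, out_size)
```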
[0074] (Processing performed by the navigation device 300)
Next, the processing performed by the navigation device 300 according to this example will be described with reference to FIG. 5. FIG. 5 is a flowchart showing the processing performed by the navigation device according to this example. In the flowchart of FIG. 5, it is first determined whether the camera 313 has started capturing a camera image (step S501). Capturing of the camera image may be started, for example, when the vehicle 401 starts backing up in order to park. Alternatively, in a configuration in which camera images are captured at all times and the display image is switched from a television image or map information to the camera image when backing starts, step S501 may determine whether the display image needs to be switched to the camera image instead of determining whether capturing has started.
[0075] If it is determined in step S501 that capturing of a camera image has not started (step S501: No), the determination is repeated. If it is determined that capturing has started (step S501: Yes), the distortion due to the optical characteristics of the captured camera image is corrected and the corrected camera image is displayed on the display 312 (step S502).
[0076] Then, the CPU 301 generates a viewpoint-converted image from the camera image captured in step S501 (step S503). Next, feature points are extracted from at least one of the corrected camera image and the viewpoint-converted image (step S504). The feature points in the image may be, for example, the white lines 411 and wheel stops 412 that delimit the parking space 410, or features with which the vehicle could interfere, such as other vehicles.
[0077] Then, based on the feature points extracted in step S504, the CPU 301 determines whether the positional relationship between the vehicle and a feature with which the vehicle could interfere has become the predetermined positional relationship (step S505).
[0078] If, in step S505, the positional relationship between the vehicle and the feature has not become the predetermined positional relationship (step S505: No), the process returns to step S502 and is repeated.
[0079] If, in step S505, the positional relationship between the vehicle and the feature has become the predetermined positional relationship (step S505: Yes), the CPU 301 switches the display image to the viewpoint-converted image generated in step S503 (step S506).
[0080] Then, the viewpoint-converted image selected in step S506 is displayed on the display 312 (step S507), and the series of processing ends.
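Tying the steps of FIG. 5 together, a hedged sketch of the whole loop could look as follows. The helper functions are the illustrative ones sketched earlier in this description, and `camera.capture_started`, `camera.grab_frame`, and `display.show` stand in for hardware access that the patent does not specify; the switching thresholds are likewise assumptions.

```python
SWITCH_DISTANCE_M = 2.0          # hypothetical "predetermined positional relationship"
PARALLEL_TOLERANCE_DEG = 5.0

def parking_display_loop(camera, display):
    """Sketch of steps S501-S507: show the corrected rear-camera image while
    backing up, then switch to the viewpoint-converted image once the vehicle
    and the white line reach the predetermined positional relationship."""
    while not camera.capture_started():                       # S501
        pass
    while True:
        frame = correct_distortion(camera.grab_frame())       # S502
        display.show(frame)
        top_down = to_viewpoint_converted_image(frame)        # S503
        feature = detect_white_lines(frame)                   # S504
        if feature is None:
            continue                                          # S505: No
        distance_m, angle_deg = feature
        if distance_m < SWITCH_DISTANCE_M and abs(angle_deg) < PARALLEL_TOLERANCE_DEG:
            display.show(top_down)                            # S506 and S507
            break                                             # end of the sequence
```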
[0081] Although omitted from the flowchart of FIG. 5, various notifications concerning parking may also be given when switching to the viewpoint-converted image in step S506. Specifically, for example, when there is no wheel stop, notifying the driver of its absence can prevent the vehicle from backing up too far, and when the vehicle is not parallel to the white lines, notifying the driver of this can encourage proper parking. The notifications may be presented, for example, as text information on the display unit, or by highlighting the white lines or other objects to which the notification refers.
[0082] In this example, the display image is switched from the camera image to the viewpoint-converted image, but the camera image and the viewpoint-converted image may also be displayed together. In that case, because each display area becomes smaller, the screen need not be divided equally: the camera image may be shown larger when the camera image is needed, and the viewpoint-converted image may be shown larger when the viewpoint-converted image is needed.
[0083] As described above, according to this example, when the vehicle approaches a parking space while parking, the display image can be switched automatically from the camera image to the viewpoint-converted image, so the driver can accurately grasp the situation around the vehicle.
[0084] In other words, by switching the displayed image in accordance with the stage of parking, the driver can accurately grasp the situation outside the vehicle and park appropriately. In addition, because the optimum video is selected automatically during the parking maneuver, the driver can concentrate on driving, without switching the display image, and can drive safely.
[0085] Furthermore, if switching to the viewpoint-converted image can be configured by the driver, matching the switching distance to the feature of interest to the driver's preference makes it possible to adapt the switching to each driver's driving skill.
[0086] Moreover, if the device is configured to give the various parking notifications, it encourages the driver to park optimally, and when it notifies the driver of obstacles such as features, other vehicles, and passers-by, safety around the vehicle can be confirmed reliably, so safety is improved.
[0087] The display method described in the present embodiment can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation. The program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, or a DVD, and is executed by being read from the recording medium by the computer. The program may also be a transmission medium that can be distributed over a network such as the Internet.

Claims

Claims
[1] A display device comprising:
an imaging means that captures a peripheral video of the surroundings of a moving body;
a conversion means that converts the peripheral video captured by the imaging means into a virtual video as captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured;
a detection means that detects, by analyzing the peripheral video or the virtual video, a positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video;
a determination means that determines, based on the positional relationship of the predetermined feature detected by the detection means, whether to switch the displayed video from the peripheral video to the virtual video;
a switching means that switches the displayed video from the peripheral video to the virtual video when the determination means determines that the displayed video is to be switched from the peripheral video to the virtual video; and
a display means that displays the displayed video switched by the switching means.
[2] The display device according to claim 1, wherein
the conversion means converts the peripheral video captured by the imaging means into virtual videos as captured from a plurality of virtual viewpoints different from the viewpoint from which the peripheral video was captured,
the detection means detects the positional relationship with the predetermined feature present in the peripheral video by analyzing at least one of the peripheral video and the plurality of virtual videos, and
the determination means determines, based on the positional relationship of the predetermined feature detected by the detection means, whether to switch the displayed video from the peripheral video to any one of the plurality of virtual videos.
[3] The display device according to claim 1, wherein
the detection means detects at least a distance between the moving body and a feature with which the moving body could interfere, and
the determination means switches the displayed video when the distance detected by the detection means is shorter than a predetermined distance.
[4] The display device according to any one of claims 1 to 3, wherein the imaging means captures at least a video of the area behind the moving body.
[5] A display device comprising:
an imaging means that captures a peripheral video of the surroundings of a moving body;
a detection means that detects, by analyzing the peripheral video, a positional relationship between the moving body and a predetermined feature present in the peripheral video;
a conversion means that converts, based on the result detected by the detection means, the peripheral video into a virtual video as captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; and
a display means that displays the virtual video converted by the conversion means.
[6] A display method comprising:
an imaging step of capturing a peripheral video of the surroundings of a moving body;
a conversion step of converting the peripheral video captured in the imaging step into a virtual video as captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured;
a detection step of detecting, by analyzing the peripheral video or the virtual video, a positional relationship between the moving body and a predetermined terrain feature or object (hereinafter, "feature") present in the peripheral video;
a determination step of determining, based on the positional relationship of the predetermined feature detected in the detection step, whether to switch the displayed video from the peripheral video to the virtual video;
a switching step of switching the displayed video from the peripheral video to the virtual video when it is determined in the determination step that the displayed video is to be switched from the peripheral video to the virtual video; and
a display step of displaying the displayed video switched in the switching step.
[7] A display method comprising:
an imaging step of capturing a peripheral video of the surroundings of a moving body;
a detection step of detecting, by analyzing the peripheral video, a positional relationship between the moving body and a predetermined feature present in the peripheral video;
a conversion step of converting, based on the result detected in the detection step, the peripheral video into a virtual video as captured from a virtual viewpoint different from the viewpoint from which the peripheral video was captured; and
a display step of displaying the virtual video converted in the conversion step.
[8] A display program that causes a computer to execute the display method according to claim 6 or 7.
[9] A computer-readable recording medium on which the display program according to claim 8 is recorded.
PCT/JP2007/061926 2006-06-13 2007-06-13 Display device, display method, display program and computer readable recording medium WO2007145257A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006-163305 2006-06-13
JP2006163305 2006-06-13

Publications (1)

Publication Number Publication Date
WO2007145257A1 true WO2007145257A1 (en) 2007-12-21

Family

ID=38831771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/061926 WO2007145257A1 (en) 2006-06-13 2007-06-13 Display device, display method, display program and computer readable recording medium

Country Status (1)

Country Link
WO (1) WO2007145257A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004056778A (en) * 2002-05-31 2004-02-19 Matsushita Electric Ind Co Ltd Vehicle periphery monitoring device, image generation method, and image generation program
JP2004064150A (en) * 2002-07-24 2004-02-26 Sumitomo Electric Ind Ltd Image conversion system and image processor
JP2005057536A (en) * 2003-08-05 2005-03-03 Nissan Motor Co Ltd Video presentation apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009253460A (en) * 2008-04-02 2009-10-29 Denso Corp Parking support system
JP2010205078A (en) * 2009-03-04 2010-09-16 Denso Corp Parking support device
JP2010228649A (en) * 2009-03-27 2010-10-14 Denso Corp Display system of photographed image outside vehicle

Similar Documents

Publication Publication Date Title
US8600655B2 (en) Road marking recognition system
JP4421549B2 (en) Driving assistance device
JP6453192B2 (en) Image recognition processing apparatus and program
WO2007037126A1 (en) Navigation device, navigation method, and vehicle
JP2015075889A (en) Driving support device
JP6106495B2 (en) Detection device, control method, program, and storage medium
JP2008018798A (en) Display controlling device, display controlling method, display controlling program, and recording medium which can be read by computer
JP2010026618A (en) On-vehicle navigation device and intersection entry guidance method
JPWO2008107944A1 (en) Lane departure prevention apparatus, lane departure prevention method, lane departure prevention program, and storage medium
JP4637302B2 (en) Road marking recognition system
JP2010003086A (en) Drive recorder
JP2016162424A (en) Estimation device, estimation method, and program
JP2024052899A (en) Communication device and communication method
JP2015141155A (en) virtual image display device, control method, program, and storage medium
WO2007145257A1 (en) Display device, display method, display program and computer readable recording medium
JP5702476B2 (en) Display device, control method, program, storage medium
WO2007135856A1 (en) Photographing control apparatus, photographing control method, photographing control program and recording medium
WO2007088915A1 (en) Route guidance device, route guidance method, route guidance program, and recording medium
WO2007148698A1 (en) Communication terminal device, communication method, communication program, and recording medium
JP2008262481A (en) Vehicle control device
JP4825555B2 (en) Display control apparatus, display control method, display control program, and computer-readable recording medium
JP2014044458A (en) On-vehicle device and danger notification method
JP2016143308A (en) Notification device, control method, program, and storage medium
JP2013077122A (en) Accident analysis device, accident analysis method, and program
JP2011215906A (en) Safety support device, safety support method, safety support program and recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07745193

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07745193

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP