WO2016075810A1 - Display device and display method - Google Patents

Display device and display method

Info

Publication number
WO2016075810A1
Authority
WO
WIPO (PCT)
Prior art keywords
shadow image
image
vehicle
shadow
moving body
Prior art date
Application number
PCT/JP2014/080177
Other languages
French (fr)
Japanese (ja)
Inventor
Takura Yanagi (柳 拓良)
Original Assignee
Nissan Motor Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nissan Motor Co., Ltd.
Priority to PCT/JP2014/080177 (WO2016075810A1)
Priority to JP2016558896A (JP6500909B2)
Priority to PCT/JP2015/055957 (WO2016075954A1)
Publication of WO2016075810A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a display device and a display method for stereoscopically displaying an image around a moving object.
  • Patent Document 1 discloses a moving body display device that projects an image of an object around the moving body and an image of the moving body onto a three-dimensional coordinate system.
  • the problem to be solved by the present invention is to display an image in which the positional relationship between the moving object and the object can be easily understood.
  • the present invention solves the above problem by generating a shadow image imitating a shadow generated when light is applied to a moving body and displaying an image including the shadow image and a captured image around the moving body.
  • the present invention since the presence of the moving body and the direction of the moving body can be expressed by the shadow image of the moving body, it is possible to display an image in which the positional relationship between the moving body and surrounding objects can be easily understood.
  • FIGS. 2A and 2B are diagrams showing an example of the installation positions of the cameras of this embodiment. FIG. 3A shows an example of a solid coordinate system. FIG. 3B shows an example of the shadow image of the host vehicle in the solid coordinate system shown in FIG. 3A. FIG. 4A shows another example of a solid coordinate system. FIG. 4B shows another example of the shadow image of the host vehicle in the solid coordinate system shown in FIG. 4A. FIG. 5 shows a display example of the shadow images of the host vehicle and other vehicles. FIG. 6 is a flowchart showing the control procedure of the display device according to the first embodiment of the present invention. FIG. 7 is a flowchart showing the subroutine of the virtual light source setting process shown in FIG. 6. FIG. 8 is a diagram for explaining another display example of a shadow image.
  • the display system 1 of the present embodiment displays a video for grasping the moving body and the surroundings of the moving body on a display viewed by an operator of the moving body.
  • FIG. 1 is a block configuration diagram of a display system 1 including a display device 100 according to the present embodiment.
  • the display system 1 of this embodiment includes a display device 100 and a mobile device 200.
  • Each device of the display device 100 and the mobile device 200 includes a wired or wireless communication device (not shown), and exchanges information with each other.
  • the moving body to which the display system 1 of the present embodiment is applied includes a vehicle, a helicopter, a submarine explorer, an airplane, an armored vehicle, a train, a forklift, and other devices having a moving function.
  • a case where the moving body is a vehicle will be described as an example.
  • the moving body of the present embodiment may be a manned machine that a human can board, or an unmanned machine that no human boards.
  • the display system 1 of the present embodiment may be configured as a device mounted on a moving body, or may be configured as a portable device that can be brought into the moving body.
  • part of the configuration of the display system 1 according to the present embodiment may be mounted on the moving body and the rest mounted on a device physically separate from the moving body, so that the configuration is distributed.
  • the mobile body and another device are configured to be able to exchange information.
  • the mobile device 200 of this embodiment includes a camera 40, a controller 50, a sensor 60, a navigation device 70, and a display 80.
  • These devices are connected to one another by a CAN (Controller Area Network) or other LAN mounted on the moving body, and can exchange information with each other.
  • the camera 40 of the present embodiment is provided at a predetermined position of a vehicle (an example of a moving body; the same applies hereinafter).
  • the number of cameras 40 provided in the vehicle may be one or plural.
  • the camera 40 mounted on the vehicle images the vehicle and / or the surroundings of the vehicle, and sends the captured image to the display device 100.
  • the captured image in the present embodiment includes a part of the vehicle and a video around the vehicle.
  • the captured image data is used for calculation processing of the positional relationship with the ground surface around the vehicle and generation processing of the image of the vehicle or the surroundings of the vehicle.
  • FIGS. 2A and 2B are diagrams illustrating an example of the installation position of the camera 40 mounted on the host vehicle V.
  • the host vehicle V is provided with six cameras 40: a right front camera 40R1, a right center camera 40R2, a right rear camera 40R3, a left front camera 40L1, a left center camera 40L2, and a left rear camera 40L3.
  • the arrangement positions of the cameras 40 are not particularly limited, and their imaging directions can also be set arbitrarily.
  • Each camera 40 sends captured images to the display device 100 in response to a command from the control device 10 described later, or at preset timing.
  • the captured image captured by the camera 40 is used for generating a video, and for detecting an object and measuring a distance to the object.
  • the captured image of the camera 40 of the present embodiment includes an image of an object around the vehicle.
  • the target object in the present embodiment includes other vehicles, pedestrians, road structures, parking lots, signs, facilities, and other objects existing around the host vehicle V.
  • the object in the present embodiment includes the ground surface around the moving body.
  • the “ground surface” is a term indicating a concept including the surface of the earth and the surface of the earth's crust (land).
  • the term “ground surface” in the present embodiment includes a land surface, a sea surface, a river surface, a lake surface, a seabed surface, a road surface, a parking lot surface, a port surface, or a surface containing two or more of these.
  • when the moving body is indoors in a facility (building) such as a warehouse or a factory, the term “ground surface” in the present embodiment also includes the floor surface and wall surfaces of that facility.
  • the “surface” is a surface that is exposed to the camera 40 during imaging.
  • the camera 40 of this embodiment includes an image processing device 401.
  • the image processing apparatus 401 extracts features such as an edge, a color, a shape, and a size from the captured image data of the camera 40, and identifies an attribute of the target object included in the captured image from the extracted features.
  • the image processing apparatus 401 stores in advance the characteristics of each target object, and identifies the target object included in the captured image by pattern matching processing.
  • the method for detecting the presence of an object using captured image data is not particularly limited, and a method known at the time of filing this application can be used as appropriate.
  • the image processing device 401 calculates the distance from the own vehicle to the object from the position of the feature point extracted from the data of the captured image of the camera 40 or the change over time of the position.
  • the image processing apparatus 401 uses imaging parameters such as the installation position of the camera 40, the optical axis direction, and imaging characteristics.
  • the method for measuring the distance to the object using the captured image data is not particularly limited, and a method known at the time of filing the present application can be used as appropriate.
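  • as one illustration of such a known monocular technique (a sketch under stated assumptions, not the method of this publication): if a feature point is assumed to lie on flat ground, its distance can be estimated from its image row together with the camera's mounting height, pitch, and vertical field of view. All function names and parameter values below are hypothetical.

```python
import math

def estimate_distance_flat_ground(v_px: float, img_h: int, cam_height_m: float,
                                  pitch_rad: float, vfov_rad: float) -> float:
    """Distance to a feature point assumed to lie on flat ground.

    v_px: pixel row of the feature point (0 = top of image).
    img_h: image height in pixels.
    cam_height_m: camera mounting height above the road surface.
    pitch_rad: downward pitch of the camera's optical axis.
    vfov_rad: vertical field of view of the camera.
    """
    # Angle of the ray through this pixel row, relative to the optical axis.
    ray_offset = (v_px - img_h / 2) / img_h * vfov_rad
    # Total angle of the ray below the horizon.
    angle_below_horizon = pitch_rad + ray_offset
    if angle_below_horizon <= 0:
        raise ValueError("ray does not intersect the ground plane")
    # Similar triangles: camera height / tan(angle) = forward distance.
    return cam_height_m / math.tan(angle_below_horizon)

# Example: camera 1.2 m high, pitched 5 degrees down, 40-degree vertical FOV.
d = estimate_distance_flat_ground(v_px=420, img_h=480, cam_height_m=1.2,
                                  pitch_rad=math.radians(5),
                                  vfov_rad=math.radians(40))
print(f"estimated distance: {d:.1f} m")  # about 3.3 m
```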
  • the distance measuring device 41 may be provided as means for acquiring data for calculating the positional relationship with the host vehicle V.
  • the distance measuring device 41 may be used together with the camera 40 or may be used instead of the camera 40.
  • the distance measuring device 41 detects a target existing around the host vehicle V and measures the distance between the target and the host vehicle V. That is, the distance measuring device 41 has a function of detecting an object around the host vehicle V.
  • the distance measuring device 41 sends distance measurement data up to the measured object to the display device 100.
  • the distance measuring device 41 may be a radar distance measuring device or an ultrasonic distance measuring device. A ranging method known at the time of filing of the present application can be used.
  • the number of distance measuring devices 41 installed on the host vehicle V is not particularly limited.
  • the installation positions of the distance measuring devices 41 on the host vehicle V are also not particularly limited.
  • the distance measuring device 41 may be provided at a position corresponding to the installation position of the camera 40 shown in FIG. 2 or in the vicinity thereof, or may be provided in front of or behind the host vehicle V. When the moving body is a helicopter, airplane, submarine spacecraft, or the like that moves in the height direction, the camera 40 and / or the distance measuring device 41 may be provided on the bottom side of the body.
  • the controller 50 of this embodiment controls the operation of the moving object including the host vehicle V.
  • the controller 50 centrally manages each piece of information related to the operation of the moving body, including detection information of the sensor 60 described later.
  • the sensor 60 of the present embodiment includes a speed sensor 61 and a longitudinal acceleration sensor 62.
  • the speed sensor 61 detects the moving speed of the host vehicle V.
  • the longitudinal acceleration sensor 62 detects the acceleration in the longitudinal direction of the host vehicle V.
  • the navigation device 70 of the present embodiment includes a position detection device 71 including a GPS (Global Positioning System) 711, map information 72, and road information 73.
  • the navigation device 70 obtains the current position of the host vehicle V using the GPS 711 and sends it to the display device 100.
  • the map information 72 of the present embodiment is information in which points are associated with roads, structures, facilities, and the like.
  • the navigation device 70 has a function of referring to the map information 72, obtaining a route from the current position of the host vehicle V detected by the position detection device 71 to the destination, and guiding the host vehicle V.
  • the road information 73 of this embodiment is information in which position information and road attribute information are associated with each other.
  • the road attribute information includes attributes such as whether each lane is an overtaking lane and whether it is an uphill (climbing) lane.
  • the navigation device 70 refers to the road information 73 and can obtain, for the current position detected by the position detection device 71, information on whether the lane adjacent to the one in which the host vehicle V is traveling is an overtaking lane (a lane with a relatively high traveling speed) or an uphill lane (a lane with a relatively low traveling speed).
  • the control device 10 can predict the vehicle speed of the other vehicle from the detected attribute information of the road on which the other vehicle travels.
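  • the publication does not give the mapping from lane attribute to predicted speed, so the sketch below is a hypothetical stand-in: a lookup from attribute to an expected traveling speed, with made-up keys and values.

```python
# Hypothetical attribute keys and expected speeds (km/h); the source only
# states that a lane's attribute lets the other vehicle's speed be predicted.
EXPECTED_SPEED_KMH = {
    "overtaking_lane": 100.0,  # lane with a relatively high traveling speed
    "uphill_lane": 60.0,       # lane with a relatively low traveling speed
    "normal_lane": 80.0,
}

def predict_vehicle_speed(lane_attribute: str) -> float:
    """Predict another vehicle's speed from the attribute of its travel lane."""
    return EXPECTED_SPEED_KMH.get(lane_attribute, EXPECTED_SPEED_KMH["normal_lane"])

print(predict_vehicle_speed("overtaking_lane"))  # 100.0
```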
  • the display 80 of the present embodiment displays an image of the host vehicle V and the surroundings of the host vehicle V generated from an arbitrary virtual viewpoint generated by the display device 100 described later.
  • the display system 1 in which the display 80 is mounted on a moving body will be described as an example.
  • the display 80 may be provided on the portable display device 100 side that can be brought into the moving body.
  • the display device 100 of this embodiment includes a control device 10.
  • the control device 10 of the display device 100 includes a ROM (Read Only Memory) 12 that stores a program for displaying the moving body and surrounding video, a CPU (Central Processing Unit) 11 serving as an operation circuit that executes the program stored in the ROM 12 to realize the functions of the display device 100, and a RAM (Random Access Memory) 13 that functions as an accessible storage device.
  • the control device 10 may include a GPU (Graphics Processing Unit) that executes image processing.
  • the control device 10 of the display device 100 realizes an image acquisition function, an information acquisition function, a shadow image generation function, and a display control function.
  • the control device 10 of this embodiment further includes an information acquisition function.
  • the control device 10 of this embodiment realizes each function through the cooperation of software for implementing the function and the hardware described above.
  • the control device 10 acquires captured image data captured by the camera 40.
  • the display device 100 acquires captured image data from the mobile device 200 using a communication device (not shown).
  • the control device 10 acquires various types of information from the mobile device 200 using a communication device (not shown).
  • the control device 10 acquires the current position information of the host vehicle V as a moving body.
  • the control device 10 acquires the current position detected by the GPS 711 of the navigation device 70.
  • the control device 10 acquires position information of an object existing around the host vehicle V as a moving body.
  • the acquired position information of the object is used for setting processing of the position of the virtual light source described later.
  • the control device 10 calculates the distance from the host vehicle V to the target object from the captured image of the camera 40 as position information of the target object with respect to the host vehicle V.
  • the control device 10 may use the imaging parameter of the camera 40 for the calculation process of the position information of the object.
  • the control device 10 may acquire the position information of the object calculated by the image processing device 401.
  • the control device 10 acquires the speed of the object.
  • the control device 10 calculates the speed of the object from the change with time of the position information of the object.
  • the control device 10 may calculate the speed of the object based on the captured image data.
  • the control device 10 may acquire the speed information of the object calculated by the image processing device 401.
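  • a minimal sketch of this calculation, assuming two position estimates in a common metric frame (for example, distances computed from successive captured images) and a known frame interval:

```python
def object_speed_mps(prev_pos, curr_pos, dt_s: float) -> float:
    """Speed of an object (m/s) from two successive position fixes.

    prev_pos, curr_pos: (x, y) positions in metres in a common frame.
    dt_s: time elapsed between the two position fixes, in seconds.
    """
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    return (dx * dx + dy * dy) ** 0.5 / dt_s

# Example: an object that moved about 1.1 m between frames 0.1 s apart.
print(object_speed_mps((10.0, 2.0), (11.0, 2.5), 0.1))  # about 11.2 m/s
```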
  • the control device 10 acquires attribute information of the lane on which the target object travels.
  • the control device 10 refers to the road information 73 of the navigation device 70 and acquires the attribute information of the lane on which the object travels.
  • the control device 10 refers to the map information 72 or the road information 73 and identifies a road and a traveling lane that include the acquired position of the object.
  • the control device 10 refers to the road information 73 and acquires attribute information associated with the travel lane of the identified object.
  • the control device 10 calculates the positional relationship between the position of the host vehicle V detected by the GPS 711 and the target object and, taking that positional relationship into account, derives the attribute of the target object's traveling lane from the attribute of the host vehicle V's traveling lane.
  • for example, it can determine that the traveling lane of the other vehicle is an overtaking lane.
  • the control device 10 acquires the acceleration in the traveling direction of the host vehicle V that is a moving body.
  • the control device 10 acquires the acceleration in the traveling direction of the host vehicle V from the host vehicle V.
  • the control device 10 acquires the longitudinal acceleration detected by the longitudinal acceleration sensor 62.
  • the control device 10 may calculate the acceleration from the speed detected by the speed sensor 61.
  • the control device 10 may calculate acceleration from a change in position information of the host vehicle V detected by the GPS 711.
  • the control device 10 sets a virtual light source according to the position information of the host vehicle V, and generates a shadow image imitating a shadow that is generated when the host vehicle V is irradiated with light from the virtual light source.
  • the control device 10 of the present embodiment displays a shadow image superimposed on a video obtained by projecting a captured image on a three-dimensional coordinate system.
  • the control device 10 sets a virtual light source according to the acquired position information of the host vehicle V, and generates a shadow image imitating a shadow that is generated when light is emitted from the virtual light source to the object.
  • the shadow image may be an image approximated to the shadow itself, or may be an image obtained by deforming the shadow itself in order to display the position and orientation of the host vehicle V.
  • the shadow image need not exactly correspond to the current shape of the host vehicle V; it is an image that looks like a shadow. Since it is not an actual shadow, a pattern or color may be added to it.
  • the shadow image is not limited to the shape corresponding to the current shape of the host vehicle V, and may be an image showing the movable range of the host vehicle V.
  • the shadow image of the present embodiment may be an image showing the movable range of the door when the door of the host vehicle V that is currently closed is released.
  • the mode of the shadow image is not limited, and is appropriately designed according to information desired to be shown to the user.
  • a shadow mapping technique known at the time of filing may be used to generate a shadow image.
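  • as one concrete way to realize such an image (a sketch, not the publication's stated implementation), the corners of a simple box model of the vehicle can be projected from a point virtual light source onto the ground plane; moving the light source to infinity would turn this into a parallel projection. The box dimensions and light position below are hypothetical.

```python
def project_shadow(corners, light_pos):
    """Project body corner points onto the ground plane z = 0.

    corners: list of (x, y, z) points on the vehicle body (z > 0).
    light_pos: (x, y, z) position of the virtual light source, with z above
    every corner so each ray actually reaches the ground.
    Returns the shadow outline on the ground as (x, y) points.
    """
    lx, ly, lz = light_pos
    shadow = []
    for (px, py, pz) in corners:
        # Ray: light + t * (corner - light); it meets z = 0 at t = lz / (lz - pz).
        t = lz / (lz - pz)
        shadow.append((lx + t * (px - lx), ly + t * (py - ly)))
    return shadow

# Example: a 4.5 m x 1.8 m x 1.5 m box for the host vehicle, light above-left.
top_corners = [(0.0, 0.0, 1.5), (4.5, 0.0, 1.5), (4.5, 1.8, 1.5), (0.0, 1.8, 1.5)]
print(project_shadow(top_corners, light_pos=(-5.0, -5.0, 10.0)))
```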
  • FIG. 3A is a diagram illustrating an example of a cylindrical solid coordinate system S1.
  • the host vehicle V is shown placed on the plane G0.
  • FIG. 3B is a diagram showing a display example of the shadow image SH in the three-dimensional coordinate system S1 shown in FIG. 3A.
  • the shadow image SH is projected on the plane G0.
  • FIG. 4A is a diagram illustrating an example of a spherical solid coordinate system S2.
  • the host vehicle V is shown placed on the plane G0.
  • FIG. 4B is a diagram illustrating a display example of a shadow image of the host vehicle V in the three-dimensional coordinate system S2 illustrated in FIG. 4A.
  • the shadow image SH is projected on the plane G0.
  • the shadow image SH shown in FIGS. 3B and 4B is an image that imitates a shadow that is generated when light is emitted from the virtual light source LG set in the three-dimensional coordinate systems S1 and S2.
  • the captured image is projected onto the three-dimensional coordinate system S (S1, S2) of FIGS. 3B and 4B.
  • the presence of the host vehicle V and the direction of the host vehicle V can be expressed by the shadow image SH of the host vehicle V.
  • the shape of the three-dimensional coordinate system S of the present embodiment is not particularly limited, and may be a bowl shape disclosed in Japanese Patent Application Laid-Open No. 2012-138660.
  • the control device 10 sets a virtual light source according to the acquired position information of an object such as another vehicle, and generates a shadow image imitating the shadow produced when light is applied to the object from the virtual light source.
  • FIG. 5 is a diagram illustrating an example of an image including a shadow image SH of the host vehicle V, a shadow image SH1 of the other vehicle VX1 as a target, and a shadow image SH2 of the other vehicle VX2.
  • the virtual light source LG of the light illuminating the host vehicle V, the other vehicle VX1, and the other vehicle VX2 may be a single point, or a separate virtual light source LG may be set for each of the host vehicle V, the other vehicle VX1, and the other vehicle VX2.
  • the position of the virtual light source LG relative to the host vehicle V and the position of the virtual light source LG relative to the other vehicle VX1 (VX2) may be the same or different.
  • the control device 10 of the present embodiment sets a projection plane SQ for projecting shadow images along the direction in which the host vehicle V as a moving body moves.
  • it generates a shadow image imitating the shadow produced on the projection plane SQ when light is applied to the other vehicle VX1 traveling in the lane Ln1 adjacent to the lane Ln2 in which the host vehicle V travels (moves).
  • by projecting the shadow image SH1 of the other vehicle VX1 onto the common projection plane SQ together with the shadow image SH of the host vehicle V, an image that makes the positional relationship between the host vehicle V and the other vehicle VX1 easy to grasp can be presented. Further, by setting the projection plane SQ along the traveling direction of the host vehicle V, an image in which the distance between the host vehicle V and the other vehicle VX1 is easy to recognize can be presented. Furthermore, by setting the projection plane SQ substantially orthogonal to (intersecting at 90 degrees with) the road surface of the lane Ln2 in which the host vehicle V travels, the shadow image SH of the host vehicle V and the shadow image SH1 of the other vehicle VX1 can be displayed so that the driver of the host vehicle V can see them easily.
  • the control device 10 likewise generates a shadow image imitating the shadow produced on the projection plane SQ when light is applied to the preceding other vehicle VX2 traveling ahead in the lane in which the host vehicle V travels (moves).
  • the control device 10 of the present embodiment projects the shadow image SH of the host vehicle V, the shadow image SH1 of the other vehicle VX1, and the shadow image SH2 of the other vehicle VX2 on a common projection plane SQ.
  • the timing at which the host vehicle tries to change the lane from the lane Ln2 to the lane Ln1 is determined from the steering angle of the host vehicle, the blinker operation, the braking operation, and the like.
  • the projection position of the shadow image SH is changed according to the acceleration or deceleration of the host vehicle V. Specifically, when the host vehicle V is accelerating, the control device 10 shifts the position of the shadow image SH forward; when the host vehicle V is decelerating, it shifts the position of the shadow image SH backward. In this way, a shadow image SH from which the situation of the host vehicle V can be grasped is displayed according to its state.
  • the projection position of the shadow image SH is changed according to the difference between the speed of the lane in which the host vehicle V travels and the speed of the host vehicle V.
  • the speed of the lane flow may be an average speed of another vehicle VX traveling in the same lane as the host vehicle V is traveling, or a legal speed of the lane.
  • when the difference between the vehicle speed of the host vehicle V and the flow speed of the lane is large and positive, that is, when the host vehicle V is approaching or overtaking a preceding other vehicle, the control device 10 shifts the position of the shadow image SH forward.
  • when the difference between the vehicle speed of the host vehicle V and the flow speed of the lane is small, or large and negative, that is, when the host vehicle V is being approached or overtaken by another vehicle behind it, the control device 10 shifts the position of the shadow image SH backward.
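  • a minimal sketch of this shifting rule; the gain and the clamp limit are hypothetical tuning values:

```python
def shadow_shift_m(ego_speed_kmh: float, lane_flow_kmh: float,
                   gain: float = 0.2, limit_m: float = 3.0) -> float:
    """Longitudinal offset for the host vehicle's shadow image.

    Positive = shift forward (host faster than the lane flow, e.g. closing on
    a preceding vehicle); negative = shift backward (host slower, e.g. being
    approached from behind).
    """
    diff_kmh = ego_speed_kmh - lane_flow_kmh
    return max(-limit_m, min(limit_m, gain * diff_kmh))

print(shadow_shift_m(110.0, 90.0))  # 3.0 (clamped from 4.0): shift forward
print(shadow_shift_m(70.0, 90.0))   # -3.0: shift backward
```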
  • the control device 10 of the present embodiment generates the shadow images SH1 and SH2 so that the area of the shadow image of an object, including another vehicle VX, with a relatively high speed is larger than the area of the shadow image of an object with a relatively low speed.
  • the control device 10 of the present embodiment acquires the vehicle speed P1 of the other vehicle VX1 and the vehicle speed P2 of the other vehicle VX2, and compares the vehicle speeds P1 and P2.
  • the control device 10 generates the shadow image SH of the other vehicle VX with a high vehicle speed so that the area is larger than the shadow image SH of the other vehicle VX with a low vehicle speed.
  • for example, when the vehicle speed P1 of the other vehicle VX1 is higher than the vehicle speed P2 of the other vehicle VX2, the area of the shadow image SH1 of the other vehicle VX1 is made larger than the area of the shadow image SH2 of the other vehicle VX2. By displaying a shadow image with a large area, it is possible to alert the driver to another vehicle VX traveling at high speed.
  • control device 10 of the present embodiment may increase the area of the shadow image SH of the other vehicle VX as the speed of the other vehicle VX increases.
  • a driver using this system can predict the speed of the other vehicle VX from the size of the shadow image SH.
  • the speed of the other vehicle VX may be an absolute speed or may be a relative speed with respect to the vehicle speed of the host vehicle V.
  • in this way, the shadow image SH of another vehicle VX that is approaching the host vehicle V rapidly can be displayed large.
  • the color and pattern of the shadow image SH may also be changed according to the speed of the other vehicle VX relative to the host vehicle V.
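  • the sketch below illustrates both ideas, an area scale that grows with the object's speed and a tint picked from the closing speed; the mappings and thresholds are hypothetical tuning choices:

```python
def shadow_area_scale(speed_kmh: float, base_kmh: float = 60.0,
                      min_scale: float = 1.0, max_scale: float = 2.0) -> float:
    """Area scale factor for an object's shadow image.

    Grows with the object's speed (absolute, or relative to the host vehicle)
    so that faster objects get larger, more conspicuous shadows.
    """
    return max(min_scale, min(max_scale, speed_kmh / base_kmh))

def shadow_color(closing_speed_kmh: float) -> str:
    """Pick a shadow tint from the closing speed relative to the host vehicle."""
    if closing_speed_kmh > 20.0:
        return "red"     # closing fast: strongest warning
    if closing_speed_kmh > 0.0:
        return "orange"  # closing slowly
    return "gray"        # holding distance or falling back

print(shadow_area_scale(100.0), shadow_color(25.0))  # about 1.67, "red"
```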
  • when the control device 10 of the present embodiment acquires attribute information indicating that the lane Ln1 in which one other vehicle VX travels is an overtaking lane and that the lane Ln2 in which another other vehicle VX travels is not an overtaking lane, it generates the shadow images so that the area of the shadow image of the other vehicle VX traveling in the overtaking lane is larger. This is because the speed of another vehicle VX traveling in an overtaking lane can be predicted to be higher than the speed of another vehicle VX traveling in a lane that is not an overtaking lane.
  • the attribute information that the lane is an overtaking lane or is not an overtaking lane is acquired from the map information 72 and / or road information 73 of the navigation device 70.
  • the method for identifying and acquiring the lane attribute information is not particularly limited, and a method known at the time of filing can be used as appropriate.
  • the control device 10 of the present embodiment may change the area of the shadow image in accordance with the driving skill of the operator of the moving body, for example, the driver of the host vehicle V.
  • when the operator's skill is low, the control device 10 generates the shadow image so that its area is larger than when the operator's skill is high.
  • the operator's skill may be input by the operators themselves, or may be determined from experience such as the number of operations or the distance traveled.
  • the operator's skill may also be determined from the operator's past operation history.
  • for example, the operation history of a highly skilled operator is compared with the operation history of the individual operator; if the difference is large, the operation skill is judged to be low, and if the difference is small, the operation skill is judged to be high.
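  • a minimal sketch of such a comparison, assuming equal-length steering-angle traces sampled over the same manoeuvre; the mean-absolute-difference metric and its threshold are hypothetical:

```python
def skill_from_history(driver_steering, expert_steering,
                       threshold_deg: float = 5.0) -> str:
    """Judge operation skill by comparing a steering trace with an expert's.

    driver_steering, expert_steering: equal-length lists of steering angles
    (degrees) sampled over the same manoeuvre, e.g. a lane change.
    """
    mean_diff = sum(abs(d - e) for d, e in zip(driver_steering, expert_steering))
    mean_diff /= len(expert_steering)
    # Large deviation from the expert trace -> skill judged low.
    return "low" if mean_diff > threshold_deg else "high"

print(skill_from_history([0, 4, 12, 6, 0], [0, 2, 8, 4, 0]))  # "high" (mean diff 1.6)
```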
  • in the case of vehicle driving, the driving skill can be determined based on the accelerator operation, the timing of the steering operation, and the steering amount when changing the travel lane.
  • lane Ln1 is an overtaking lane
  • lane Ln2 is not an overtaking lane (uphill lane).
  • the shadow image SH1 of the other vehicle VX1 traveling on the lane Ln1 has a larger area than the shadow image SH2 of the other vehicle VX2 traveling on the lane Ln2 and the shadow image SH of the host vehicle V.
  • by controlling the size of the shadow image SH according to the attribute of the lane in which the host vehicle V and the other vehicle VX travel, just as when controlling the size of the shadow image SH by the actual vehicle speed, it is possible to alert the driver to another vehicle VX traveling at high speed.
  • when the control device 10 of the present embodiment determines from the acquired acceleration that the host vehicle V is in an accelerating state, it shifts the position of the virtual light source LG to the side opposite to the traveling direction of the host vehicle V (from the arrow F direction in the figure toward the arrow F' direction).
  • the virtual light source LG is shifted to a rear position, for example, to the virtual light source LG2 (indicated by a broken line).
  • on the other hand, when the host vehicle V is in a decelerating state, the position of the virtual light source LG is shifted toward the traveling direction side of the host vehicle V (the arrow F direction in the figure).
  • the virtual light source LG is shifted to a front position, for example, a virtual light source LG1 (indicated by a broken line).
  • a shadow image suited to the driver's decision-making situation can thus be displayed by shifting the position of the virtual light source LG forward and thereby shifting the projection position of the shadow image SH backward.
  • the setting position of the virtual light source LG is not limited, but may be the same position as the virtual viewpoint in the projection process.
  • the position of the virtual viewpoint for viewing the host vehicle V can be recognized from the shape of the shadow image, and the positional relationship between the host vehicle V and the surroundings can be easily understood.
  • the virtual light source LG may be arranged at infinity. In this case, since parallel projection can be performed, it is easy to grasp the positional relationship between the host vehicle V and the object (such as the other vehicle VX) from the shadow image SH.
  • the setting position of the virtual viewpoint is not particularly limited. Further, the position of the virtual viewpoint may be changeable according to the user's designation. Further, the surface on which the shadow image SH is projected (represented) may be a road surface of a road on which the host vehicle travels, or may be a projection surface set as shown in FIG. Further, when the moving body is a helicopter, the shadow information may be projected on the ground surface below the helicopter. When the moving body is a ship, the shadow information may be projected on the sea surface.
  • the control device 10 projects the captured image data acquired from the camera 40 onto the three-dimensional coordinate system S or the projection plane SQ, and generates images of the host vehicle V and surrounding objects from the set virtual viewpoint. Then, the control device 10 displays the generated video on the display 80.
  • the display 80 may be mounted on the host vehicle V and configured as the mobile device 200 or may be provided on the display device 100 side.
  • the display 80 may be a display for a two-dimensional image or a display that displays a three-dimensional image in which the positional relationship in the depth direction of the screen can be visually recognized.
  • the video to be displayed includes the shadow image SH of the host vehicle V. It may also include the shadow images SH1 and SH2 of other vehicles VX as objects. Of course, the video to be displayed may include both the shadow image SH of the host vehicle V and the shadow images SH1 and SH2 of the other vehicles VX.
  • an icon image V′ (see FIGS. 3A, 3B, 4A, and 4B) representing the host vehicle V, prepared in advance, may be superimposed on the video displayed by the display device 100 of the present embodiment.
  • the icon image V ′ of the vehicle may be created and stored in advance based on the design of the host vehicle V. In this manner, by superimposing the icon image V ′ of the host vehicle V on the video, the relationship between the position and orientation of the host vehicle V and the surrounding video can be shown in an easily understandable manner.
  • in step S101, the control device 10 acquires the captured images captured by the cameras 40.
  • in step S102, the control device 10 acquires the current position of the host vehicle V and the positions of objects including other vehicles VX.
  • in step S103, the control device 10 sets the position of the virtual light source.
  • one virtual light source may be set with reference to the host vehicle V, or a plurality of virtual light sources may be set, one for each of the host vehicle V and the other vehicles VX.
  • an example of the method of the virtual light source setting process (step S103) will be described based on the flowchart of FIG. 7.
  • in step S111, the control device 10 acquires the acceleration of the host vehicle V.
  • in step S112, the control device 10 determines from the acceleration whether the host vehicle V is in an accelerating state or a decelerating state. If it is accelerating, the process proceeds to step S113, and the virtual light source is shifted backward relative to the traveling direction. On the other hand, if it is determined in step S114 that the host vehicle V is decelerating, the process proceeds to step S115, and the virtual light source is shifted forward in the traveling direction.
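  • the branch logic of steps S111 to S115 might look like the sketch below; the shift distance, the dead band, and the coordinate convention (x measured along the traveling direction) are hypothetical:

```python
def set_virtual_light_source(base_pos, accel_mps2: float,
                             shift_m: float = 2.0, deadband_mps2: float = 0.3):
    """Shift the virtual light source along the travel axis (steps S111-S115).

    base_pos: (x, y, z) default light position, x along the travel direction.
    Accelerating -> shift the light backward (the shadow moves forward);
    decelerating -> shift the light forward (the shadow moves backward).
    """
    x, y, z = base_pos
    if accel_mps2 > deadband_mps2:        # accelerating: S112 -> S113
        x -= shift_m
    elif accel_mps2 < -deadband_mps2:     # decelerating: S114 -> S115
        x += shift_m
    return (x, y, z)

print(set_virtual_light_source((0.0, -3.0, 8.0), accel_mps2=1.2))   # (-2.0, -3.0, 8.0)
print(set_virtual_light_source((0.0, -3.0, 8.0), accel_mps2=-1.2))  # (2.0, -3.0, 8.0)
```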
  • in step S104, the control device 10 sets a projection plane.
  • the projection plane may be a three-dimensional coordinate system shown in FIGS. 3A, 3B, 4A, and 4B, or may be a two-dimensional coordinate system like the projection plane SQ shown in FIG. Furthermore, a plurality of projection planes may be set as shown in FIG.
  • in step S105, the control device 10 generates the shadow image SH of the host vehicle V.
  • if necessary, the shadow images SH1 and SH2 of the other vehicles VX1 and VX2 are also generated.
  • in step S106, the control device 10 executes the process of projecting the captured images onto the set projection plane, and generates the display video.
  • in step S107, the control device 10 displays the generated video on the display 80.
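  • putting steps S101 to S107 together, one pass of the control procedure might be organized as in the sketch below. The helper functions are hypothetical stand-ins for the processing blocks named in the flowchart (set_virtual_light_source reuses the sketch above), and the camera, GPS, sensor, and display objects are assumed given:

```python
# Hypothetical stand-ins for the processing blocks the flowchart names.
def detect_objects(images): return []                        # part of S102
def set_projection_plane(ego_pos): return "SQ"               # S104
def make_shadow_image(target, light, plane): return (target, light, plane)  # S105
def project_to_plane(images, shadows, plane): return (images, shadows)      # S106

def display_cycle(cameras, gps, accel_sensor, display):
    """One pass of the control procedure of the flowchart (S101-S107)."""
    images = [cam.capture() for cam in cameras]              # S101: captured images
    ego_pos = gps.position()                                 # S102: host position
    objects = detect_objects(images)                         # S102: object positions
    light = set_virtual_light_source(                        # S103 via S111-S115
        (0.0, -3.0, 8.0), accel_sensor.longitudinal())
    plane = set_projection_plane(ego_pos)                    # S104: projection plane
    shadows = [make_shadow_image(t, light, plane)
               for t in (ego_pos, *objects)]                 # S105: shadow images
    video = project_to_plane(images, shadows, plane)         # S106: generate video
    display.show(video)                                      # S107: display it
```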
  • the shadow image SH described here includes an image indicating the movable range of the movable member of the host vehicle V.
  • the projection plane showing the shadow image SH of the host vehicle V includes a first projection plane SQs along the vehicle length direction of the host vehicle V and a second projection plane SQb along the vehicle width direction.
  • onto the first projection plane SQs, a shadow image SHs is projected that imitates the shadow produced when light is emitted from a virtual light source LG set on the side of the vehicle.
  • onto the second projection plane SQb, a shadow image SHb is projected that imitates the shadow produced when light is emitted from a virtual light source LG set in front of the vehicle.
  • the shadow image SH of this example includes an image showing the movable range of the movable member of the host vehicle V.
  • the own vehicle V of this example is a hatchback type vehicle, and has a side door and a back door (hatch door).
  • the side door of the host vehicle V opens and closes sideways, and the back door of the host vehicle V opens and closes rearward.
  • the side door and the back door of the host vehicle V are movable members of the host vehicle V that is a moving body.
  • the control device 10 generates a shadow image indicating the movable range of the back door, assuming that the occupant opens the back door in order to load or unload the load from the rear loading platform.
  • the shadow image SHs projected on the first projection surface SQs includes a back door portion Vd3.
  • the back door portion Vd3 represents the rear extension (movable range) of the host vehicle V when the back door is opened.
  • the shadow image indicating the movable range of the back door is projected on the left and right sides of the host vehicle V or on the placement surface (parking surface / road surface) of the host vehicle V.
  • the shadow image may be projected on a wall surface or floor surface that actually exists.
  • the control device 10 generates a shadow image indicating the movable range of the side door, assuming that the side door is opened so that the occupant can get on and off the seat or carry in / out the luggage.
  • the shadow image SHb projected onto the second projection plane SQb includes side door portions Vd1 and Vd2, which express the lateral extension (movable range) of the side doors when they are opened.
  • the shadow image indicating the movable range of the side door is projected in front of and / or behind the host vehicle V.
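  • as an illustration, the movable range of a door can be approximated top-down by sweeping the door edge about its hinge between the closed and fully open angles and merging the swept points into the silhouette used for the shadow image; the geometry below is a hypothetical sketch:

```python
import math

def door_swing_extent(hinge_xy, door_len_m: float, closed_deg: float,
                      open_deg: float, steps: int = 8):
    """Top-down points swept by a door edge between closed and fully open.

    hinge_xy: door hinge position in a top-down vehicle frame (metres).
    closed_deg / open_deg: door angles in that frame.
    The returned points can be merged into the outline from which the
    movable-range shadow image is generated.
    """
    hx, hy = hinge_xy
    pts = []
    for i in range(steps + 1):
        a = math.radians(closed_deg + (open_deg - closed_deg) * i / steps)
        pts.append((hx + door_len_m * math.cos(a), hy + door_len_m * math.sin(a)))
    return pts

# Example: a 1.0 m front side door, hinged at its front edge, closed along the
# body (180 degrees) and opening outward to 115 degrees.
arc = door_swing_extent(hinge_xy=(1.2, 0.9), door_len_m=1.0,
                        closed_deg=180.0, open_deg=115.0)
print(arc[0], arc[-1])  # closed-edge point and fully open edge point
```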
  • the control device 10 may set a projection plane to project a shadow image, and project the shadow image onto the projection plane.
  • thus, the driver of the host vehicle V can determine the parking position of the host vehicle V in consideration of the work to be done after the host vehicle V is parked.
  • the mode of the shadow image SH has been described taking as an example the case where the moving body is the host vehicle V, that is, where the doors of the host vehicle V are the movable members, but the mode of the shadow image SH is not limited to this.
  • when the moving body is a forklift, a shadow image is generated in consideration of the movable ranges of the forklift body and the lift equipment.
  • when the moving body is a helicopter, a shadow image is generated in consideration of the helicopter body and the rotation range of the rotor blades.
  • when the moving body is an airplane, a shadow image is generated in consideration of the installation range of the airplane body and attached equipment such as a boarding ramp.
  • when the moving body is a submarine explorer, the shadow image is generated in consideration of the installation range of any platform provided as necessary.
  • the generated shadow image is displayed on the display 80.
  • by displaying the shadow image indicating the operating range of the lift equipment on, for example, the surrounding ground surface (for example, the floor or wall surface of a facility such as a warehouse or a factory), it can be confirmed in advance whether the forklift can operate without interfering with other devices or facilities.
  • likewise, by displaying a shadow image of the helicopter on the surrounding ground surface (for example, the ground below it), the attitude of the helicopter can be confirmed from the sky.
  • by displaying a shadow image indicating the installation range of the airplane body and its attached equipment on the ground surface below the airplane, it is possible to search from the sky for a place with enough area for the airplane to make an emergency landing.
  • a basic pattern of the shadow image SH may be prepared and stored in advance, and the control device 10 may read it from memory as necessary.
  • the display device 100 generates a shadow image SH imitating a shadow that is generated when the host vehicle V is irradiated with light, and displays an image including the shadow image SH and captured images around the host vehicle V. Display.
  • the display device 100 generates a shadow image SH imitating a shadow generated when light is irradiated on an object including the other vehicle VX, and displays a video including the shadow image SH and a captured image.
  • the display device 100 of the present embodiment sets a projection plane SQ for projecting shadow images along the direction in which the host vehicle V moves, and generates a shadow image imitating the shadow produced on the projection plane SQ when light is applied to the other vehicle VX1 traveling in the lane Ln1 adjacent to the travel lane Ln2 in which the host vehicle V travels (moves).
  • by projecting the shadow image SH1 of the other vehicle VX1 (object) traveling in the adjacent lane Ln1 onto the common projection plane SQ together with the shadow image SH of the host vehicle V, an image that makes the positional relationship between the host vehicle V and the other vehicle VX1 easy to grasp can be presented.
  • by setting the projection surface SQ along the traveling direction of the host vehicle V it is possible to present an image in which the distance between the host vehicle V and the other vehicle VX1 can be easily recognized.
  • the display device 100 generates the shadow images SH1 and SH2 so that the area of the shadow image of an object with a relatively high speed is larger than the area of the shadow image of an object with a relatively low speed. Since the shadow image of a relatively fast object is thus displayed relatively large, an image calling attention to the fast object can be displayed.
  • by changing the size of the shadow image SH according to the attribute of the lane in which the host vehicle V and the other vehicle VX travel, the display device 100 can alert the driver to another vehicle VX traveling at high speed, just as when controlling the size of the shadow image SH by the actual vehicle speed.
  • when the host vehicle is in an accelerating state, the display device 100 of the present embodiment shifts the position of the virtual light source LG backward and the projection position of the shadow image SH forward, so that a shadow image suited to the driver's decision-making situation can be displayed.
  • when the host vehicle is in a decelerating state, the display device 100 shifts the position of the virtual light source LG forward and the projection position of the shadow image SH backward, so that a shadow image suited to the driver's decision-making situation can be displayed.
  • the display device 100 of the present embodiment generates and displays a shadow image SH including an image showing the movable range of a movable member of the host vehicle V, so that the driver of the host vehicle V can determine the parking position of the host vehicle V in consideration of the work to be done after parking.
  • the display system 1 including the display device 100 as one embodiment of the display device according to the present invention will be described as an example, but the present invention is not limited to this.
  • the display device 100 including the control device 10 including the CPU 11, the ROM 12, and the RAM 13 is described as an embodiment of the display device according to the present invention, but the present invention is not limited to this.
  • as an embodiment of the display device according to the present invention having an image acquisition unit, an information acquisition unit, a shadow image generation unit, and a display control unit, the display device 100 including the control device 10 that executes the image acquisition function, the information acquisition function, the shadow image generation function, and the display control function is described as an example, but the present invention is not limited to this.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided is a display device 100 comprising a control device 10 that executes: an image acquisition function that obtains captured images captured by a camera 40 mounted to a vehicle V; an information acquisition function that obtains position information for the vehicle V; a shadow image generation function that sets a virtual light source LG in accordance with the position information for the vehicle V and generates a shadow image SH imitating a shadow generated when light is irradiated on the vehicle V from the virtual light source LG; and a display control function that generates a video including the shadow image SH for the vehicle V and captured images and displays the generated video.

Description

Display device and display method
The present invention relates to a display device and a display method for stereoscopically displaying video around a moving body.
With regard to this type of device, a moving body display device is known that projects an image of an object around the moving body and an image of the moving body onto a three-dimensional coordinate system (Patent Document 1).
JP 2012-138660 A
However, there is a problem in that it can be difficult to grasp the positional relationship between a moving body and an object simply by displaying the image of the moving body and the images of the surrounding objects as they are.
The problem to be solved by the present invention is to display video in which the positional relationship between the moving body and objects is easy to grasp.
The present invention solves the above problem by generating a shadow image imitating the shadow produced when light is applied to a moving body, and displaying video that includes the shadow image and captured images of the moving body's surroundings.
According to the present invention, since the presence and orientation of the moving body can be expressed by its shadow image, video in which the positional relationship between the moving body and surrounding objects is easy to grasp can be displayed.
FIG. 1 is a configuration diagram of a display system including the display device according to the present invention. FIGS. 2A and 2B are diagrams showing an example of the installation positions of the cameras of this embodiment. FIG. 3A shows an example of a solid coordinate system. FIG. 3B shows an example of the shadow image of the host vehicle in the solid coordinate system shown in FIG. 3A. FIG. 4A shows another example of a solid coordinate system. FIG. 4B shows another example of the shadow image of the host vehicle in the solid coordinate system shown in FIG. 4A. FIG. 5 shows a display example of the shadow images of the host vehicle and other vehicles. FIG. 6 is a flowchart showing the control procedure of the display device according to the first embodiment of the present invention. FIG. 7 is a flowchart showing the subroutine of the virtual light source setting process shown in FIG. 6. FIG. 8 is a diagram for explaining another display example of a shadow image.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the present embodiment, a case where the display device according to the present invention is applied to a display system 1 mounted on a moving body will be described as an example. The display system 1 of the present embodiment displays video for grasping the moving body and its surroundings on a display viewed by the operator of the moving body.
FIG. 1 is a block configuration diagram of the display system 1 including the display device 100 according to the present embodiment. As shown in FIG. 1, the display system 1 of this embodiment includes a display device 100 and a mobile device 200. The display device 100 and the mobile device 200 each include a wired or wireless communication device (not shown) and exchange information with each other.
The moving body to which the display system 1 of the present embodiment is applied includes a vehicle, a helicopter, a submarine explorer, an airplane, an armored vehicle, a train, a forklift, and other devices having a moving function. In the present embodiment, a case where the moving body is a vehicle will be described as an example. Note that the moving body of the present embodiment may be a manned machine that a human can board, or an unmanned machine that no human boards.
Note that the display system 1 of the present embodiment may be configured as a device mounted on a moving body, or as a portable device that can be brought into the moving body. In addition, part of the configuration of the display system 1 according to the present embodiment may be mounted on the moving body while the rest is mounted on a device physically separate from the moving body, with the configuration thus distributed. In this case, the moving body and the separate device are configured to be able to exchange information.
As shown in FIG. 1, the mobile device 200 of this embodiment includes a camera 40, a controller 50, a sensor 60, a navigation device 70, and a display 80. These devices are connected by a CAN (Controller Area Network) or other LAN mounted on the moving body, and can exchange information with each other.
The camera 40 of the present embodiment is provided at a predetermined position of the vehicle (an example of a moving body; the same applies hereinafter). The number of cameras 40 provided in the vehicle may be one or more. A camera 40 mounted on the vehicle images the vehicle and/or its surroundings and sends the captured image to the display device 100. The captured image in the present embodiment includes part of the vehicle and video of its surroundings. The captured image data is used for calculating the positional relationship with the ground surface around the vehicle and for generating video of the vehicle and its surroundings.
FIGS. 2A and 2B are diagrams illustrating an example of the installation positions of the cameras 40 mounted on the host vehicle V. In the example shown in the figures, the host vehicle V is provided with six cameras 40: a right front camera 40R1, a right center camera 40R2, a right rear camera 40R3, a left front camera 40L1, a left center camera 40L2, and a left rear camera 40L3. The arrangement positions of the cameras 40 are not particularly limited, and their imaging directions can be set arbitrarily. Each camera 40 sends captured images to the display device 100 in response to a command from the control device 10 described later, or at preset timing.
 本実施形態において、カメラ40が撮像した撮像画像は、映像の生成に用いられるほか、対象物の検出、及び対象物までの距離の計測に用いられる。本実施形態のカメラ40の撮像画像は、車両の周囲の対象物の映像を含む。
 本実施形態における対象物は、自車両Vの周囲に存在する他車両、歩行者、道路構造物、駐車場、標識、施設、その他の物体を含む。本実施形態における対象物は、移動体周囲の地表を含む。本実施形態において「地表」は、地球の表面、地球の地殻(土地)の表面を含む概念を示す用語である。本実施形態の用語「地表」は、陸地の表面、海の表面、河又は川の表面、湖の表面、海底の表面、道路の表面、駐車場の表面、ポートの表面、又はこれらのうち二つ以上を含む面を含む。なお、移動体が倉庫や工場などの施設(建屋)の屋内に存在する場合において、本実施形態の用語「地表」は、その施設の床面、壁面を含む意味を有する。「表面」とは、撮像時にカメラ40側に表出される面である。
In the present embodiment, the captured image captured by the camera 40 is used for generating a video, and for detecting an object and measuring a distance to the object. The captured image of the camera 40 of the present embodiment includes an image of an object around the vehicle.
The target object in the present embodiment includes other vehicles, pedestrians, road structures, parking lots, signs, facilities, and other objects existing around the host vehicle V. The object in the present embodiment includes the ground surface around the moving body. In the present embodiment, the “ground surface” is a term indicating a concept including the surface of the earth and the surface of the earth's crust (land). The term “surface” of the present embodiment refers to a land surface, a sea surface, a river or river surface, a lake surface, a seabed surface, a road surface, a parking lot surface, a port surface, or two of these. Includes faces that contain more than one. In addition, when the moving body is present indoors in a facility (building) such as a warehouse or a factory, the term “ground surface” in the present embodiment has a meaning including the floor surface and wall surface of the facility. The “surface” is a surface that is exposed to the camera 40 during imaging.
 本実施形態のカメラ40は、画像処理装置401を備える。画像処理装置401は、カメラ40の撮像画像のデータからエッジ、色、形状、大きさなどの特徴を抽出し、抽出した特徴から撮像画像に含まれる対象物の属性を特定する。画像処理装置401は、各対象物の特徴を予め記憶し、パターンマッチング処理により撮像画像に含まれる対象物を特定する。撮像画像のデータを用いて対象物の存在を検出する手法は、特に限定されず、本願出願時に知られた手法を適宜に用いることができる。画像処理装置401は、カメラ40の撮像画像のデータから抽出した特徴点の位置又は位置の経時的な変化から自車両から対象物までの距離を算出する。距離を算出する際に、画像処理装置401は、カメラ40の設置位置、光軸方向、撮像特性などの撮像パラメータを用いる。撮像画像のデータを用いて対象物までの距離を計測する手法は、特に限定されず、本願出願時に知られた手法を適宜に用いることができる。 The camera 40 of this embodiment includes an image processing device 401. The image processing apparatus 401 extracts features such as an edge, a color, a shape, and a size from the captured image data of the camera 40, and identifies an attribute of the target object included in the captured image from the extracted features. The image processing apparatus 401 stores in advance the characteristics of each target object, and identifies the target object included in the captured image by pattern matching processing. The method for detecting the presence of an object using captured image data is not particularly limited, and a method known at the time of filing this application can be used as appropriate. The image processing device 401 calculates the distance from the own vehicle to the object from the position of the feature point extracted from the data of the captured image of the camera 40 or the change over time of the position. When calculating the distance, the image processing apparatus 401 uses imaging parameters such as the installation position of the camera 40, the optical axis direction, and imaging characteristics. The method for measuring the distance to the object using the captured image data is not particularly limited, and a method known at the time of filing the present application can be used as appropriate.
In the present embodiment, a distance measuring device 41 may be provided as a means of acquiring data for calculating the positional relationship with the host vehicle V. The distance measuring device 41 may be used together with the cameras 40 or instead of the cameras 40. The distance measuring device 41 detects objects existing around the host vehicle V and measures the distance between each object and the host vehicle V. In other words, the distance measuring device 41 has the function of detecting objects around the host vehicle V. The distance measuring device 41 sends the measured distance data for each object to the display device 100.
The distance measuring device 41 may be a radar range finder or an ultrasonic range finder, and any ranging method known at the time of filing of the present application can be used. The number of distance measuring devices 41 installed on the host vehicle V is not particularly limited, nor are their installation positions. A distance measuring device 41 may be provided at or near a position corresponding to an installation position of a camera 40 shown in FIG. 2, or in front of or behind the host vehicle V. When the moving body is one that moves in the height direction, such as a helicopter, an airplane, or a seabed probe, the camera 40 and/or the distance measuring device 41 may be provided on the underside of the body.
The controller 50 of the present embodiment controls the operation of the moving body including the host vehicle V. The controller 50 centrally manages all information related to the operation of the moving body, including the detection information of the sensors 60 described later.
The sensors 60 of the present embodiment include a speed sensor 61 and a longitudinal acceleration sensor 62. The speed sensor 61 detects the traveling speed of the host vehicle V. The longitudinal acceleration sensor 62 detects the acceleration of the host vehicle V in its longitudinal direction.
The navigation device 70 of the present embodiment includes a position detection device 71 equipped with a GPS (Global Positioning System) receiver 711, map information 72, and road information 73. The navigation device 70 obtains the current position of the host vehicle V using the GPS 711 and sends it to the display device 100. The map information 72 of the present embodiment associates locations with roads, structures, facilities, and the like. The navigation device 70 has the function of referring to the map information 72, obtaining a route from the current position of the host vehicle V detected by the position detection device 71 to a destination, and guiding the host vehicle V along it.
The road information 73 of the present embodiment associates position information with road attribute information. The road attribute information includes attributes of each road, such as whether or not it is an overtaking lane and whether or not it is a climbing lane. By referring to the road information 73, the navigation device 70 can determine, at the current position detected by the position detection device 71, whether a lane adjacent to the road on which the host vehicle V is traveling is an overtaking lane (a lane with a relatively high traveling speed) or a climbing lane (a lane with a relatively low traveling speed). From the attribute information of the road on which a detected other vehicle is traveling, the control device 10 can predict the speed of that other vehicle.
The display 80 of the present embodiment displays the video, generated by the display device 100 described later, of the host vehicle V and its surroundings as seen from an arbitrary virtual viewpoint. In the present embodiment, the display system 1 in which the display 80 is mounted on the moving body is described as an example, but the display 80 may instead be provided on the side of a portable display device 100 that can be brought into the moving body.
Next, the display device 100 of the present embodiment will be described. The display device 100 of the present embodiment includes a control device 10.
As shown in FIG. 1, the control device 10 of the display device 100 according to the present embodiment includes a ROM (Read Only Memory) 12 that stores a program for displaying the moving body and video of its surroundings, a CPU (Central Processing Unit) 11 serving as an operation circuit that realizes the functions of the display device 100 by executing the program stored in the ROM 12, and a RAM (Random Access Memory) 13 functioning as an accessible storage device. The control device 10 may also include a GPU (Graphics Processing Unit) for executing image processing.
The control device 10 of the display device 100 according to the present embodiment realizes an image acquisition function, an information acquisition function, a shadow image generation function, and a display control function. The control device 10 of the present embodiment executes each of these functions through the cooperation of software for realizing them and the hardware described above.
Each function realized by the control device 10 will be described below.
First, the image acquisition function of the control device 10 will be described. The control device 10 of the present embodiment acquires the data of the images captured by the cameras 40. The display device 100 acquires the captured image data from the mobile device 200 using a communication device (not shown).
Next, the information acquisition function of the control device 10 will be described. The control device 10 acquires various kinds of information from the mobile device 200 using a communication device (not shown). The control device 10 acquires the current position information of the host vehicle V as the moving body, namely the current position detected by the GPS 711 of the navigation device 70.
The control device 10 acquires the position information of objects existing around the host vehicle V as the moving body. The acquired position information of the objects is used in the virtual light source position setting process described later. The control device 10 calculates the distance from the host vehicle V to each object from the images captured by the cameras 40 as the position information of the object relative to the host vehicle V. The control device 10 may use the imaging parameters of the cameras 40 in calculating the position information of the objects, or it may acquire the position information of the objects calculated by the image processing device 401.
The control device 10 acquires the speed of each object. The control device 10 calculates the speed of an object from the change of its position information over time. The control device 10 may instead calculate the speed of the object from the captured image data, or it may acquire the speed information of the object calculated by the image processing device 401.
The control device 10 acquires the attribute information of the lane in which an object is traveling. The control device 10 refers to the road information 73 of the navigation device 70 and acquires the attribute information of the lane in which the object travels. Specifically, the control device 10 refers to the map information 72 or the road information 73 and identifies the road and travel lane that contain the acquired position of the object. The control device 10 then refers to the road information 73 and acquires the attribute information associated with the identified travel lane of the object. Alternatively, the control device 10 may calculate the positional relationship between the position of the host vehicle V detected by the GPS 711 and the object and, taking that positional relationship into account, infer the attribute of the object's travel lane from the attribute of the travel lane of the host vehicle V. For example, where the rule is that the lane to the right is the overtaking lane, if another vehicle (the object) is traveling in the lane to the right of the host vehicle V and the host vehicle's own lane is a non-overtaking lane, it can be determined that the other vehicle's travel lane is the overtaking lane.
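The relative-position reasoning of the preceding paragraph can be sketched as follows, assuming a traffic system in which the rightmost lane is the overtaking lane; the function and parameter names are illustrative assumptions.

```python
def infer_lane_attribute(own_lane_is_overtaking, object_is_to_the_right,
                         overtaking_lane_is_rightmost=True):
    """Infer whether the object's lane is an overtaking lane from the host
    vehicle's lane attribute and the lateral position of the object.
    Returns True, False, or None when no inference is possible."""
    if not overtaking_lane_is_rightmost:
        return None  # the rule does not apply in this traffic system
    if object_is_to_the_right and not own_lane_is_overtaking:
        return True   # object is right of a non-overtaking lane
    if not object_is_to_the_right and own_lane_is_overtaking:
        return False  # object is left of the overtaking lane
    return None
```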
The control device 10 acquires the acceleration of the host vehicle V, the moving body, in its traveling direction. The control device 10 acquires this acceleration from the host vehicle V, namely the longitudinal acceleration detected by the longitudinal acceleration sensor 62. The control device 10 may instead calculate the acceleration from the speed detected by the speed sensor 61, or from changes in the position information of the host vehicle V detected by the GPS 711.
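A minimal sketch of the fallback computation from speed-sensor readings, assuming uniformly spaced samples (the names are hypothetical):

```python
def acceleration_from_speeds(speeds_mps, dt_s):
    """Finite-difference estimate of longitudinal acceleration from a series
    of speed readings sampled every dt_s seconds. A positive value indicates
    the vehicle is accelerating, a negative value that it is decelerating."""
    if len(speeds_mps) < 2:
        return 0.0
    return (speeds_mps[-1] - speeds_mps[-2]) / dt_s
```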
Next, the shadow image generation function of the control device 10 will be described. The control device 10 sets a virtual light source according to the position information of the host vehicle V, and generates a shadow image imitating the shadow that would be produced if light were cast on the host vehicle V from that virtual light source.
The control device 10 of the present embodiment displays the shadow image superimposed on video obtained by projecting the captured images onto a three-dimensional coordinate system. The control device 10 sets a virtual light source according to the acquired position information of the host vehicle V, and generates a shadow image imitating the shadow produced when light from the virtual light source strikes the subject. The shadow image may be an image that approximates the shadow itself, or an image in which the shadow is deformed in order to convey the position, orientation, and so on of the host vehicle V. The shadow image does not correspond strictly to the current shape of the host vehicle V; it is an image that merely looks like a shadow. Since it is not an actual shadow, a pattern or color may be applied to it. Nor is the shadow image limited to a shape corresponding to the current shape of the host vehicle V; it may be an image showing a movable range of the host vehicle V. For example, the shadow image of the present embodiment may show the range through which a currently closed door of the host vehicle V would sweep if it were opened. The form of the shadow image is not limited and is designed as appropriate according to the information to be shown to the user. A shadow mapping technique known at the time of filing may be used to generate the shadow image.
FIG. 3A shows an example of a cylindrical three-dimensional coordinate system S1. In the three-dimensional coordinate system S1 shown in FIG. 3A, the host vehicle V is placed on a plane G0. FIG. 3B shows a display example of the shadow image SH in the three-dimensional coordinate system S1 of FIG. 3A; the shadow image SH is projected onto the plane G0. FIG. 4A shows an example of a spherical three-dimensional coordinate system S2. In the three-dimensional coordinate system S2 shown in FIG. 4A, the host vehicle V is likewise placed on a plane G0. FIG. 4B shows a display example of the shadow image of the host vehicle V in the three-dimensional coordinate system S2 of FIG. 4A; the shadow image SH is projected onto the plane G0.
The shadow images SH shown in FIGS. 3B and 4B imitate the shadows produced when light from the virtual light source LG set in the three-dimensional coordinate systems S1 and S2 strikes the host vehicle V. Although not illustrated, the captured images are projected onto the three-dimensional coordinate system S (S1, S2) of FIGS. 3B and 4B. As shown in FIGS. 3B and 4B, the shadow image SH of the host vehicle V can express the presence of the host vehicle V and its orientation. Consequently, even when video of the surrounding objects is projected onto the three-dimensional coordinate systems S1 and S2, video can be displayed in which the positional relationship between the host vehicle V and the objects is easy to grasp.
The shape of the three-dimensional coordinate system S of the present embodiment is not particularly limited, and may be the bowl shape disclosed in Japanese Patent Application Laid-Open No. 2012-138660.
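The planar-shadow construction that such a shadow image may be based on can be sketched as follows: each outline vertex of a vehicle model is projected from a point light source LG onto the ground plane G0, taken here as z = 0. This is a generic computer-graphics construction under assumed names, not code from the original, and the vertex coordinates shown are placeholders.

```python
def project_onto_ground(light, point):
    """Project a 3-D point of the vehicle model onto the ground plane z = 0
    along the ray from a point light source. `light` and `point` are
    (x, y, z) tuples, with the light above the point."""
    lx, ly, lz = light
    px, py, pz = point
    t = lz / (lz - pz)  # parameter where the ray L + t*(P - L) meets z = 0
    return (lx + t * (px - lx), ly + t * (py - ly), 0.0)

# The silhouette of the host vehicle V is obtained by projecting each
# outline vertex of its model and filling the resulting polygon.
shadow_outline = [project_onto_ground((0.0, 0.0, 5.0), v)
                  for v in [(1.0, 2.0, 0.5), (1.0, -2.0, 0.5),
                            (-1.0, -2.0, 0.5), (-1.0, 2.0, 0.5)]]
```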
The control device 10 of the present embodiment also sets a virtual light source according to the acquired position information of an object such as another vehicle, and generates a shadow image imitating the shadow produced when light from the virtual light source strikes that object.
FIG. 5 shows an example of video including the shadow image SH of the host vehicle V, the shadow image SH1 of another vehicle VX1 as an object, and the shadow image SH2 of another vehicle VX2. A single virtual light source LG may illuminate the host vehicle V and the other vehicles VX1 and VX2, or a separate virtual light source LG may be set for each of them. The virtual light source LG for the host vehicle V and the virtual light source LG for the other vehicle VX1 (VX2) may be in the same positional relationship or in different positional relationships. By displaying not only the shadow image of the host vehicle V but also those of objects such as the other vehicles VX, video can be displayed in which the positional relationship between the host vehicle V and the objects is easy to grasp.
As shown in FIG. 5, the control device 10 of the present embodiment sets a projection plane SQ for the shadow images along the direction in which the host vehicle V, the moving body, moves, and generates a shadow image imitating the shadow cast on the projection plane SQ when light strikes another vehicle VX1 traveling in the adjacent lane Ln1 next to the travel lane Ln2 in which the host vehicle V travels (moves). Projecting the shadow image SH1 of the other vehicle VX1 (the object) onto the projection plane SQ shared with the shadow image SH of the host vehicle V in this way makes it possible to present video in which the positional relationship between the host vehicle V and the other vehicle VX1 is easy to grasp. Setting the projection plane SQ along the traveling direction of the host vehicle V also makes the distance between the host vehicle V and the other vehicle VX1 easy to recognize. Furthermore, setting the projection plane SQ so as to be substantially orthogonal to (intersect at 90 degrees with) the road surface of the lane Ln2 in which the host vehicle V travels allows the shadow image SH of the host vehicle V and the shadow image SH1 of the other vehicle VX1 to be displayed at positions the driver of the host vehicle V can see easily. In other words, video can be presented from which the driver of the host vehicle V can accurately recognize, from the driver's seat, the positional relationship between the host vehicle V and the other vehicle VX1.
As shown in the same figure, the control device 10 of the present embodiment also generates a shadow image imitating the shadow cast on the projection plane SQ when light strikes the preceding other vehicle VX2 traveling ahead in the lane in which the host vehicle V travels (moves).
Incidentally, as shown in FIG. 5, when the host vehicle V is about to change lanes from lane Ln2 to lane Ln1 along the path indicated by the arrow Rn, the driver of the host vehicle V pays attention to the behavior of the other vehicle VX1 traveling behind in the adjacent lane Ln1 and of the other vehicle VX2 traveling ahead in the host vehicle's travel lane Ln2. In such a scene, the control device 10 of the present embodiment projects the shadow image SH of the host vehicle V, the shadow image SH1 of the other vehicle VX1, and the shadow image SH2 of the other vehicle VX2 onto the common projection plane SQ. This allows the driver of the host vehicle V to grasp accurately the positional relationship among the host vehicle V, the other vehicle VX1, and the other vehicle VX2 from the positional relationship among the shadow images SH, SH1, and SH2. The timing at which the host vehicle is about to change lanes from lane Ln2 to lane Ln1 is determined from the host vehicle's steering angle, turn signal operation, braking operation, and the like.
In this case, the projection position of the shadow image SH is changed according to the acceleration or deceleration of the host vehicle V. Specifically, when the host vehicle V is accelerating, the control device 10 shifts the position of the shadow image SH forward; when the host vehicle V is decelerating, the control device 10 shifts the position of the shadow image SH rearward. In this way, a shadow image SH can be displayed that makes the positional relationship between the host vehicle V and the other vehicles VX easy to grasp in accordance with the situation of the host vehicle V.
Furthermore, the projection position of the shadow image SH is changed according to the difference between the flow speed of the lane in which the host vehicle V travels and the speed of the host vehicle V. The flow speed of the lane may be the average speed of the other vehicles VX traveling in the same lane as the host vehicle V, or the legal speed limit of that lane. Specifically, when the difference between the vehicle speed of the host vehicle V and the flow speed of the lane is a large positive value, that is, when the host vehicle V is closing on or overtaking a preceding vehicle, the control device 10 shifts the position of the shadow image SH forward. Conversely, when the difference is small or is a large negative value, that is, when the host vehicle V is being approached or overtaken by a following vehicle, the control device 10 shifts the position of the shadow image SH rearward. In this way, a shadow image SH can be displayed that makes the positional relationship between the host vehicle V and the other vehicles VX easy to grasp in accordance with their relative situation.
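One possible way to combine the two shifting rules above into a single longitudinal offset for the shadow image SH, sketched with hypothetical gain constants that do not appear in the original:

```python
def shadow_offset(accel_mps2, own_speed_mps, lane_flow_mps,
                  k_accel=0.5, k_flow=0.3):
    """Longitudinal offset (in metres, positive = forward) applied to the
    projection position of the shadow image SH. Combines the host vehicle's
    acceleration with its speed relative to the lane flow; k_accel and
    k_flow are assumed tuning constants."""
    return k_accel * accel_mps2 + k_flow * (own_speed_mps - lane_flow_mps)
```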
The control device 10 of the present embodiment generates the shadow images SH1 and SH2 such that the area of the shadow image of an object, including another vehicle VX, whose speed is relatively high is larger than the area of the shadow image of an object whose speed is relatively low. The control device 10 of the present embodiment acquires the vehicle speed P1 of the other vehicle VX1 and the vehicle speed P2 of the other vehicle VX2 and compares them, generating the shadow image SH of the faster other vehicle VX with a larger area than the shadow image SH of the slower one. In this example, when the vehicle speed P1 of the other vehicle VX1 is higher than the vehicle speed P2 of the other vehicle VX2, the area of the shadow image SH1 of the other vehicle VX1 is made larger than the area of the shadow image SH2 of the other vehicle VX2, as shown in FIG. 5. Displaying a shadow image with a larger area draws the driver's attention to the other vehicle VX traveling at high speed.
Although not particularly limited, the control device 10 of the present embodiment may increase the area of the shadow image SH of another vehicle VX as the speed of that vehicle increases. A driver using this system can then estimate the speed of the other vehicle VX from the size of its shadow image SH. The speed of the other vehicle VX may be an absolute speed or a speed relative to the vehicle speed of the host vehicle V; in the latter case, the shadow image SH of another vehicle that is closing on the host vehicle V rapidly can be displayed large. Of course, the color or pattern of the shadow image SH may also be changed according to the speed of the other vehicle VX relative to the host vehicle V.
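A sketch of a monotone speed-to-area mapping consistent with the behavior described above; the constants are illustrative assumptions, not values from the original:

```python
def shadow_scale(object_speed_mps, base_speed_mps=15.0,
                 gain=0.05, min_scale=1.0, max_scale=2.0):
    """Scale factor applied to the area of an object's shadow image so that
    faster objects cast larger shadows. object_speed_mps may be an absolute
    speed or a speed relative to the host vehicle."""
    scale = 1.0 + gain * (object_speed_mps - base_speed_mps)
    return max(min_scale, min(max_scale, scale))
```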
When the control device 10 of the present embodiment acquires attribute information indicating that the lane Ln1 in which another vehicle VX travels is an overtaking lane, it generates the shadow image of the other vehicle VX traveling in the overtaking lane with a larger area than when it acquires attribute information indicating that the lane in which the other vehicle VX travels is not an overtaking lane. This is because the speed of another vehicle VX traveling in an overtaking lane can be predicted to be higher than that of another vehicle VX traveling in a non-overtaking lane. The attribute information indicating that a lane is or is not an overtaking lane is acquired from the map information 72 and/or the road information 73 of the navigation device 70. The method of identifying and acquiring the lane attribute information is not particularly limited, and any method known at the time of filing can be used as appropriate.
The control device 10 of the present embodiment may also change the area of the shadow image according to the driving skill of the operator of the moving body, for example the driver of the host vehicle V. For example, when the operator's skill is low, the control device 10 generates the shadow image with a larger area than when the operator's skill is high. This presents the positions of the host vehicle V and the other vehicles VX, and their positional relationship, in a form that is easy for a low-skill operator to understand. The operator's skill may be entered by the operators themselves, judged from experience such as how often and how far they have driven, or judged from their past driving history. For example, the driving history of a highly skilled operator can be compared with that of an individual operator: when the difference is large, the driving skill is judged to be low, and when it is small, the skill is judged to be high. Taking the driving of a vehicle as an example, driving skill can be judged from the acceleration operation, the timing of the steering operation, and the steering amount when changing travel lanes.
In the example shown in FIG. 5, lane Ln1 is an overtaking lane and lane Ln2 is not (it is a climbing lane). The shadow image SH1 of the other vehicle VX1 traveling in lane Ln1 therefore has a larger area than the shadow image SH2 of the other vehicle VX2 traveling in lane Ln2 and the shadow image SH of the host vehicle V. Changing the size of the shadow image SH according to the attributes of the lanes in which the host vehicle V and the other vehicles VX travel in this way draws the driver's attention to other vehicles VX traveling at high speed, just as when the size of the shadow image SH is controlled by the actual vehicle speed.
Furthermore, when the control device 10 of the present embodiment determines from the acquired acceleration that the host vehicle V is accelerating, it shifts the position of the virtual light source LG to the side opposite the traveling direction of the host vehicle V (from the direction of arrow F in the figure toward the direction of arrow F'). In the example shown in FIG. 5, when the host vehicle V is determined to be accelerating, the virtual light source LG is shifted to a rearward position, for example the position of the virtual light source LG2 (shown by a broken line). Shifting the position of the virtual light source LG rearward during acceleration, and thereby shifting the projection position of the shadow image SH forward, makes it possible to display a shadow image suited to the situation the driver must judge.
When the control device 10 of the present embodiment determines from the acquired acceleration that the host vehicle V is decelerating, it shifts the position of the virtual light source LG toward the traveling direction of the host vehicle V (the direction of arrow F in the figure). In the example shown in FIG. 5, when the host vehicle V is determined to be decelerating, the virtual light source LG is shifted to a forward position, for example the position of the virtual light source LG1 (shown by a broken line). Shifting the position of the virtual light source LG forward during deceleration, and thereby shifting the projection position of the shadow image SH rearward, likewise makes it possible to display a shadow image suited to the situation the driver must judge.
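The light-source shifting of the two preceding paragraphs reduces to moving LG along the heading axis with a sign opposite to the acceleration; a minimal sketch, with a hypothetical gain constant:

```python
def place_virtual_light(base_light_xyz, accel_mps2, heading_unit_xy,
                        shift_gain=1.0):
    """Shift the virtual light source LG along the vehicle's heading:
    rearward while accelerating (so the shadow falls forward) and forward
    while decelerating (so the shadow falls rearward)."""
    hx, hy = heading_unit_xy
    shift = -shift_gain * accel_mps2  # positive accel -> rearward shift
    x, y, z = base_light_xyz
    return (x + shift * hx, y + shift * hy, z)
```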
The set position of the virtual light source LG is not limited, but it may be placed at the same position as the virtual viewpoint used in the projection process. The position of the virtual viewpoint from which the host vehicle V is seen can then be recognized from the shape of the shadow image, making the positional relationship between the host vehicle V and its surroundings easier to grasp. The virtual light source LG may also be placed at infinity. In that case the projection becomes a parallel projection, which makes it easy to grasp the positional relationship between the host vehicle V and the objects (such as the other vehicles VX) from the shadow images SH.
The set position of the virtual viewpoint is not particularly limited, and the position of the virtual viewpoint may be made changeable according to the user's designation. The surface onto which the shadow image SH is projected (expressed) may be the road surface of the road on which the host vehicle travels, or a projection plane set as in FIG. 5. When the moving body is a helicopter, its shadow information may be projected onto the ground surface below the helicopter; when the moving body is a ship, its shadow information may be projected onto the sea surface.
Finally, the display control function of the control device 10 of the present embodiment will be described. The control device 10 projects the captured image data acquired from the cameras 40 onto the three-dimensional coordinate system S or the projection plane SQ, and generates video of the host vehicle V and the surrounding objects as seen from the set virtual viewpoint. The control device 10 then causes the display 80 to show the generated video. The display 80 may be mounted on the host vehicle V and configured as part of the mobile device 200, or provided on the display device 100 side. The display 80 may be a display for two-dimensional images, or a display that shows three-dimensional images in which the positional relationship in the depth direction of the screen can be perceived.
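As an illustration of projecting a captured pixel onto the cylindrical coordinate system S1 of FIG. 3A, the following sketch intersects a camera viewing ray with a vertical cylinder centered on the host vehicle. It is a generic construction under assumed names, not the original implementation.

```python
import math

def ray_to_cylinder(origin_xyz, dir_xyz, radius_m):
    """Intersect a viewing ray from a camera with a vertical cylinder of the
    given radius centred on the host vehicle. Returns the 3-D point where
    the pixel's colour is painted, or None if the ray misses the cylinder."""
    ox, oy, oz = origin_xyz
    dx, dy, dz = dir_xyz
    # Solve |(ox + t*dx, oy + t*dy)| = radius_m for the outgoing root t > 0.
    a = dx * dx + dy * dy
    b = 2.0 * (ox * dx + oy * dy)
    c = ox * ox + oy * oy - radius_m * radius_m
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None  # ray is vertical or never reaches the cylinder wall
    t = (-b + math.sqrt(disc)) / (2.0 * a)
    if t <= 0.0:
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)
```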
The video displayed in the present embodiment includes the shadow image SH of the host vehicle V. The displayed video may also include the shadow images SH1 and SH2 of other vehicles VX as objects, and of course it may include both the shadow image SH of the host vehicle V and the shadow images SH1 and SH2 of the other vehicles VX.
An icon image V' representing the host vehicle V, prepared in advance (see FIGS. 3A, 3B, 4A, and 4B), may be superimposed on the video displayed by the display device 100 of the present embodiment. The vehicle icon image V' may be created beforehand from the design of the host vehicle V and stored. Superimposing the icon image V' of the host vehicle V on the video in this way shows the relationship between the position and orientation of the host vehicle V and the surrounding video in an easily understandable form.
The operation of the control device 10 of the present embodiment will now be described with reference to the flowchart of FIG. 6.
In step S101, the control device 10 acquires the images captured by the cameras 40. In step S102, the control device 10 acquires the current position of the host vehicle V and the positions of the objects, including the other vehicles VX.
In step S103, the control device 10 sets the position of the virtual light source. A single virtual light source may be set with reference to the host vehicle V, or multiple virtual light sources may be set, one for each of the host vehicle V and the other vehicles VX.
An example of the virtual light source setting process (step S103) will be described with reference to the flowchart of FIG. 7.
As shown in FIG. 7, in step S111, the control device acquires the acceleration of the host vehicle V. In step S112, the control device 10 determines from the acceleration whether the host vehicle V is accelerating or decelerating. If it is accelerating, the process proceeds to step S113 and the virtual light source is shifted rearward with respect to the traveling direction. If, on the other hand, the host vehicle V is determined in step S114 to be decelerating, the process proceeds to step S115 and the virtual light source is shifted forward in the traveling direction.
Returning to FIG. 6, in step S104 the control device 10 sets the projection surface. The projection surface may be a three-dimensional coordinate system as shown in FIGS. 3A, 3B, 4A, and 4B, or a two-dimensional plane like the projection plane SQ shown in FIG. 5. Multiple projection surfaces may also be set, as shown in FIG. 8 and described later.
In the subsequent step S105, the control device 10 generates the shadow image SH of the host vehicle V. When other vehicles VX are present around the host vehicle V, it also generates the shadow images SH1 and SH2 of the other vehicles VX1 and VX2.
In step S106, the control device 10 projects the captured images onto the set projection surface and generates the video for display.
Finally, in step S107, the control device 10 displays the generated video on the display 80.
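Steps S101 through S107 can be summarized as one update cycle, sketched here over a hypothetical controller object whose method names are assumptions rather than identifiers from the original:

```python
def display_update_cycle(ctrl):
    """One pass through the procedure of FIG. 6."""
    images = ctrl.capture_images()                        # S101
    own_pos, object_positions = ctrl.acquire_positions()  # S102
    accel = ctrl.acquire_acceleration()
    light = ctrl.set_virtual_light(accel)                 # S103 (FIG. 7)
    surface = ctrl.set_projection_surface()               # S104
    shadows = ctrl.generate_shadow_images(light, surface,
                                          own_pos, object_positions)  # S105
    video = ctrl.project_and_render(images, surface, shadows)         # S106
    ctrl.show(video)                                      # S107
```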
Another example of the display mode of the shadow image of the host vehicle V, the moving body, will now be described with reference to FIG. 8. The shadow image SH described here includes an image showing the movable range of a movable member of the host vehicle V. In the example shown in FIG. 8, the projection surfaces showing the shadow image SH of the host vehicle V comprise a first projection plane SQs along the vehicle-length direction of the host vehicle V and a second projection plane SQb along the vehicle-width direction. Onto the first projection plane SQs is projected a shadow image SHs imitating the shadow cast when light is emitted from a virtual light source LG set to the side of the vehicle. Onto the second projection plane SQb is projected a shadow image SHb imitating the shadow cast when light is emitted from a virtual light source LG set in front of the vehicle.
The shadow image SH of this example includes an image showing the movable range of a movable member of the host vehicle V. The host vehicle V of this example is a hatchback vehicle having side doors and a back door (hatch door). The side doors of the host vehicle V open and close to the sides, and the back door opens and closes to the rear. The side doors and back door of the host vehicle V are movable members of the host vehicle V, the moving body.
Assuming the case where an occupant opens the back door to load or unload luggage from the rear cargo area, the control device 10 generates a shadow image showing the movable range of the back door. As shown in FIG. 8, the shadow image SHs projected onto the first projection plane SQs includes a back door portion Vd3. The back door portion Vd3 represents the rearward extension (movable range) of the host vehicle V when the back door is opened. The shadow image showing the movable range of the back door is projected onto the left and right sides of the host vehicle V or onto the surface on which the host vehicle V rests (the parking surface or road surface). The shadow image may also be projected onto an actually existing wall or floor surface.
Assuming the case where the side doors are opened so that occupants can get in and out of their seats or luggage can be loaded and unloaded, the control device 10 generates a shadow image showing the movable range of the side doors. As shown in FIG. 8, the shadow image SHb projected onto the second projection plane SQb includes side door portions Vd1 and Vd2, which represent the sideways extension (movable range) of the side doors when they are opened. The shadow image showing the movable range of the side doors is projected in front of and/or behind the host vehicle V.
The control device 10 may also set a dedicated projection surface for a shadow image and project the shadow image onto it.
Generating and displaying a shadow image SH that includes an image showing the movable range of a movable member of the host vehicle V in this way allows the driver of the host vehicle V to decide on a parking position with the work to be done after parking in mind. Furthermore, if the captured images of the surrounding objects are superimposed together with the shadow image SH, the vehicle can be parked at a position that avoids the surrounding objects and does not hinder the work after parking.
In this example, the form of the shadow image SH is described for the case where the moving body is the host vehicle V, that is, where the doors of the host vehicle V are movable, but the form of the shadow image SH is not limited to this. For example, when the moving body is a forklift, the shadow image is generated taking into account the movable range of the forklift body and its lift equipment. When the moving body is a helicopter, the shadow image is generated taking into account the sweep of the helicopter body and its rotor blades. When the moving body is an airplane, the shadow image is generated taking into account the footprint of the airplane body and ancillary equipment such as boarding stairs. When the moving body is a seabed probe, the shadow image is generated taking into account the footprint of any platform provided as necessary. The generated shadow image is displayed on the display 80.
Incidentally, when the moving body is a forklift, displaying a shadow image showing the operating range of the lift equipment on, for example, the surrounding ground surface (such as the floor or walls of a facility like a warehouse or factory) makes it possible to confirm in advance whether the forklift can operate without interfering with other devices and equipment. Displaying a shadow image showing the sweep of a helicopter's rotor blades on, for example, the surrounding ground allows a landing site of sufficient size to be found from the air, and displaying it on the terrain around the helicopter (for example, a cliff face) allows the helicopter's attitude to be checked from the air. Displaying a shadow image showing the footprint of an airplane body and its ancillary equipment on the ground below the airplane allows a site large enough for an emergency landing to be found from the air.
Basic patterns for the shadow image SH may be prepared and stored in advance, and the control device 10 may read them from memory as necessary.
Since the present invention is configured and operates as described above, it provides the following effects.
[1] The display device 100 of the present embodiment generates a shadow image SH imitating the shadow produced when light strikes the host vehicle V, and displays video including the shadow image SH and the captured images of the surroundings of the host vehicle V. Because the shadow image SH of the host vehicle V can express the presence and orientation of the host vehicle V, video can be displayed in which the positional relationship between the host vehicle V and the surrounding objects is easy to grasp.
[2] The display device 100 of the present embodiment generates shadow images SH imitating the shadows produced when light strikes objects including other vehicles VX, and displays video including these shadow images and the captured images. Because the shadow images SH1 and SH2 of the other vehicles VX can express the positional relationship between the other vehicles VX and the host vehicle V, video can be displayed in which the positional relationship between the host vehicle V and surrounding objects such as the other vehicles VX is easy to grasp.
[3] The display device 100 of the present embodiment sets a projection plane SQ for the shadow images along the direction in which the host vehicle V moves, and generates a shadow image imitating the shadow cast on the projection plane SQ when light strikes another vehicle VX1 traveling in the adjacent lane Ln1 next to the travel lane Ln2 in which the host vehicle V travels (moves). Projecting the shadow image SH1 of the other vehicle VX1 (the object) onto the projection plane SQ shared with the shadow image SH of the host vehicle V makes it possible to present video in which the positional relationship between the host vehicle V and the other vehicle VX1 is easy to grasp. Setting the projection plane SQ along the traveling direction of the host vehicle V also makes the distance between the host vehicle V and the other vehicle VX1 easy to recognize.
[4] The display device 100 of the present embodiment generates the shadow images SH1 and SH2 of the objects such that the area of the shadow image of an object with a relatively high speed is larger than that of an object with a relatively low speed. Because the shadow image of a relatively fast object is displayed relatively large, video can be displayed that calls attention to fast-moving objects.
[5] The display device 100 of the present embodiment changes the size of the shadow image SH according to the attributes of the lanes in which the host vehicle V and the other vehicles VX travel, and can thereby draw the driver's attention to other vehicles VX traveling at high speed, just as when the size of the shadow image SH is controlled by the actual vehicle speed.
[6] When the host vehicle is accelerating, the display device 100 of the present embodiment shifts the position of the virtual light source LG rearward and thereby shifts the projection position of the shadow image SH forward, so that a shadow image suited to the situation the driver must judge can be displayed.
[7] When the host vehicle is decelerating, the display device 100 of the present embodiment shifts the position of the virtual light source LG forward and thereby shifts the projection position of the shadow image SH rearward, so that a shadow image suited to the situation the driver must judge can be displayed.
[8] The display device 100 of the present embodiment generates and displays a shadow image SH including an image showing the movable range of a movable member of the host vehicle V, so that the driver of the host vehicle V can decide on a parking position for the host vehicle V with the work to be done after parking in mind.
[9] The above effects are obtained by causing the display device 100 of the present embodiment to execute the display method of the present embodiment.
The embodiment described above has been set forth to facilitate understanding of the present invention, not to limit it. Each element disclosed in the above embodiment is therefore intended to encompass all design changes and equivalents falling within the technical scope of the present invention.
That is, although this specification describes, as an example, the display system 1 including the display device 100 as one aspect of the display device according to the present invention, the present invention is not limited to this.
Likewise, although this specification describes, as one aspect of the display device according to the present invention, the display device 100 provided with the control device 10 including the CPU 11, the ROM 12, and the RAM 13, the present invention is not limited to this.
Furthermore, although this specification describes, as one aspect of a display device having the image acquisition means, information acquisition means, shadow image generation means, and display control means according to the present invention, the display device 100 provided with the control device 10 that executes the image acquisition function, information acquisition function, shadow image generation function, and display control function, the present invention is not limited to this.
DESCRIPTION OF SYMBOLS
1 … Display system
100 … Display device
 10 … Control device
  11 … CPU
  12 … ROM
  13 … RAM
200 … Mobile device
 40 … Camera
  41 … Distance measuring device
 50 … Controller
 60 … Sensor
  61 … Speed sensor
  62 … Longitudinal acceleration sensor
 70 … Navigation device
  71 … Position detection device
   711 … GPS
  72 … Map information
  73 … Road information
 80 … Display

Claims (9)

1. A display device comprising:
   image acquisition means for acquiring captured images taken by a camera mounted on a moving body;
   information acquisition means for acquiring position information of the moving body;
   shadow image generation means for setting a virtual light source according to the position information of the moving body and generating a shadow image imitating a shadow produced when light is cast on the moving body from the virtual light source;
   display control means for generating video including the shadow image of the moving body and the captured images, and causing the video to be displayed; and
   display means for displaying the video generated by the display control means.
  2.  The display device according to claim 1, wherein:
      the information acquisition means acquires position information of an object present around the moving body;
      the shadow image generation means sets a virtual light source according to the acquired position information of the object and generates a shadow image imitating a shadow produced when light is cast on the object from the virtual light source; and
      the display control means generates a video including the shadow image of the object and the captured image, and causes the video to be displayed.
  3.  The display device according to claim 2, wherein the shadow image generation means sets a projection surface onto which the shadow image is projected along the direction in which the moving body moves, and generates a shadow image imitating a shadow produced on the projection surface when light is cast on the object moving in an adjacent lane next to the lane in which the moving body moves.
  4.  The display device according to claim 2 or 3, wherein:
      the information acquisition means acquires the speed of the object; and
      the shadow image generation means generates the shadow image such that the area of the shadow image of an object whose acquired speed is relatively high is larger than the area of the shadow image of an object whose acquired speed is relatively low.
  5.  The display device according to any one of claims 2 to 4, wherein:
      the information acquisition means acquires attribute information of the lane in which the object travels; and
      the shadow image generation means, when it has acquired attribute information indicating that the lane in which the object travels is an overtaking lane, generates the shadow image such that the area of the shadow image of the object traveling in the overtaking lane is larger than when it has acquired attribute information indicating that the lane in which the object travels is not an overtaking lane.
  6.  The display device according to any one of claims 1 to 5, wherein:
      the information acquisition means acquires acceleration in the traveling direction of the moving body; and
      the shadow image generation means, when it determines from the acquired acceleration that the moving body is accelerating, shifts the position of the virtual light source to the side opposite to the traveling direction of the moving body.
  7.  The display device according to any one of claims 1 to 6, wherein:
      the information acquisition means acquires acceleration in the traveling direction of the moving body; and
      the shadow image generation means, when it determines from the acquired acceleration that the moving body is decelerating, shifts the position of the virtual light source toward the traveling direction of the moving body.
  8.  The display device according to any one of claims 1 to 7, wherein the shadow image of the moving body includes an image indicating the movable range of a movable member of the moving body.
  9.  A display method executed by a display device used for a moving body, the display device having image acquisition means, information acquisition means, shadow image generation means, display control means, and display means, wherein:
      the image acquisition means executes a step of acquiring a captured image captured by a camera mounted on the moving body;
      the information acquisition means executes a step of acquiring position information of the moving body;
      the shadow image generation means executes a step of setting a virtual light source according to the acquired position information of the moving body and generating a video including the captured image and a shadow image imitating a shadow produced when light is cast on the moving body from the virtual light source; and
      the display control means executes a step of causing the display means to display the video.
PCT/JP2014/080177 2014-11-14 2014-11-14 Display device and display method WO2016075810A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2014/080177 WO2016075810A1 (en) 2014-11-14 2014-11-14 Display device and display method
JP2016558896A JP6500909B2 (en) 2014-11-14 2015-02-27 Display device and display method
PCT/JP2015/055957 WO2016075954A1 (en) 2014-11-14 2015-02-27 Display device and display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/080177 WO2016075810A1 (en) 2014-11-14 2014-11-14 Display device and display method

Publications (1)

Publication Number Publication Date
WO2016075810A1 true WO2016075810A1 (en) 2016-05-19

Family

ID=55953923

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/JP2014/080177 WO2016075810A1 (en) 2014-11-14 2014-11-14 Display device and display method
PCT/JP2015/055957 WO2016075954A1 (en) 2014-11-14 2015-02-27 Display device and display method

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/055957 WO2016075954A1 (en) 2014-11-14 2015-02-27 Display device and display method

Country Status (2)

Country Link
JP (1) JP6500909B2 (en)
WO (2) WO2016075810A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019087909A (en) * 2017-11-08 2019-06-06 クラリオン株式会社 Image display apparatus and image display system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851193A * 2016-12-22 2017-06-13 安徽保腾网络科技有限公司 Novel device for photographing the chassis of an accident vehicle
JP7020550B2 (en) 2018-06-20 2022-02-18 日産自動車株式会社 Communication method, vehicle allocation system and communication device for vehicle allocation system
JP2022142515A (en) * 2021-03-16 2022-09-30 本田技研工業株式会社 Information processing device for estimating moving amount of moving body, information processing method, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10176931A (en) * 1996-12-18 1998-06-30 Nissan Motor Co Ltd Navigator for vehicle
WO2007083494A1 (en) * 2006-01-17 2007-07-26 Nec Corporation Graphic recognition device, graphic recognition method, and graphic recognition program
JP2009230225A (en) * 2008-03-19 2009-10-08 Mazda Motor Corp Periphery monitoring device for vehicle
JP2012023658A (en) * 2010-07-16 2012-02-02 Toshiba Alpine Automotive Technology Corp Image display device for vehicle

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007118762A (en) * 2005-10-27 2007-05-17 Aisin Seiki Co Ltd Circumference monitoring system
JP5058491B2 (en) * 2006-02-09 2012-10-24 日産自動車株式会社 VEHICLE DISPLAY DEVICE AND VEHICLE VIDEO DISPLAY CONTROL METHOD
JP2007282098A (en) * 2006-04-11 2007-10-25 Denso Corp Image processing apparatus and image processing program
JP5271186B2 (en) * 2009-07-28 2013-08-21 東芝アルパイン・オートモティブテクノロジー株式会社 Image display device for vehicle


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019087909A (en) * 2017-11-08 2019-06-06 クラリオン株式会社 Image display apparatus and image display system
JP7028609B2 (en) 2017-11-08 2022-03-02 フォルシアクラリオン・エレクトロニクス株式会社 Image display device and image display system

Also Published As

Publication number Publication date
WO2016075954A1 (en) 2016-05-19
JP6500909B2 (en) 2019-04-17
JPWO2016075954A1 (en) 2017-09-28

Similar Documents

Publication Publication Date Title
US9910442B2 (en) Occluded area detection with static obstacle maps
JP6630976B2 (en) Display system, display method, and program
JP7086798B2 (en) Vehicle control devices, vehicle control methods, and programs
US10684133B2 (en) Route generator, route generation method, and route generation program
US10179588B2 (en) Autonomous vehicle control system
US10908604B2 (en) Remote operation of vehicles in close-quarter environments
JP2019182305A (en) Vehicle control device, vehicle control method, and program
CN105807763A (en) Vehicle system
JP7374098B2 (en) Information processing device, information processing method, computer program, information processing system, and mobile device
US11812197B2 (en) Information processing device, information processing method, and moving body
JP2020163907A (en) Vehicle control device, vehicle control method, and program
WO2016075810A1 (en) Display device and display method
JP6380550B2 (en) Display device and display method
JPWO2019092846A1 (en) Display system, display method, and program
JP2020052559A (en) Vehicle control device, vehicle control method, and program
CA3069108A1 (en) Parking assistance method and parking assistance device
JP2022510450A (en) User assistance methods for remote control of automobiles, computer program products, remote control devices and driving assistance systems for automobiles
CN110271487A (en) Vehicle display with augmented reality
JP2022008854A (en) Control unit
CN112061138A (en) Vehicle eccentricity map
DE102016103859A1 (en) Vehicle motion control device, vehicle motion control program and vehicle
JP7449751B2 (en) Vehicle control device, vehicle control method, and program
CN117784768A (en) Vehicle obstacle avoidance planning method, device, computer equipment and storage medium
CN112987053A (en) Method and apparatus for monitoring yaw sensor
JP6731071B2 (en) Vehicle control device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14905881

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14905881

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP