WO2012164704A1 - Head-up display, image display method for head-up display, and image display program - Google Patents

Head-up display, image display method for head-up display, and image display program

Info

Publication number
WO2012164704A1
WO2012164704A1 (PCT/JP2011/062612)
Authority
WO
WIPO (PCT)
Prior art keywords
display
image
area
virtual image
head
Prior art date
Application number
PCT/JP2011/062612
Other languages
English (en)
Japanese (ja)
Inventor
義弘 橋塚
和茂 川名
英昭 鶴見
智陽 丹野
峻行 豊田
敬介 栃原
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 filed Critical パイオニア株式会社
Priority to PCT/JP2011/062612 priority Critical patent/WO2012164704A1/fr
Publication of WO2012164704A1 publication Critical patent/WO2012164704A1/fr

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/167 Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00 Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/20 Optical features of instruments
    • B60K2360/33 Illumination features
    • B60K2360/334 Projection means
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/0118 Head-up displays characterised by optical features comprising devices for improving the contrast of the display / brillance control visibility
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems

Definitions

  • The present invention relates to an image display device such as a head-up display (HUD).
  • Examples of the problem to be solved by the present invention include the above. It is an object of the present invention to provide a head-up display, an image display method for a head-up display, and an image display program capable of improving the visibility of an image while ensuring the user's field of view from the viewpoint of safety.
  • The head-up display includes a light source that projects light onto a display means in order to display a virtual image; a recognition means for recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image so that both the field of view required for safety and the visibility of the virtual image are ensured when observing from a predetermined position through the display means; and a control means for performing control to change the display position and/or size of the virtual image based on the predetermined area recognized by the recognition means.
  • The image display method is executed by a head-up display having a light source that projects light onto a display means in order to display a virtual image. The method includes a recognition step of recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image so that both the field of view required for safety and the visibility of the virtual image are ensured when observing from a predetermined position through the display means, and a control step of performing control to change the display position and/or size of the virtual image based on the predetermined area recognized in the recognition step.
  • The image display program is executed by a head-up display that includes a computer and has a light source that projects light onto a display means in order to display a virtual image. The program causes the computer to function as a recognition means for recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image so that both the field of view required for safety and the visibility of the virtual image are ensured when observing through the display means, and as a control means for performing control to change the display position and/or size of the virtual image based on the predetermined area recognized by the recognition means.
  • The head-up display includes the light source that projects light onto the display means to display a virtual image; a recognition means for recognizing, as a blind spot area, the area on the display means where the driver's field of view is blocked by a part of the moving body when the outside of the moving body is observed through the display means from a predetermined position corresponding to the viewpoint of the driver of the moving body; and a control means for performing control to change the display position or size of the virtual image so that the virtual image is recognized when the outside of the moving body is observed through the blind spot area.
  • The image display method of the head-up display includes a projection step of projecting light from the light source onto the display means to display a virtual image; a recognition step of recognizing, as a blind spot area, the area on the display means where the driver's field of view is blocked by a part of the moving body when the outside of the moving body is observed through the display means from a predetermined position corresponding to the viewpoint of the driver of the moving body; and a control step of performing control to change the display position or size of the virtual image so that the virtual image is recognized when the driver observes the outside of the moving body through the blind spot area.
  • The image display device includes a light source that projects light onto the display means in order to display a virtual image; a recognition means for recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image so that both the field of view required for safety and the visibility of the virtual image are ensured when observing from a predetermined position through the display means; and a control means for performing control to change the display position and/or size of the virtual image based on the predetermined area recognized by the recognition means.
  • FIG. 1 shows an overall configuration of a head-up display (HUD) according to the present embodiment.
  • FIG. 2 is a diagram for explaining the basic concept of the control method according to the present embodiment.
  • FIG. 3 shows an example of a blind spot area extracted by the blind spot area extraction unit.
  • FIG. 4 is a diagram for explaining the criteria that define the safe visual field areas.
  • FIG. 5 shows examples of the safe visual field areas extracted by the safe visual field area extraction unit.
  • FIG. 6 is a diagram for explaining which arrangements of an image with respect to the safe visual field area are allowed.
  • FIG. 7 is a diagram for explaining the image display area determination method concretely.
  • FIG. 8 shows the processing flow according to the present embodiment.
  • The head-up display includes a light source that projects light onto the display means to display a virtual image; a recognition means for recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image such that both the field of view required for safety and the visibility of the virtual image are ensured when observing from a predetermined position through the display means; and a control means for performing control to change the display position and/or size of the virtual image based on the predetermined area recognized by the recognition means.
  • The above-described head-up display has a light source that projects light onto a display means in order to display a virtual image.
  • The recognition means recognizes a predetermined area on the display means for determining the display position and/or size of the virtual image that can ensure both the field of view required for safety and the visibility of the virtual image when observing from the predetermined position through the display means.
  • The control means performs control to change the display position and/or size of the virtual image based on the predetermined area recognized by the recognition means. By doing so, it is possible to improve the visibility of the image while appropriately ensuring the user's field of view from the viewpoint of safety.
  • The recognition means recognizes, as the predetermined area, a blind spot area, which is an area on the display means in which a blind spot is formed when observing from the predetermined position through the display means. This is because, when the image is displayed in the blind spot area, the field of view required for safety can be appropriately secured.
  • The control means can change the display position and/or size so that the virtual image is displayed at a position that can be observed from the predetermined position through the blind spot area. For example, the virtual image is displayed so as to fit within the blind spot area.
  • The apparatus has an attaching means for mounting on a moving body, and the blind spot area is an area on the display means where observation is blocked by a part of the moving body when the outside of the moving body is observed from the predetermined position via the display means.
  • The recognition means recognizes the blind spot area formed when observing, from the observation position, the side opposite to the observation position with respect to the display means, i.e., in the projection direction of the light projected from the light source.
  • The recognition means recognizes, as the predetermined area, a safe visual field area, which is an area on the display means in which the field of view should be secured from the viewpoint of safety when observing from the predetermined position through the display means, and the control means changes the display position and/or size so that the virtual image does not straddle both the upper end and the lower end of the safe visual field area.
  • The recognition means obtains, from a camera, a captured image obtained by photographing the outside of the moving body on which the head-up display is mounted from the position of the driver's eyes, and recognizes the predetermined area based on the captured image.
  • The recognition means recognizes, as the predetermined area, the area that becomes a blind spot due to a part of the moving body when the outside of the moving body is observed through the display means from the position of the driver's eyes.
  • the recognition means recognizes the predetermined area based on information indicating characteristics of the moving body and the captured image.
  • For example, the exterior color, interior color, overall length, overall width, overall height, etc. of the moving body can be used as information indicating the characteristics of the moving body.
  • The head-up display further includes a display control means that displays a predetermined virtual image when the control means performs the control, and the recognition means extracts, from the captured image, the portion corresponding to the predetermined area and the portion corresponding to the predetermined virtual image.
  • The control means performs control to change the display position and/or size based on the relative positional relationship between the portion of the predetermined area and the portion of the predetermined virtual image extracted by the recognition means. Thereby, the optimal display position and/or size can be determined appropriately using the above-described predetermined area.
  • The display control means can display the predetermined virtual image with a color and/or luminance corresponding to the exterior color and interior color of the moving body. Thereby, the predetermined virtual image can be extracted with high accuracy.
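  • Purely as an illustration of what "a color corresponding to the exterior and interior colors" could mean in practice (the complement rule and the function name below are assumptions, not the claimed method), a Python sketch might be:

```python
def complementary_bgr(vehicle_color_bgr):
    """Pick a display color far from a given vehicle color (e.g. the dashboard
    color) by taking its per-channel complement, so the displayed virtual image
    is easy to separate from the vehicle body in the captured image."""
    b, g, r = vehicle_color_bgr
    return (255 - b, 255 - g, 255 - r)
```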
  • the blind spot area is an area where the outside observation is hindered when observing the outside of the moving body on which the head-up display is mounted from the predetermined position.
  • the blind spot area is an area where the external observation is hindered by a part of the body of the moving body.
  • The image display method is executed by a head-up display having a light source that projects light onto a display means in order to display a virtual image, and includes a recognition step of recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image so that both the field of view required for safety and the visibility of the virtual image are ensured when observing from a predetermined position through the display means, and a control step of performing control to change the display position and/or size of the virtual image based on the recognized predetermined area.
  • The image display program is executed by a head-up display having a computer and a light source that projects light onto a display means in order to display a virtual image. The program causes the computer to function as a recognition means for recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image so that both the field of view required for safety and the visibility of the virtual image are ensured when observing through the display means, and as a control means for performing control to change the display position and/or size of the virtual image based on the predetermined area recognized by the recognition means.
  • the above-described image display method and image display program of the head-up display can also improve the visibility of the image while ensuring the user's field of view from the viewpoint of safety.
  • the image display program can be suitably handled in a state of being recorded on a recording medium.
  • The head-up display includes a light source that projects light onto the display means to display a virtual image; a recognition means for recognizing, as a blind spot area, the area on the display means where the driver's field of view is blocked by a part of the moving body when the outside of the moving body is observed through the display means from a predetermined position corresponding to the viewpoint of the driver of the moving body; and a control means for performing control to change the display position or size of the virtual image so that the virtual image is recognized when the outside of the moving body is observed through the blind spot area.
  • The image display method for a head-up display includes a projection step of projecting light from a light source onto a display means to display a virtual image; a recognition step of recognizing, as a blind spot area, the area on the display means where the driver's field of view is blocked by a part of the moving body when the outside of the moving body is observed through the display means from a predetermined position corresponding to the viewpoint of the driver of the moving body; and a control step of performing control to change the display position or size of the virtual image so that the virtual image is recognized when the driver observes the outside of the moving body through the blind spot area.
  • The image display device includes a light source that projects light onto the display means to display a virtual image; a recognition means for recognizing a predetermined area on the display means for determining the display position and/or size of the virtual image so that both the field of view required for safety and the visibility of the virtual image are ensured when observing from a predetermined position through the display means; and a control means for performing control to change the display position and/or size of the virtual image based on the predetermined area recognized by the recognition means.
  • FIG. 1 is a block diagram illustrating a schematic configuration of a head-up display (HUD) 1 according to the present embodiment.
  • The HUD 1 mainly includes a camera 11, an image region extraction unit 12, a blind spot region extraction unit 13, a safe visual field region extraction unit 14, an image display region determination unit 15, a light source control unit 16, an image control unit 17, a light source drive driver 18, a light source 19, and drive motors 20 and 21.
  • The HUD 1 is mounted on a moving body such as a vehicle by an attaching means (not shown), and projects light from the light source 19 toward the windshield so that an image is visually recognized as a virtual image from the user's eye position (eye point).
  • Hereinafter, the term "image" is used for an image formed as a virtual image.
  • the camera 11 is provided at a location corresponding to the position of the driver's eyes, and images the outside of the vehicle from the position of the driver's eyes.
  • the camera 11 supplies an image signal S11 corresponding to the photographed image to the image region extraction unit 12, the blind spot region extraction unit 13, and the safe visual field region extraction unit 14.
  • Based on the image signal S11 supplied from the camera 11, the image region extraction unit 12 extracts a portion (hereinafter referred to as "image region") corresponding to a predetermined image displayed by the HUD 1 in the captured image.
  • This predetermined image is an image used for adjusting the display position and size of the image displayed by the HUD 1 and is hereinafter referred to as an “adjustment image”.
  • Specifically, the adjustment image is displayed in a predetermined color (for example, a bonnet color or a complementary color of the dashboard color), and the image region extraction unit 12 extracts, as the image region, the area displayed in the predetermined color in the captured image. Then, the image region extraction unit 12 supplies a signal S12 indicating the extracted image region to the image display region determination unit 15.
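  • As an illustration only (not part of the embodiment), this color-based extraction can be sketched as a simple per-channel threshold; the function name, the BGR layout, and the tolerance value are assumptions.

```python
import numpy as np

def extract_image_region(captured_bgr, adjustment_color_bgr, tolerance=30):
    """Return a boolean mask of pixels whose color is close to the color used
    for the adjustment image (the region that signal S12 would describe)."""
    diff = np.abs(captured_bgr.astype(np.int16)
                  - np.array(adjustment_color_bgr, dtype=np.int16))
    return np.all(diff <= tolerance, axis=2)  # True where the image region appears
```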
  • Based on the image signal S11 supplied from the camera 11, the blind spot region extraction unit 13 extracts, from the captured image, the portion corresponding to the region in which a blind spot is formed by a part of the vehicle on which the HUD 1 is mounted (that is, the blind spot region).
  • the term “blind spot area” is used for both the area that becomes a blind spot when the driver observes the outside of the vehicle and the part corresponding to the area in the captured image.
  • the blind spot area corresponds to an area where external observation is hindered when the driver observes the outside of the vehicle through the windshield.
  • the blind spot area is an area on the windshield where external observation is hindered by a part of the vehicle body.
  • the blind spot area extraction unit 13 acquires information indicating the characteristics of the vehicle (for example, the exterior color or interior color of the vehicle) and extracts a blind spot area in the captured image based on the information. Then, the blind spot area extraction unit 13 supplies the image display area determination unit 15 with a signal S13 indicating the extracted blind spot area.
  • Based on the image signal S11 supplied from the camera 11, the safe visual field region extraction unit 14 extracts, from the region of the captured image in which the HUD 1 can display an image, the portion corresponding to the region in which the field of view should be secured from a safety standpoint when the driver observes the outside of the vehicle (that is, the safe visual field region).
  • the term “safety field of view” is used for both a region where the driver should secure the field of view from the viewpoint of safety when observing the outside of the vehicle and a part corresponding to the region in the captured image.
  • the safety view area corresponds to an area on the windshield where the driver should ensure visibility from the viewpoint of safety when the driver observes the outside of the vehicle through the windshield. Such a safe visual field is determined by the provisions of the Road Traffic Law.
  • Specifically, the safe visual field region extraction unit 14 acquires information indicating the characteristics of the vehicle (for example, overall length, overall width, overall height, etc.) and information indicating the installed position of the camera 11, and extracts the safe visual field region in the captured image based on this information. Then, the safe visual field region extraction unit 14 supplies the image display region determination unit 15 with a signal S14 indicating the extracted safe visual field region.
  • The image display region determination unit 15 determines the region in which an image is displayed by the HUD 1, based on the signals S12, S13, and S14 supplied from the image region extraction unit 12, the blind spot region extraction unit 13, and the safe visual field region extraction unit 14.
  • Specifically, the image display region determination unit 15 determines the display position and size of the image displayed by the HUD 1 based on the relative relationship between the image region extracted by the image region extraction unit 12 and the blind spot region and safe visual field region extracted by the blind spot region extraction unit 13 and the safe visual field region extraction unit 14, respectively. More specifically, the image display region determination unit 15 determines the optimal image display position and size by using the blind spot region and the safe visual field region to the maximum extent while observing the restrictions defined by the safe visual field region. Then, the image display region determination unit 15 supplies a signal S15 indicating the determined display position and size of the image to the light source control unit 16.
  • the light source control unit 16 supplies control signals S16a and S16b for controlling the image control unit 17 and the light source drive driver 18 based on the signal S15 supplied from the image display region determination unit 15, respectively.
  • For example, the light source control unit 16 supplies the control signal S16b to the light source drive driver 18 so that the light source drive driver 18 controls the drive motors 20 and 21 and the display position and size of the image determined by the image display region determination unit 15 are realized.
  • the image control unit 17 supplies an image signal S17 corresponding to RGB, luminance, and the like to the light source 19 based on an image signal supplied from the outside (TV, Internet, navigation device, DVD player, or the like).
  • the image control unit 17 supplies the light source 19 with an image signal S17 for the adjustment image in order to display the adjustment image based on the control signal S16a supplied from the light source control unit 16.
  • The light source 19 has red, green, and blue laser light sources, a scanning mechanism that scans the laser light emitted from the laser light sources, and the like, and projects the laser light (hereinafter also simply referred to as "light") toward the windshield. In this case, the light source 19 projects the light toward the windshield so that an image (virtual image) corresponding to the image signal S17 supplied from the image control unit 17 is displayed.
  • the light source 19 is not limited to using a device that projects laser light.
  • Based on the control signal S16b supplied from the light source control unit 16, the light source drive driver 18 supplies control signals S18a and S18b to the drive motors 20 and 21 in order to move the light source 19 in the direction indicated by the arrow A1 (corresponding to the front-rear direction of the driver) and in the direction indicated by the arrow A2 (corresponding to the left-right direction of the driver).
  • the light source drive driver 18 controls the drive motors 20 and 21 so that the display position and size of the image determined by the image display area determination unit 15 are realized.
  • the drive motors 20 and 21 move the light source 19 in the direction indicated by the arrow A1 and the direction indicated by the arrow A2 based on the control signals S18a and S18b supplied from the light source drive driver 18, respectively.
  • the drive motor 20 moves the light source 19 in the front-rear direction of the driver, thereby enlarging or reducing the size of the image to be displayed.
  • the drive motor 21 moves the display position of the image to the left or right by moving the light source 19 in the left-right direction of the driver.
  • The image region extraction unit 12, the blind spot region extraction unit 13, and the safe visual field region extraction unit 14 correspond to an example of the "recognition means" in the present invention, and the image display region determination unit 15 and the light source control unit 16 correspond to an example of the "control means" in the present invention.
  • FIG. 2 schematically shows the HUD 1 mounted on the vehicle (only the light source 19 is shown).
  • an example of an image that is displayed when the light source 19 is moved in the front-rear direction as indicated by an arrow B4 will be described.
  • the arrow B1 shows an example of an image located in the blind spot area (in this example, the blind spot area is not used to the maximum extent).
  • In this case, since the image is located in the blind spot area, it can be said that there is no problem in terms of safety. However, since the size of the image is relatively small, it can be said that the visibility of the image is not so good.
  • the arrow B2 shows an example of an image extending in the vertical direction so as to straddle both the upper end and the lower end of the above-described safety view area. In this case, since the size of the image is relatively large, it can be said that the visibility of the image is good. However, it can be said that there is a problem in terms of safety because the safe visual field is largely blocked by the image.
  • the HUD 1 determines the optimal image display position and size using the blind spot area and the safety view area to the maximum while complying with the restrictions defined by the safety view area.
  • In other words, the display position and size of the image are automatically adjusted. For example, the maximum size at which an image can be displayed in the blind spot area and the safe visual field area while satisfying the condition that the image does not straddle both the upper end and the lower end of the safe visual field area is determined. Thereby, for example, an image as indicated by an arrow B3 is displayed. In this case, the image is displayed with a display position and a size that do not prevent the driver from observing the front end of the vehicle.
  • the control method according to the present embodiment will be specifically described.
  • the flow of the control method according to the present embodiment will be briefly described.
  • the camera 11 captures an image
  • the blind spot area extraction unit 13 extracts a blind spot area in the captured image
  • the safe view area extraction unit 14 extracts a safe view area in the captured image.
  • the image control unit 17 and the light source 19 display the adjustment image
  • the image region extraction unit 12 extracts an image region in the captured image.
  • Then, the image display region determination unit 15 compares the image region extracted by the image region extraction unit 12 with the blind spot region and the safe visual field region extracted by the blind spot region extraction unit 13 and the safe visual field region extraction unit 14, respectively, and determines the display position and size of the image displayed by the HUD 1 based on the positional relationship.
  • Hereinafter, the blind spot area extraction method, the safe visual field area extraction method, and the image display area determination method will be described in detail.
  • FIG. 3 shows an example of a blind spot area extracted by the blind spot area extraction unit 13.
  • FIG. 3 is a diagram corresponding to an image captured by the camera 11.
  • The blind spot area C1 is an area in which external observation is hindered when the driver observes the outside of the vehicle, and corresponds to the dashboard, the A pillars, the bonnet, the ceiling, and the like.
  • the blind spot area extraction unit 13 acquires information indicating the characteristics of the vehicle, and extracts a blind spot area C1 in the captured image based on the information. For example, the blind spot area extraction unit 13 acquires information on the exterior color and the interior color of the vehicle, and extracts a portion having the exterior color and the interior color in the captured image as the blind spot area C1.
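  • For illustration, and under the same assumptions as the earlier sketch (hypothetical names, a naive per-channel color match), extracting the blind spot area C1 from the exterior and interior colors could look like this:

```python
import numpy as np

def extract_blind_spot_area(captured_bgr, vehicle_colors_bgr, tolerance=35):
    """Mark pixels matching any known vehicle color (e.g. bonnet, dashboard,
    A-pillar trim) as the blind spot area C1; the color list is an assumed
    stand-in for the information indicating the characteristics of the vehicle."""
    img = captured_bgr.astype(np.int16)
    blind_spot = np.zeros(captured_bgr.shape[:2], dtype=bool)
    for color in vehicle_colors_bgr:
        diff = np.abs(img - np.array(color, dtype=np.int16))
        blind_spot |= np.all(diff <= tolerance, axis=2)
    return blind_spot
```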
  • There are two safe visual field areas, a first safe visual field area and a second safe visual field area, each defined by a standard of the Road Traffic Law that prescribes the safe field of view.
  • FIG. 4 (a) shows a diagram for explaining the criteria for defining the first safety field of view.
  • the first safety view area is defined based on a standard that the driver can directly see the pole (height 1 m, diameter 0.3 m) disposed in the hatching area D1 covering the front and left sides of the vehicle.
  • the hatching area D1a is a part corresponding to the A pillar, and this area is excluded from the application of the standard.
  • FIG. 4B is a diagram for explaining a criterion for defining the second safety view area.
  • the second safety view area is defined by a standard such that the driver can directly view a pole (height 1 m, diameter 0.3 m) disposed in a hatching area D2 located 2 m ahead of the vehicle.
  • FIG. 5 shows an example of the first and second safe view areas extracted by the safe view area extracting unit 14. Specifically, FIG. 5 (a) shows an example of the first safety field of view E1, and FIG. 5 (b) shows an example of the second safety field of view E2.
  • FIG. 5 is a diagram corresponding to an image captured by the camera 11.
  • the safety view area extraction unit 14 acquires information indicating the characteristics of the vehicle and information indicating the set position of the camera 11, and based on these information, the first and second safety view areas E1 and E2 in the captured image. To extract. Specifically, the safety view area extraction unit 14 performs a predetermined calculation based on the above-described road traffic law standard (see FIG. 4), information indicating the characteristics of the vehicle, and information indicating the set position of the camera 11. To obtain the first and second safe visual field areas E1 and E2 in the captured image. For example, the safety view area extraction unit 14 obtains the first and second safety view areas E1 and E2 by using the total length, the total width, and the total height of the vehicle as information indicating the characteristics of the vehicle.
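  • The "predetermined calculation" is not spelled out here; as a rough, hypothetical sketch of one way such an area could be obtained, the hatched ground region of FIG. 4 can be projected into the camera image with an ideal pinhole model. The intrinsic matrix K, the camera position, the axis convention, and the flat-ground assumption are placeholders, not the embodiment's actual calibration.

```python
import numpy as np

def project_ground_region(ground_points_xyz, camera_pos_xyz, K):
    """Project points of the hatched ground region (vehicle coordinates, meters,
    x forward / y left / z up) into pixel coordinates; the hull of the projected
    pole positions approximates a safe visual field area in the captured image."""
    pts = np.asarray(ground_points_xyz, float) - np.asarray(camera_pos_xyz, float)
    # re-express in camera axes: x right, y down, z forward (optical axis)
    cam = np.stack([-pts[:, 1], -pts[:, 2], pts[:, 0]], axis=1)
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]  # perspective divide -> (u, v) per ground point
```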
  • the above-mentioned standards of the Road Traffic Law prohibit the display of images extending in the vertical direction so as to straddle both the upper and lower ends of the first and second safety view areas E1, E2. Displaying an image in the first and second safety view areas E1 and E2 is not prohibited. In other words, even if the image is located in the first and second safety view areas E1 and E2, if the image does not straddle both the upper end and the lower end of the first and second safety view areas E1 and E2, It is permissible to display such an image.
  • the first safety view area E1 will be described as a representative, but the contents thereof are similarly applied to the second safety view area E2.
  • As shown in FIG. 6A, it is prohibited to display the image F so that it straddles both the upper end and the lower end of the first safe visual field area E1.
  • On the other hand, as shown in FIG. 6B, even if the image F is located within the first safe visual field area E1, displaying such an image F is allowed as long as the image F does not straddle both the upper end and the lower end of the first safe visual field area E1.
  • Further, as shown in FIGS. 6C and 6D, it is allowed to display an image F that straddles only one of the upper end and the lower end of the first safe visual field area E1.
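  • The straddling test itself is trivial; a minimal sketch (pixel rows, smaller values higher in the image, names hypothetical) is:

```python
def straddles_both_ends(image_top, image_bottom, area_top, area_bottom):
    """True for the prohibited arrangement of FIG. 6A: the image F extends from
    at or above the upper end of the safe visual field area to at or below its
    lower end. The arrangements of FIGS. 6B-6D return False."""
    return image_top <= area_top and image_bottom >= area_bottom
```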
  • FIG. 7 shows a diagram corresponding to an image captured by the camera 11.
  • FIG. 7A shows the blind spot area C1 extracted by the blind spot area extraction unit 13.
  • the blind spot area C1 is the same as that shown in FIG.
  • FIG. 7B shows a diagram in which the first safe visual field area E1 extracted by the safe visual field area extraction unit 14 is superimposed on the blind spot area C1.
  • the first safety view area E1 is the same as that shown in FIG.
  • FIG. 7C shows a diagram in which the second safe visual field area E2 extracted by the safe visual field area extraction unit 14 is superimposed on the blind spot area C1 and the first safe visual field area E1.
  • the second safety view area E2 is the same as that shown in FIG.
  • the image control unit 17 and the light source 19 display the adjustment image, and the camera 11 performs imaging again.
  • Then, the image region extraction unit 12 extracts the image region corresponding to the adjustment image in the captured image. Specifically, the adjustment image is displayed in a predetermined color (for example, a bonnet color or a complementary color of the dashboard color), and the image region extraction unit 12 extracts, as the image region, the area displayed in the predetermined color in the captured image.
  • FIG. 7D shows a diagram in which the image area G1 extracted by the image area extraction unit 12 is superimposed on the blind spot area C1, the first and second safety view areas E1, E2.
  • The image display region determination unit 15 determines the display position and size of the image to be displayed by the HUD 1 based on the image region G1, the blind spot region C1, and the first and second safe visual field areas E1 and E2 thus extracted. Specifically, based on the relative positional relationship between the image region G1, the blind spot region C1, and the first and second safe visual field areas E1 and E2, the image display region determination unit 15 determines the maximum image size at which an image can be displayed within the blind spot region C1 and the first and second safe visual field areas E1 and E2 while satisfying the condition that the image does not straddle both the upper end and the lower end of the first and second safe visual field areas E1 and E2.
  • In one example, the adjustment image is gradually enlarged and the image region G1 is sequentially extracted from the captured image, and the image display region determination unit 15 determines the size of the image to be displayed by the HUD 1 based on the image region G1 thus extracted. In this example, the enlargement of the adjustment image is stopped when the image region G1 comes to straddle both the upper end and the lower end of the first safe visual field area E1 or the second safe visual field area E2.
  • Then, the image display region determination unit 15 determines the size of the adjustment image that was set immediately before the image region G1 straddled both the upper end and the lower end of the first safe visual field area E1 or the second safe visual field area E2 as the size of the image to be displayed by the HUD 1.
  • According to the method described above, the optimal image display position and size can be determined by using the blind spot area and the safe visual field area to the maximum extent while properly complying with the restrictions defined by the safe visual field area (in other words, the standards of the Road Traffic Law). Therefore, by displaying the image determined in this manner, it is possible to improve the visibility of the image while appropriately securing the driver's field of view from the viewpoint of safety.
  • In step S101, the image region extraction unit 12, the blind spot region extraction unit 13, and the safe visual field region extraction unit 14 obtain a captured image generated by photographing with the camera 11. Then, the process proceeds to step S102.
  • In step S102, the blind spot region extraction unit 13 extracts the blind spot region C1 in the captured image. Specifically, the blind spot region extraction unit 13 acquires information on the exterior color and the interior color of the vehicle, and extracts the portion having the exterior color or the interior color in the captured image as the blind spot region C1. Then, the process proceeds to step S103.
  • In step S103, the safe visual field region extraction unit 14 extracts the first safe visual field area E1 in the captured image. Specifically, the safe visual field region extraction unit 14 obtains the first safe visual field area E1 in the captured image by performing a predetermined calculation based on the Road Traffic Law standard (see FIG. 4A), the information indicating the characteristics of the vehicle, and the information indicating the installed position of the camera 11. Then, the process proceeds to step S104.
  • In step S104, the safe visual field region extraction unit 14 extracts the second safe visual field area E2 in the captured image. Specifically, the safe visual field region extraction unit 14 obtains the second safe visual field area E2 in the captured image by performing a predetermined calculation based on the Road Traffic Law standard (see FIG. 4B), the information indicating the characteristics of the vehicle, and the information indicating the installed position of the camera 11. Then, the process proceeds to step S105.
  • In step S105, the image control unit 17 and the light source 19 display the adjustment image based on an instruction from the light source control unit 16. Specifically, the image control unit 17 and the light source 19 display the adjustment image in a predetermined color (for example, a bonnet color or a complementary color of the dashboard color). Then, the process proceeds to step S106.
  • In step S106, the image region extraction unit 12 extracts the image region G1 in the captured image. Specifically, the image region extraction unit 12 extracts the portion having the predetermined color in the captured image as the image region G1. Then, the process proceeds to step S107.
  • In step S107, the image display region determination unit 15 determines, based on the image region G1, the blind spot region C1, and the first and second safe visual field areas E1 and E2 extracted as described above, whether or not the image region G1 straddles both the upper end and the lower end of the first safe visual field area E1 or the second safe visual field area E2.
  • When it is determined that the image region G1 does not straddle both the upper end and the lower end (step S107: No), the process proceeds to step S108.
  • In step S108, the light source control unit 16 performs control so that the size of the image is enlarged. Specifically, the light source control unit 16 supplies the control signal S16b to the light source drive driver 18 in order to move the light source 19 by the drive motor 20 so that the size of the image is enlarged. Then, the process returns to step S106.
  • In this case, the image region extraction unit 12 extracts the image region G1 again (step S106), and the image display region determination unit 15 again determines whether the image region G1 straddles both the upper end and the lower end of the first safe visual field area E1 or the second safe visual field area E2 (step S107).
  • In this way, the size of the image is enlarged until the image region G1 straddles both the upper end and the lower end of the first safe visual field area E1 or the second safe visual field area E2. When it is determined that the image region G1 straddles both ends (step S107: Yes), the process proceeds to step S109.
  • In step S109, the light source control unit 16 performs control so that the size of the image is reduced. Specifically, the light source control unit 16 performs control to return the image to the size that was set before the image region G1 straddled both the upper end and the lower end of the first safe visual field area E1 or the second safe visual field area E2. In this case, the light source control unit 16 supplies the control signal S16b to the light source drive driver 18 in order to move the light source 19 by the drive motor 20 so that the size of the image is reduced. Then, the process ends.
  • By the processing flow described above, the optimal display position and size of the image can be determined.
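  • As a compact, purely illustrative restatement of steps S106 to S109 (the callables stand in for the camera 11, the image region extraction unit 12, and the motor control of the light source 19, and are not actual components of the embodiment):

```python
def adjust_image_size(capture, extract_g1_extent, safe_areas, enlarge, shrink,
                      max_steps=50):
    """Enlarge the adjustment image step by step (S108) and re-check (S106/S107)
    until its extracted region G1 straddles both ends of a safe visual field
    area, then step back once (S109) to the last acceptable size."""
    def straddles(extent, area):
        top, bottom = extent
        a_top, a_bottom = area
        return top <= a_top and bottom >= a_bottom

    for _ in range(max_steps):
        frame = capture()                    # S106: capture and ...
        extent = extract_g1_extent(frame)    # ... extract the vertical extent of G1
        if any(straddles(extent, a) for a in safe_areas):
            shrink()                         # S109: reduce to the previous size
            return
        enlarge()                            # S108: enlarge and repeat
```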
  • Modification 1: In the first modification, when it is difficult to extract the image region corresponding to the adjustment image from the captured image, the color and luminance of the adjustment image are changed. Specifically, in Modification 1, when the image region extraction unit 12 cannot accurately extract the image region, the light source control unit 16 controls the image control unit 17 so that the color and luminance of the adjustment image are changed. Thereby, the accuracy of extracting the image region can be improved.
  • Modification 2: In the above example, the display position and size of the image are changed by moving the position of the light source 19. In the second modification, however, the position of the light source 19 is fixed, and the display position and size of the image are changed by performing predetermined image processing (a process for adjusting the elevation angle of the image). According to the second modification, it is not necessary to use the light source drive driver 18 and the drive motors 20 and 21 that move the light source 19.
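  • A minimal sketch of this second modification, assuming the projected content is a simple frame buffer and using nearest-neighbour scaling for brevity (the function, its parameters, and the buffer layout are hypothetical, and the placement is assumed to stay inside the frame):

```python
import numpy as np

def place_image(frame_h, frame_w, image, top_left, scale):
    """With the light source fixed, change the displayed position and size purely
    by rendering the source image scaled and offset inside the frame buffer."""
    out = np.zeros((frame_h, frame_w, 3), dtype=image.dtype)
    h, w = int(image.shape[0] * scale), int(image.shape[1] * scale)
    ys = (np.arange(h) / scale).astype(int)   # nearest-neighbour row indices
    xs = (np.arange(w) / scale).astype(int)   # nearest-neighbour column indices
    scaled = image[ys][:, xs]
    y0, x0 = top_left
    y1, x1 = min(frame_h, y0 + h), min(frame_w, x0 + w)
    out[y0:y1, x0:x1] = scaled[: y1 - y0, : x1 - x0]
    return out
```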
  • Modification 3: In the above, the embodiment in which both the display position and the size of the image are changed has been shown. In the third modification, only one of the display position and the size of the image is changed. Specifically, in the third modification, as in the above-described embodiment, either the display position or the size of the image is changed based on the extracted image region, blind spot region, and safe visual field region.
  • Modification 4: In the above, the embodiment in which the display position and size of the image are changed based on both the blind spot region and the safe visual field region has been shown. In the fourth modification, however, the display position and size of the image are changed based on only one of the blind spot region and the safe visual field region. For example, when only the blind spot region is used, the size of the image can be set to match the vertical length of the blind spot region; that is, an image can be displayed over the entire blind spot region.
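  • A minimal sketch of this fourth modification, assuming the blind spot area is available as the boolean mask from the earlier sketch (the row-extent rule is an illustrative interpretation, not the claimed method):

```python
import numpy as np

def image_height_for_blind_spot(blind_spot_mask):
    """Return a display height (in pixels) matching the vertical extent of the
    blind spot area, so the image can cover the whole area."""
    rows = np.where(blind_spot_mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)
```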
  • Modification 5: In the fifth modification, the present invention is applied to a configuration in which the windshield itself functions as a display. When the windshield itself functions as a display screen in this way, the positions of the blind spot area and the safe visual field area on the windshield can be easily specified from the position information of the camera 11 alone. Further, according to the fifth modification, it is not necessary to use the light source drive driver 18 and the drive motors 20 and 21 that move the light source 19.
  • Modification 6: In the sixth modification, the present invention is applied to a portable terminal device (such as a smartphone) having a call function. By providing the portable terminal device with functions equivalent to those of the camera 11, the image region extraction unit 12, the blind spot region extraction unit 13, the safe visual field region extraction unit 14, the image display region determination unit 15, and the light source control unit 16, the same control as in the above-described embodiment can be realized.
  • In the above, the HUD 1 is installed on the dashboard of the vehicle, but the HUD 1 can also be installed at various positions in the vehicle other than the dashboard.
  • the present invention can be used for an image display device such as a head-up display.
  • Explanation of symbols
  • 1 HUD (head-up display)
  • 11 Camera
  • 12 Image region extraction unit
  • 13 Blind spot region extraction unit
  • 14 Safe visual field region extraction unit
  • 15 Image display region determination unit
  • 16 Light source control unit
  • 17 Image control unit
  • 18 Light source drive driver
  • 19 Light source
  • 20, 21 Drive motors

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Instrument Panels (AREA)

Abstract

A head-up display comprises: a light source that projects light onto a display means in order to display a virtual image; a recognition means for recognizing a predetermined area on the display means in order to determine the display position and/or size of a virtual image so that a field of view is ensured for safety purposes while the visibility of the virtual image is guaranteed when observing through the display means from a predetermined position; and a control means for performing, on the basis of the predetermined area recognized by the recognition means, control that changes the display position and/or size of said virtual image. It is therefore possible to improve the visibility of the image while ensuring the user an adequate field of view for safety purposes.
PCT/JP2011/062612 2011-06-01 2011-06-01 Affichage tête haute, procédé d'affichage d'image par affichage tête haute et programme d'affichage d'image WO2012164704A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/062612 WO2012164704A1 (fr) 2011-06-01 2011-06-01 Affichage tête haute, procédé d'affichage d'image par affichage tête haute et programme d'affichage d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/062612 WO2012164704A1 (fr) 2011-06-01 2011-06-01 Affichage tête haute, procédé d'affichage d'image par affichage tête haute et programme d'affichage d'image

Publications (1)

Publication Number Publication Date
WO2012164704A1 true WO2012164704A1 (fr) 2012-12-06

Family

ID=47258589

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/062612 WO2012164704A1 (fr) 2011-06-01 2011-06-01 Affichage tête haute, procédé d'affichage d'image par affichage tête haute et programme d'affichage d'image

Country Status (1)

Country Link
WO (1) WO2012164704A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015027852A (ja) * 2013-07-30 2015-02-12 トヨタ自動車株式会社 運転支援装置
WO2015137361A1 (fr) * 2014-03-10 2015-09-17 矢崎総業株式会社 Dispositif d'affichage électroluminescent de véhicule et système d'affichage de véhicule
JPWO2017043107A1 (ja) * 2015-09-10 2017-12-14 富士フイルム株式会社 投写型表示装置及び投写制御方法
JPWO2017043108A1 (ja) * 2015-09-10 2018-02-08 富士フイルム株式会社 投写型表示装置及び投写制御方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003291688A (ja) * 2002-04-03 2003-10-15 Denso Corp 表示方法、運転支援装置、プログラム
JP2005059660A (ja) * 2003-08-08 2005-03-10 Nissan Motor Co Ltd 車両用表示装置
JP2008280026A (ja) * 2007-04-11 2008-11-20 Denso Corp 運転支援装置
JP2009184554A (ja) * 2008-02-07 2009-08-20 Denso Corp 安全走行支援システム
JP2010188811A (ja) * 2009-02-17 2010-09-02 Honda Motor Co Ltd 車両用情報提示装置
JP2011002660A (ja) * 2009-06-18 2011-01-06 Honda Motor Co Ltd 車両用画像表示装置
JP2011081547A (ja) * 2009-10-06 2011-04-21 Toyota Motor Corp 車両用注意喚起装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003291688A (ja) * 2002-04-03 2003-10-15 Denso Corp 表示方法、運転支援装置、プログラム
JP2005059660A (ja) * 2003-08-08 2005-03-10 Nissan Motor Co Ltd 車両用表示装置
JP2008280026A (ja) * 2007-04-11 2008-11-20 Denso Corp 運転支援装置
JP2009184554A (ja) * 2008-02-07 2009-08-20 Denso Corp 安全走行支援システム
JP2010188811A (ja) * 2009-02-17 2010-09-02 Honda Motor Co Ltd 車両用情報提示装置
JP2011002660A (ja) * 2009-06-18 2011-01-06 Honda Motor Co Ltd 車両用画像表示装置
JP2011081547A (ja) * 2009-10-06 2011-04-21 Toyota Motor Corp 車両用注意喚起装置

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015027852A (ja) * 2013-07-30 2015-02-12 トヨタ自動車株式会社 運転支援装置
CN105209299A (zh) * 2013-07-30 2015-12-30 丰田自动车株式会社 驾驶辅助装置
WO2015137361A1 (fr) * 2014-03-10 2015-09-17 矢崎総業株式会社 Dispositif d'affichage électroluminescent de véhicule et système d'affichage de véhicule
US9892643B2 (en) 2014-03-10 2018-02-13 Yazaki Corporation Vehicle light emitting display device and vehicle display system
JPWO2017043107A1 (ja) * 2015-09-10 2017-12-14 富士フイルム株式会社 投写型表示装置及び投写制御方法
JPWO2017043108A1 (ja) * 2015-09-10 2018-02-08 富士フイルム株式会社 投写型表示装置及び投写制御方法
US10450728B2 (en) 2015-09-10 2019-10-22 Fujifilm Corporation Projection type display device and projection control method

Similar Documents

Publication Publication Date Title
US10754154B2 (en) Display device and moving body having display device
KR101544524B1 (ko) 차량용 증강현실 디스플레이 시스템 및 차량용 증강현실 디스플레이 방법
CN111169382B (zh) 驾驶辅助装置、驾驶辅助***、驾驶辅助方法以及程序
JP6413207B2 (ja) 車両用表示装置
JP6877842B2 (ja) 車載表示システム
JP2009269551A (ja) 車両用表示装置
US20160125631A1 (en) Apparatus for dynamically controlling hud (head-up display) information display position
JP2009150947A (ja) 車両用ヘッドアップディスプレイ装置
JP6804805B2 (ja) ヘッドアップディスプレイ装置
JP2011213186A (ja) 電子サイドミラー装置
JP2008044603A (ja) 車両用防眩装置
JP2006248374A (ja) 車両安全確認装置及びヘッドアップディスプレイ
JP2003104145A (ja) 運転支援表示装置
WO2012164704A1 (fr) Affichage tête haute, procédé d'affichage d'image par affichage tête haute et programme d'affichage d'image
JP2010143411A (ja) ヘッドアップディスプレイ装置
WO2019058492A1 (fr) Système et procédé d'affichage
JP2016101771A (ja) 車両用ヘッドアップディスプレイ装置
JPWO2018221039A1 (ja) ぶれ補正装置及び撮像装置
JP2018121287A (ja) 車両用表示制御装置、車両用表示システム、車両用表示制御方法およびプログラム
JP6213300B2 (ja) 車両用表示装置
JPWO2019026747A1 (ja) 車両用拡張現実画像表示装置
KR20140130802A (ko) 헤드 업 디스플레이 시스템
JP2022084266A (ja) 表示制御装置、表示装置、及び画像の表示制御方法
JP6653184B2 (ja) 車両用表示装置
JP3923747B2 (ja) 二輪車用ヘルメットマウントディスプレイシステム及びその表示制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11866527

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11866527

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP