WO2021241718A1 - Head-up display device - Google Patents

Head-up display device

Info

Publication number
WO2021241718A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
warping
display
viewpoint
virtual image
Prior art date
Application number
PCT/JP2021/020328
Other languages
French (fr)
Japanese (ja)
Inventor
Yuki Masuya (舛屋 勇希)
Original Assignee
Nippon Seiki Co., Ltd. (日本精機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Seiki Co., Ltd.
Priority to JP2022526655A (JPWO2021241718A1)
Publication of WO2021241718A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60K - ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 - Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00-G02B26/00 or G02B30/00
    • G02B27/01 - Head-up displays
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 - Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, with means for controlling the display position

Definitions

  • The present invention relates to a head-up display (HUD) device or the like that projects the display light of an image onto a projection member, such as a windshield or a combiner of a vehicle, and displays a virtual image in front of the driver or the like.
  • HUD: head-up display
  • An image correction process (hereinafter, warping process) is known in which the projected image is pre-distorted with characteristics opposite to the distortion of the virtual image caused by the curved shape of the optical system, the windshield, and the like.
  • Warping processing in a HUD apparatus is described, for example, in Patent Document 1.
  • Patent Document 2 describes performing the warping process based on the viewpoint position of the driver (viewpoint-following warping).
  • The present inventor considered implementing viewpoint-position-following warping control that updates the warping parameters according to the viewpoint position of the driver (which can be interpreted broadly to include the driver, crew, etc.), and recognized the problems described below.
  • The present inventor further improved the inclined-surface HUD so that one virtual image display surface includes both an inclined surface and an elevation surface (including a pseudo elevation surface): for example, a depth-image virtual image, such as a long arrow lying on the road surface, is displayed on the distant inclined surface, while non-superimposed content (for example, a virtual image composed of numbers and characters that are always displayed) is displayed on the nearby elevation surface.
  • The viewpoint position of the driver may move (shift) along the width direction (left-right direction) of the vehicle, and at that time the visibility changes due to motion parallax. If this causes a visual adverse effect, it is preferable to devise the warping process so as to take measures against that point as well.
  • "Motion parallax" is the parallax caused by movement of the observer's viewpoint (or of the observed object): even for the same change in line-of-sight direction, a nearer object shifts more in the visual field and a farther object shifts less, and this difference in the amount of positional change is said to make it easier to perceive depth.
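The distance dependence described above can be sketched numerically. The following toy model (illustrative only, not taken from the patent) approximates the apparent angular shift of a static point when the viewpoint moves laterally: nearer points sweep through a larger angle than farther ones.

```python
import math

def apparent_angular_shift(viewpoint_shift_m, distance_m):
    """Angle (radians) by which a static point appears to move in the
    visual field when the viewpoint shifts laterally by viewpoint_shift_m.
    Simple illustrative model: atan(shift / distance)."""
    return math.atan2(viewpoint_shift_m, distance_m)

# A nearby object (2 m) appears to move far more than a distant one (20 m)
near = apparent_angular_shift(0.1, 2.0)
far = apparent_angular_shift(0.1, 20.0)
assert near > far  # closer => larger motion parallax
```

This inverse relationship to distance is what makes the trapezoid of a depth image deform visibly while distant background features barely move.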
  • In Patent Documents 1 and 2, the relationship between parallax and warping processing is not examined at all, nor is any consideration given to what should be done when the viewpoint moves during the warping process.
  • One object of the present invention is to realize warping control that can secure more natural visibility for, for example, a depth image (inclined image) premised on three-dimensional vision and a facing image (standing image) not premised on three-dimensional vision.
  • The head-up display device is a head-up display (HUD) device mounted on a vehicle that projects an image onto a projection member provided on the vehicle so that the driver can visually recognize a virtual image of the image.
  • It comprises an image generation unit that generates the image, a display unit that displays the image, and an optical system including an optical member that reflects the display light of the image and projects it onto the projection member.
  • It further comprises a control unit that updates the warping parameters according to the viewpoint position of the driver in the eyebox and performs viewpoint-position-following warping control that corrects the image displayed on the display unit using the warping parameters.
  • When performing the warping process on a depth image, the control unit performs warping control such that the contour of the image area of the depth image, when correctly recognized as a virtual image, is a trapezoid in which at least one pair of opposite sides is parallel; when performing the warping process on a facing image, the control unit performs warping control such that the contour of the image area of the facing image, when correctly recognized as a virtual image, is a rectangle (including a square).
  • In other words, the contour (outer shape) of the image area to be warped is trapezoidal for the depth image (for example, an inclined image) and rectangular (rectangular or square) for the facing image (for example, a standing image).
  • The depth image is, for example, a deep image displayed on a virtual image display surface of an inclined surface. It may be referred to simply as a depth image or an inclined image.
  • The facing image is preferably an image displayed so as to face the driver, for example, an image displayed on an elevation surface standing against the road surface (not only a surface standing orthogonal to the road surface, but also a "pseudo elevation surface" that is partly inclined yet can be treated as an upright surface as a whole). In the following description it may be referred to simply as a facing image or a standing image.
  • The viewpoint-position-following warping process can be realized, for example, by determining a warping parameter (for example, polynomial coefficients, multipliers, constants, etc. for warping image correction using a digital filter) as a function of the viewpoint position in the eyebox, and using the warping parameter to perform coordinate conversion on a plurality of coordinate points set in the image area of the image to be warped, thereby pre-distorting the image with characteristics opposite to the distortion caused by the optical member.
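As a concrete illustration of what such a parameter set might look like, the sketch below applies a second-order polynomial coordinate correction, a common stand-in for the "polynomial, multiplier, constant" parameters mentioned above; the coefficient packaging and values are hypothetical, not the patent's actual filter.

```python
def warp_point(x, y, coeffs):
    """Map a source coordinate (x, y) to a pre-distorted coordinate using
    a second-order polynomial model (illustrative stand-in for the
    'polynomial, multiplier, constant' warping parameters)."""
    c = coeffs  # dict of polynomial coefficients for x' and y'
    xp = c["x0"] + c["x1"]*x + c["x2"]*y + c["x3"]*x*x + c["x4"]*x*y + c["x5"]*y*y
    yp = c["y0"] + c["y1"]*x + c["y2"]*y + c["y3"]*x*x + c["y4"]*x*y + c["y5"]*y*y
    return xp, yp

# Identity parameters leave a control point unchanged; a real HUD would
# select non-identity coefficients per viewpoint position in the eyebox.
identity = {"x0": 0, "x1": 1, "x2": 0, "x3": 0, "x4": 0, "x5": 0,
            "y0": 0, "y1": 0, "y2": 1, "y3": 0, "y4": 0, "y5": 0}
assert warp_point(0.25, 0.5, identity) == (0.25, 0.5)
```

Applying `warp_point` to every control point of the image grid and resampling between them is one simple way to realize the coordinate conversion described above.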
  • Conventionally, an image (virtual image) is displayed on an elevation surface perpendicular to the road surface.
  • The image area to be corrected may be the entire area of the display or a partial area.
  • Considered here is the contour corresponding to the contour of the image area at the time of virtual image display (in other words, the contour as the viewer sees it when correctly recognizing it as a virtual image; made visible here for convenience of explanation, its shape being, for example, a rectangle).
  • Conventionally, instead of a contour (outer shape) suited to each of the depth image (for example, an inclined image) and the facing image (for example, a standing image), a rectangle (rectangular or square) is uniformly adopted.
  • The above-mentioned "contour when correctly viewed by the viewer as a virtual image" can also be described as the contour of the "normal image (virtual image)" or "correct image (virtual image)" that the HUD device intends to display. If the contour of the image (virtual image) is a trapezoid, the HUD device takes that trapezoid as the correct answer and warps the (rectangular) contour of the image area so that it is displayed accordingly.
  • Suppose that the contour of the virtual image display surface of the inclined surface arranged in front of the vehicle is seen from a viewpoint located at the center of the eyebox (in other words, that the driver sees it from the front).
  • The contour (assumed here to be the contour of the entire virtual image display surface, though it may be the contour of the image display area for a certain image) looks like a trapezoid (including a parallelogram).
  • When the viewpoint moves (shifts) in the width direction (left-right direction) of the vehicle, the relative position between the viewpoint and the virtual image changes, and under the effect of motion parallax the trapezoidal shape is distorted and deformed to the left or right.
  • The contour of the image area of an image displayed on the virtual image display surface of the inclined surface (an image of a long arrow, an image of a center line, etc.) is therefore also based on the trapezoid.
  • The contour that corresponds to the contour of the image area and is visually recognized at the time of virtual image display is a rectangle (rectangular or square).
  • With the trapezoidal contour, the virtual image after the warping process (after the distortion caused by the windshield or the like has been removed) has a more natural perspective, and visibility is improved by the more natural three-dimensional effect.
  • In addition, the change in the appearance of the image (virtual image) due to viewpoint movement is felt to be the same as that of the road surface in the background. This helps the driver feel that the image (virtual image) is superimposed on (matches) the road surface, and in this respect as well leads to improved visibility of the road-surface-superimposed HUD.
  • For the facing image, the contour (outer shape) of the image area is a rectangle (rectangular or square), as in the conventional case.
  • A vehicle speed display shown on the elevation surface is usually displayed at a fixed position at all times, and it is important that numbers, symbols, characters, etc. can be read accurately. Since perspective is not particularly necessary and there is no advantage in making the contour trapezoidal as with an inclined image, the conventional rectangular outer shape is adopted here.
  • In other words, the contour of the image area of the depth image (inclined image) and the contour of the image area of the facing image (standing image) are set individually and warped; different methods of warping are thus implemented for the two.
  • As a result, for the depth image (inclined image), a display with improved visibility that gives a natural perspective is realized, and for the facing image (standing image), a display that is easier to see and has improved legibility (suitable for grasping accurate information, etc.) is realized.
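The two contour choices can be made concrete with a small sketch. Here `target_contour` (a hypothetical helper, not from the patent) returns trapezoid corners for depth images and rectangle corners for facing images, and a bilinear map places image-area coordinates inside the chosen contour.

```python
def target_contour(kind, width, height, taper=0.3):
    """Corner points (clockwise from top-left) of the contour the virtual
    image should have when correctly seen. Depth images get a trapezoid
    (narrower top suggests distance); facing images keep a rectangle.
    'taper' is an illustrative parameter, not from the patent."""
    if kind == "depth":
        inset = taper * width / 2.0
        return [(inset, 0.0), (width - inset, 0.0), (width, height), (0.0, height)]
    if kind == "facing":
        return [(0.0, 0.0), (width, 0.0), (width, height), (0.0, height)]
    raise ValueError(kind)

def bilinear_map(corners, u, v):
    """Map unit-square coordinates (u, v) into the quad given by corners."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    top = (x0 + (x1 - x0) * u, y0 + (y1 - y0) * u)
    bot = (x3 + (x2 - x3) * u, y3 + (y2 - y3) * u)
    return (top[0] + (bot[0] - top[0]) * v, top[1] + (bot[1] - top[1]) * v)

trap = target_contour("depth", 100.0, 50.0)
assert bilinear_map(trap, 0.0, 0.0) == trap[0]  # top-left corner
assert bilinear_map(trap, 1.0, 1.0) == trap[2]  # bottom-right corner
```

In a real pipeline the mapped points would then be fed through the optical distortion correction before display.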
  • The control unit may perform warping control as follows: when the viewpoint moves along the width direction of the vehicle while the warping process is performed on the depth image, the trapezoid is deformed so as to reflect the deformation of the trapezoid caused by the movement; when the viewpoint moves along the width direction of the vehicle while the warping process is performed on the facing image, the rectangle is deformed so as to cancel out the deformation of the rectangle caused by the movement.
  • Here, the viewpoint moves (shifts) to the left or right, and the influence of parallax is taken into consideration for each of the depth image (inclined image) and the facing image (standing image).
  • For the depth image, warping is performed with the image area as a trapezoid.
  • The trapezoid is, for example, an isosceles trapezoid (the upper base and the lower base are parallel, the internal angles at both ends of the upper base are equal, and the internal angles at both ends of the lower base are also equal).
  • Suppose the viewpoint shifts in the width direction (left-right direction) of the vehicle.
  • The position of the displayed virtual image is fixed (to be exact, its position in the coordinate system set in real space is fixed; for example, a virtual image of a center line is meaningless unless it is superimposed on the center of the road surface, so its position cannot be moved), but when the viewpoint moves, the relative positional relationship between the virtual image and the viewpoint changes.
  • The resulting image distortion (the image looks distorted under the influence of motion parallax) can, of course, be expected from the beginning.
  • The motion parallax creates a natural stereoscopic effect (perspective), and the above distortion therefore assists natural vision of the depth image.
  • For the facing image, in contrast, the motion parallax associated with the movement of the viewpoint adversely affects vision. A vehicle speed display or the like must always be seen as an undeformed standing image so as not to reduce the legibility of the display.
  • For example, assume that a rectangular image (virtual image) is visible from a viewpoint located at the center of the eyebox and the viewpoint then moves in the left-right direction.
  • The position of the virtual image in the coordinate system set in real space is fixed (for example, the vehicle speed display can be assumed to be displayed near the lower right end of the windshield at all times).
  • Therefore, the deformation based on such motion parallax (deformation of the rectangle) is suppressed (preferably cancelled out).
  • Motion parallax correction deforms the image (the rectangular contour of the image area) in the direction opposite to the actual deformation caused by motion parallax, and warping is then performed on the deformed image area, including distortion correction that applies distortion with characteristics opposite to the distortion arising in the optical system.
  • "Warping" in the present invention therefore includes correction processing for optimizing the appearance of the virtual image in consideration of parallax and motion parallax, rather than simple distortion correction.
  • The motion parallax correction and the distortion correction are performed as a set.
  • When the viewpoint shifts, light from an image deformed oppositely to the deformation due to motion parallax is incident on the eye.
  • The deformation due to motion parallax is cancelled out by the inverse deformation applied in advance, and when the brain judges the image, it appears as a rectangular (rectangular or square) image.
  • Since the image keeps its rectangular shape and does not change, legibility does not deteriorate, and visibility is improved because an accurate facing image is always obtained.
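The cancellation idea can be sketched with a toy parallax model (the 1/distance law and all numbers are illustrative assumptions, not the patent's actual correction): the nearer lower edge of the rectangle shifts more than the farther upper edge, so the rectangle appears sheared, and applying the opposite offsets in advance makes the perceived net deformation zero.

```python
def perceived_offsets(dx, z_top, z_bottom):
    """Toy model: horizontal offsets of the rectangle's top and bottom
    edges as perceived after a lateral viewpoint shift dx. The nearer
    bottom edge shifts more, so the rectangle appears sheared."""
    return dx / z_top, dx / z_bottom

def precorrected_offsets(dx, z_top, z_bottom):
    """Apply the opposite offsets in advance so the perceived result,
    pre-correction plus parallax, is zero for both edges."""
    top, bottom = perceived_offsets(dx, z_top, z_bottom)
    return -top, -bottom

dx, zt, zb = 0.08, 4.0, 2.5       # hypothetical shift and edge distances (m)
pt, pb = perceived_offsets(dx, zt, zb)
ct, cb = precorrected_offsets(dx, zt, zb)
assert abs(pt + ct) < 1e-12 and abs(pb + cb) < 1e-12
assert pb > pt  # without correction, the nearer (lower) edge shifts more
```

The asymmetry between `pt` and `pb` is also why fixing one side and moving the other (discussed below for the upper versus lower side) changes how much movement the correction needs.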
  • In the warping control for the facing image, the control unit deforms the rectangle so as to cancel the deformation of the rectangle.
  • To do so, the lower side of the rectangle may be fixed and the upper side moved, or the upper side may be fixed and the lower side moved.
  • Since the upper side of a rectangle displayed in front of the vehicle is farther away and the lower side is nearer, and motion parallax makes positional change less perceptible with distance, moving the upper side requires a smaller amount of movement (a smaller substantial deformation) than moving the lower side; the two approaches differ in this respect.
  • Alternatively, when the viewpoint moves along the width direction of the vehicle while the warping process is being performed on the facing image, the control unit may change the position of the facing image so that the virtual image of the facing image is fixed at a predetermined position in a viewpoint coordinate system referenced to the viewpoint.
  • In this case, instead of applying distortion opposite to the distortion due to motion parallax (inverse deformation of the rectangle), the display position of the facing image (virtual image) is shifted according to the movement of the viewpoint.
  • The problem of the facing image being distorted by the viewpoint shift arises because the display position of the facing image (virtual image) is fixed in the coordinate system set in real space; the problem is therefore dealt with by abandoning the premise that the display position in the real-space coordinate system is fixed.
  • Using a viewpoint coordinate system (a coordinate system centered on the person's viewpoint with axes along the front-rear, left-right, and vertical directions of the vehicle; if the viewpoint (including the face, etc.) moves, the coordinate system moves accordingly), control is performed to move the position of the image (the rectangular image area) on the display surface of the display unit (or its position on the image generation surface or in the image generation space of the image generation unit) appropriately in correspondence with the movement of the viewpoint, so that the virtual image of the facing image is always at the same position as seen from the viewpoint.
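A minimal sketch of this position-following behaviour, under a simple pinhole-geometry assumption (the function names, the fixed image distance, and the target angle are all hypothetical):

```python
import math

def display_position(eye_x, image_distance, target_angle_rad):
    """Horizontal position at which to place the virtual image so that it
    stays at a fixed angle in the driver's viewpoint coordinate system
    (illustrative pinhole model, not the patent's actual control law)."""
    return eye_x + image_distance * math.tan(target_angle_rad)

def angle_seen(eye_x, image_x, image_distance):
    """Angle at which the image appears from the current viewpoint."""
    return math.atan2(image_x - eye_x, image_distance)

D, theta = 2.5, math.radians(-10.0)  # hypothetical geometry
for eye_x in (-0.05, 0.0, 0.06):     # viewpoint moves inside the eyebox
    x = display_position(eye_x, D, theta)
    assert abs(angle_seen(eye_x, x, D) - theta) < 1e-12
```

Because the image is re-positioned rather than counter-deformed, the rectangle never needs shearing in this scheme; the virtual image simply rides along with the viewpoint.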
  • When displaying a mixture of the depth image and the facing image, the control unit may subject the depth image and the facing image to image processing individually and then combine the processed images into one image.
  • In other words, each image (or each image area) is corrected individually, for example with an image correction unique to it, and the images are then synthesized to generate one image.
  • This method can be realized, for example, by image rendering.
  • By treating the depth image (or its image area) and the facing image (or its image area) separately, no particular problem arises even when different image corrections must be performed for each image, and the processing can be simplified.
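The separate-then-composite flow can be sketched with trivially small "layers" (plain nested lists standing in for rendered images; the facing-over-depth priority rule chosen here is an assumption):

```python
def composite(depth_layer, facing_layer):
    """Combine two separately corrected layers into one frame: non-zero
    facing-image pixels take priority over the depth-image layer beneath."""
    return [
        [f if f != 0 else d for d, f in zip(drow, frow)]
        for drow, frow in zip(depth_layer, facing_layer)
    ]

# Each layer would have been warped with its own contour/parameters first
depth = [[1, 1, 1],
         [1, 1, 1]]
facing = [[0, 9, 0],
          [0, 9, 0]]
frame = composite(depth, facing)
assert frame == [[1, 9, 1], [1, 9, 1]]
```

Keeping the layers independent until this final step is what lets the trapezoid-based and rectangle-based corrections coexist without interfering.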
  • The control unit may also operate as follows. Suppose that, in the image region of the depth image viewed from a predetermined first position in the left-right direction of the eyebox, the region surrounded by a trapezoid whose opposite sides are parallel to each other is defined as the first contour.
  • When viewing from the left of the first position, warping control is performed so that, with respect to the deformation of the trapezoid caused by the movement, the upper side of the first contour lies relatively to the right of the lower side.
  • When viewing from the right of the first position, warping control is performed so that the upper side of the first contour lies relatively to the left of the lower side.
  • Otherwise, the motion parallax of the depth image (inclined image) differs from the motion parallax of the background, such as the road surface, on which it is superimposed.
  • Specifically, the motion parallax of the depth image (inclined image) becomes larger than that of the background such as the road surface, and the superimposition (coincidence) between the depth image and the background decreases.
  • Warping is therefore performed so that the distortion of the first contour caused by motion parallax is reduced as visually recognized by the driver.
  • For example, a correction is made to reduce the leftward distortion of the position of the upper side of the first contour relative to the position of the lower side caused by motion parallax.
  • The correction for the first contour and the correction for the second contour are both corrections in the direction of weakening the distortion caused by motion parallax, but the correction for the first contour is smaller than the correction for the second contour.
  • As a result, the motion parallax of the depth image approaches that of the background such as the road surface, and a display with improved visibility, in which a natural perspective can be felt, is realized.
  • When the control unit performs the warping process on the depth image and the viewpoint moves along the width direction of the vehicle, the deformation of the trapezoid caused by the movement is reflected.
  • However, when the displayed image is a road-surface superimposed image displayed so as to be superimposed on the road surface, and the virtual image of the road-surface superimposed image is displayed on an inclined surface inclined with respect to the road surface so that the virtual image appears to float above the road surface, warping control with motion parallax correction may be performed to reduce the degree of deformation of the trapezoid as compared with the case where the virtual image does not appear to float above the road surface.
  • Suppose the displayed image is a road-surface superimposed image (for example, an image of an arrow figure) and the virtual image of the road-surface superimposed image is displayed on a surface inclined with respect to the road surface. In this case, motion parallax correction is performed so as to reduce the degree of deformation of the trapezoid that is the contour of the display area of the arrow image.
  • This is because the position of the virtual image of the actual arrow figure is nearer to the driver (viewer) than the position on the road surface on which the arrow is superimposed (the visual landing position of the arrow).
  • Motion parallax occurs both in the virtual image of the arrow seen in the foreground and in the road surface seen behind it; since the road surface behind is farther away, its motion parallax is smaller, while the virtual image of the arrow, being nearer, exhibits larger, more perceptible motion parallax.
  • The motion parallax of the virtual image of the arrow therefore feels larger than it would if the virtual image were in close contact with the road surface, and the deformation (tilt) of the virtual image of the arrow is emphasized. In this case, motion parallax correction is performed to reduce the deformation (tilt), in other words, to suppress to some extent (without fully cancelling) the deformation of the trapezoid that is the contour of the display area of the arrow image. As a result, a display with appropriate motion parallax is obtained.
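A minimal sketch of such partial correction, where a suppression gain between 0 and 1 reduces, but deliberately does not cancel, the parallax-induced deformation (the gain and deformation values are illustrative assumptions):

```python
def corrected_deformation(parallax_deformation, suppression_gain):
    """Reduce (but do not fully cancel) the trapezoid deformation caused
    by motion parallax. suppression_gain = 0 leaves the deformation as-is;
    suppression_gain = 1 would cancel it completely (not desired here)."""
    if not 0.0 <= suppression_gain <= 1.0:
        raise ValueError("gain must be within [0, 1]")
    return parallax_deformation * (1.0 - suppression_gain)

raw = 12.0                      # hypothetical deformation, in pixels
partial = corrected_deformation(raw, 0.6)
assert 0.0 < partial < raw      # deformation reduced, not eliminated
```

Choosing an intermediate gain keeps some motion parallax so the arrow still reads as lying toward the road surface, while avoiding the exaggerated tilt caused by the virtual image floating in front of it.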
  • FIG. 1(A) is a diagram for explaining an outline of warping control (conventional warping control) in a conventional HUD device that displays a virtual image on a virtual image display surface erected with respect to the road surface, and the mode of distortion of the virtual image (and of the virtual image display surface) displayed through that warping control; FIG. 1(B) is a diagram showing an example of a virtual image visually recognized by the driver through the windshield.
  • FIG. 2A is a diagram for explaining an outline of viewpoint position tracking warping control
  • FIG. 2B is a diagram showing a configuration example of an eyebox whose inside is divided into a plurality of partial regions.
  • FIG. 3(A) is a diagram showing an example of a virtual image visually recognized by a user through the windshield, FIG. 3(B) is a diagram showing the state of virtual image display on the virtual image display surface, and FIG. 3(C) is a diagram showing an example of the image display on the display surface of the display unit.
  • FIG. 4(A) is a diagram showing the configuration of a HUD device mounted on a vehicle and an example of a virtual image display surface; FIGS. 4(B) and 4(C) are diagrams showing examples of methods for realizing the virtual image display surface shown in FIG. 4(A).
  • FIG. 5 is a diagram showing an example of a display method of a HUD device (parallax type HUD device or 3D HUD device) that displays a three-dimensional virtual image.
  • FIG. 6 is a diagram showing an example of viewpoint tracking warping control in a HUD device capable of displaying at least one of a virtual image of a standing image (including a pseudo standing image) and a virtual image of an inclined image (depth image).
  • FIG. 7 relates to an inclined surface (for example, the inclined-image HUD region of the virtual image display surface) viewed with the driver's viewpoint (monocular) located in the central divided region of the eyebox. FIGS. 7(F), 7(G), and 7(H) are diagrams showing the appearance of the virtual image of an arrow (the shape of the virtual image of the arrow) when the viewpoint is located in each of the left divided area, the central divided area, and the right divided area of the eyebox, and FIGS. 7(I) and 7(J) are diagrams showing the appearance of the virtual image of the arrow with and without motion parallax correction.
  • FIG. 8 relates to an elevation surface (including a pseudo elevation surface, for example, the standing-image HUD region of the virtual image display surface). FIGS. 8(B), 8(C), and 8(D) are diagrams showing the case where the viewpoint (monocular) of the driver looking at it moves from the central divided area of the eyebox along the width direction (left-right direction) of the vehicle; FIGS. 8(E) to 8(G) are diagrams showing the contents of the image correction applied to the display image in order to suppress deformation of the rectangle that is the contour of the elevation surface; and FIGS. 8(H), 8(I), and 8(J) are diagrams showing the appearance of the virtual image of an arrow (the shape of the virtual image of the arrow) when the viewpoint is located in each of the left divided area, the central divided area, and the right divided area of the eyebox.
  • FIGS. 9(A) to 9(D) are diagrams for explaining two examples of image correction (deforming the lower side or the upper side) applied to the display image while suppressing deformation of the rectangle that is the contour of the elevation surface.
  • FIG. 10(A) is a diagram showing the contents of warping control for an inclined image (depth image) displayed on an inclined surface; FIG. 10(B) is a diagram showing the contents of warping control for a standing image (facing image) displayed on an elevation surface; FIGS. 10(C) to 10(E) are diagrams showing an example of generating the image to be displayed on the display surface by image rendering; and FIGS. 10(F) to 10(H) are diagrams showing examples of the image (visually recognized image, visually recognized virtual image) seen by the driver according to the viewpoint position.
  • FIG. 12(A) is a diagram showing an example of the configuration of the main part of the HUD device in the case where a standing image (pseudo standing image) is fixedly displayed at a predetermined position in the viewpoint coordinate system regardless of the viewpoint position; FIG. 12(B) is a diagram showing that the display (virtual image) appears to move following the movement of the viewpoint. FIG. 13 is a diagram showing an example of the procedure in viewpoint-following warping control.
  • FIGS. 14(A) and 14(B) are diagrams showing modified examples of the virtual image display surface.
  • As noted above, the depth image is, for example, a deep image displayed on a virtual image display surface of an inclined surface.
  • This includes the case where the inclination angle with respect to the road surface is zero, that is, the case where the image is superimposed on the road surface.
  • The facing image is preferably an image displayed so as to face the driver, for example, an image displayed on an elevation surface standing against the road surface (including a "pseudo elevation surface" that is partly inclined but can be treated as an upright surface as a whole). It may be referred to simply as a facing image or a standing image.
  • FIG. 1(A) is a diagram for explaining an outline of warping control (conventional warping control) in a conventional HUD device that displays a virtual image on a virtual image display surface erected with respect to the road surface, and the mode of distortion of the virtual image (and of the virtual image display surface) displayed through that warping control; FIG. 1(B) is a diagram showing an example of a virtual image visually recognized by the driver through the windshield.
  • The HUD device 100 is mounted on the vehicle 1 (which can be interpreted in a broad sense).
  • The HUD device 100 includes a display unit (for example, a light-transmissive screen) 101, a reflecting mirror 103, and, as an optical member for projecting the display light, a curved mirror (for example, a concave mirror) 105, whose reflecting surface may be a free-form surface.
  • the image displayed on the display unit 101 is projected onto the virtual image display area 5 of the windshield 2 as a projected member via the reflecting mirror 103 and the curved mirror 105.
  • Reference numeral 4 indicates a projection area.
  • the HUD device 100 may be provided with a plurality of curved mirrors.
  • a refractive optical element such as a lens, a diffractive optical element, or the like can also be used, and a configuration including such a functional optical element may be adopted.
  • a part of the display light of the image is reflected by the windshield 2 and is incident on the viewpoint (eye) A of the driver or the like located inside (or on) the preset eye box EB (here, a rectangular region having a predetermined area), and forms an image in front of the vehicle 1, whereby the virtual image V is displayed on the virtual virtual-image display surface PS corresponding to the display surface 102 of the display unit 101.
  • the image of the display unit 101 is distorted due to the influence of the shape of the curved mirror 105, the shape of the windshield 2, and the like.
  • distortion occurs due to the optical system of the HUD device 100 and the optical member including the windshield 2.
  • warping processing (warping image correction processing), in which the image is given in advance a distortion opposite to that caused by the optical system, is performed.
  • the virtual image V displayed on the virtual image display surface PS by the warping process becomes a flat image without curvature.
  • when the display light is projected onto the wide projection area 4 on the windshield 2 and the virtual image display distance is set in a considerably wide range, it is undeniable that some distortion remains; this is unavoidable.
  • PS' shown by a broken line indicates a virtual image display surface from which the distortion has not been completely removed, and V' indicates a virtual image displayed on the virtual image display surface PS'.
  • the degree or mode of distortion of the virtual image V' in which distortion remains differs depending on the position of the viewpoint A in the eyebox EB. Since the optical system of the HUD device 100 is designed on the assumption that the viewpoint A is located near the central portion, the distortion of the virtual image is relatively small when the viewpoint A is near the central portion, and tends to increase in the peripheral portion.
  • FIG. 1B shows an example of a virtual image V visually recognized by the driver through the windshield 2.
  • the virtual image V having a rectangular outer shape has, for example, five reference points vertically and five horizontally, for a total of 25 reference points (reference pixel points or coordinate points) GD(i, j).
  • i and j are variables that can take values of 1 to 5.
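The 5×5 grid of reference points GD(i, j) can serve as a correction grid: a pre-distortion displacement is stored at each reference point, and positions between reference points are interpolated. The sketch below is a minimal illustration of that idea in Python; the 5×5 grid size comes from the text, while the bilinear interpolation scheme and all displacement values are assumptions, not the patent's actual correction.

```python
# Sketch: per-reference-point warping with bilinear interpolation.
# GD(i, j): 5x5 grid of reference points over a rectangular virtual image.
# Each grid point stores a displacement (dx, dy) that pre-distorts the
# image opposite to the optical distortion (values here are placeholders).

def bilinear_warp(u, v, disp):
    """u, v in [0, 1]: normalized position on the image.
    disp: 5x5 list of (dx, dy) displacements at the reference points.
    Returns the interpolated displacement at (u, v)."""
    # Map normalized coordinates to grid cell indices (the grid spans 4 cells).
    gu, gv = u * 4, v * 4
    i0, j0 = min(int(gu), 3), min(int(gv), 3)
    fu, fv = gu - i0, gv - j0

    def lerp(a, b, t):
        return a + (b - a) * t

    # Interpolate dx and dy independently across the enclosing cell.
    out = []
    for k in range(2):
        top = lerp(disp[j0][i0][k], disp[j0][i0 + 1][k], fu)
        bot = lerp(disp[j0 + 1][i0][k], disp[j0 + 1][i0 + 1][k], fu)
        out.append(lerp(top, bot, fv))
    return tuple(out)

# Identity grid: zero displacement everywhere -> no warping applied.
identity = [[(0.0, 0.0)] * 5 for _ in range(5)]
print(bilinear_warp(0.5, 0.5, identity))  # (0.0, 0.0)
```

In practice a real warping table would hold one such displacement grid per eyebox partial region; this sketch only shows the interpolation step.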
  • reference numeral 7 is a steering wheel.
  • FIG. 2A is a diagram for explaining an outline of the viewpoint position tracking warping process
  • FIG. 2B is a diagram showing a configuration example of an eyebox whose inside is divided into a plurality of partial regions.
  • in FIG. 2, the same reference numerals are given to the parts common to those in FIG. 1 (the same applies to the following figures).
  • the eyebox EB is divided into a plurality of (here, nine) partial regions J1 to J9, and the position of the driver's viewpoint A is detected in units of the respective partial regions J1 to J9.
  • the display light K of the image is emitted from the projection optical system 118 of the HUD device 100, and a part of the display light K is reflected by the windshield 2 and incident on the driver's viewpoint (eye) A.
  • when the viewpoint A is in the eyebox, the driver can visually recognize the virtual image of the image.
  • the HUD device 100 has a ROM 210, and the ROM 210 has a built-in image conversion table 212.
  • the image conversion table 212 stores, for example, a warping parameter WP that determines a polynomial, a multiplier, a constant, or the like for image correction (warping image correction) by a digital filter.
  • the warping parameter WP is provided corresponding to each of the partial regions J1 to J9 in the eyebox EB.
  • WP (J1) to WP (J9) are shown as warping parameters corresponding to each partial region.
  • only WP (J1), WP (J4), and WP (J7) are shown as reference numerals.
  • when the viewpoint A moves, which of the plurality of partial regions J1 to J9 contains the viewpoint A is detected. Then, the warping parameter WP(J1) to WP(J9) corresponding to the detected partial region is read from the ROM 210 (warping parameter update), and the warping process is performed using that warping parameter.
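The region-detection and parameter-update loop just described can be sketched as follows. The dictionary stands in for the ROM 210 / image conversion table 212, and the parameter contents are placeholders (the real WP(Jk) would be polynomial coefficients, multipliers, constants, etc., which the patent does not enumerate).

```python
# Sketch: viewpoint-tracking warping parameter update (regions J1..J9).
# WARP_TABLE stands in for ROM 210 / image conversion table 212; the
# stored values are placeholder strings, not real warping parameters.

WARP_TABLE = {f"J{n}": f"WP(J{n})" for n in range(1, 10)}

class WarpingController:
    def __init__(self):
        self.current_region = None
        self.active_params = None

    def on_viewpoint_detected(self, region):
        """Called each time the eye tracker reports the partial region
        (J1..J9) containing the viewpoint A. Parameters are reloaded
        only when the region actually changes (warping parameter update);
        otherwise the currently active parameters are kept."""
        if region != self.current_region:
            self.current_region = region
            self.active_params = WARP_TABLE[region]
        return self.active_params

ctrl = WarpingController()
print(ctrl.on_viewpoint_detected("J1"))  # WP(J1)
print(ctrl.on_viewpoint_detected("J4"))  # WP(J4): viewpoint moved regions
```

Updating only on a region change keeps the warping stable while the viewpoint stays inside one partial region.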
  • FIG. 2B shows an eyebox EB in which the number of partial regions is increased as compared with the example of FIG. 2A.
  • the eyebox EB is divided into a total of 60 partial regions, 6 in the vertical direction and 10 in the horizontal direction.
  • each partial region is denoted as J(X, Y), with its coordinate positions in the X direction and the Y direction as parameters.
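Mapping a continuously tracked viewpoint position to a partial region J(X, Y) of this 10×6 grid is a simple quantization. The sketch below assumes hypothetical eyebox dimensions (the patent does not give them); only the 10×6 subdivision comes from the text.

```python
# Sketch: mapping a continuous viewpoint position inside the eyebox to a
# partial region J(X, Y) of the 10 (horizontal) x 6 (vertical) grid.
# The eyebox dimensions below are assumptions for illustration only.

EB_W, EB_H = 130.0, 60.0   # assumed eyebox width/height in mm
NX, NY = 10, 6             # partial regions: 10 across, 6 vertically

def partial_region(x, y):
    """x, y: viewpoint offset from the eyebox's top-left corner (mm).
    Returns (X, Y), the 1-based indices of the containing partial region;
    positions on the far edge are clamped into the last region."""
    X = min(int(x / (EB_W / NX)), NX - 1) + 1
    Y = min(int(y / (EB_H / NY)), NY - 1) + 1
    return X, Y

print(partial_region(0.0, 0.0))    # (1, 1) -> region J(1, 1), top-left
print(partial_region(65.0, 30.0))  # (6, 4) -> a region near the center
```

A finer grid (60 regions instead of 9) trades table size for smaller warping error between adjacent viewpoint positions.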
  • FIG. 3A is a diagram showing an example of a virtual image visually recognized by a user through a windshield
  • FIG. 3B is a diagram showing a state of virtual image display on a virtual image display surface
  • FIG. 3C is a diagram showing an example of the image display on the display surface of the display unit.
  • the virtual image produced by the HUD device 100 is displayed in the virtual image display area 5 in the windshield 2.
  • a virtual image SP of a vehicle speed display (display of “120 km / h”) is displayed on the front side when viewed from a user (driver or the like).
  • the virtual image of the vehicle speed display SP can be said to be a virtual image of a "standing image”, in other words, a virtual image G1 of a "face-to-face image” displayed so as to face the user.
  • the virtual image G1 of the "standing image (face-to-face image)" is non-superimposed content that is always displayed on the front side as viewed from the user (for example, content indicating the state of the vehicle 1 or the situation around the vehicle 1 that is not intended to be superimposed on a target), or may be a navigation display consisting of at least one of characters, figures, symbols, and the like.
  • a curved navigation arrow AW is displayed on the back side so as to cover the road surface 40 extending linearly in front of the vehicle 1 and gradually ascending from the side close to the vehicle 1 to the side far from the vehicle 1.
  • the display of the arrow AW gives a unique three-dimensional vision, and is also an excellent display with a sense of presence and aesthetics.
  • the display of the arrow AW can be broadly referred to as a virtual image G2 of a "depth image" having depth information (distance information) displayed as an inclined image.
  • the "curved surface” includes a part which is a plane in a part thereof.
  • the virtual image G2 of the "depth image” is an information image including a figure of an arrow relating to the progress of the vehicle 1 or another vehicle, and a figure other than the arrow (for example, a triangular figure indicating the traveling direction, a straight line indicating the center line, etc.).
  • the figure may be, for example, a figure that covers the road surface 40, with at least its main part separated from the road surface 40, and that is drawn along the road surface 40 (in other words, "a figure that gives a visual overlap with the road surface" or "a figure extending along the road surface").
  • an operation unit 9 capable of switching on / off of the HUD device or the like and setting an operation mode or the like is provided in the vicinity of the steering wheel (in a broad sense, the steering handle) 7.
  • a display device for example, a liquid crystal display device 13 is provided in the center of the front panel 11.
  • the display device 13 can be used, for example, for assisting the display by the HUD device.
  • the display device 13 may be a composite panel having a touch panel or the like.
  • the virtual image display surface PS1, which is an image plane, is a curved surface extending integrally from the near end U1 on the side closer to the vehicle 1, as viewed from the viewpoint (eye) A of the user (driver or the like) in the vehicle (own vehicle) 1, to the far end U3 on the far side, and the second distance h2 between the road surface 40 and the far end U3 is set larger than the first distance h1 between the road surface 40 and the near end U1.
  • the virtual image display surface PS1 is divided into a standing image HUD region Z1 (surrounded by a broken line in the figure), which includes the near end portion U1 and in which the virtual image G1 of the "standing image (face-to-face image)" standing on the road surface 40 is displayed, and a tilted image HUD region Z2, which is located farther than the standing image HUD region Z1 (forward in the front-rear direction (Z direction) of the vehicle 1), is inclined more toward the road surface 40 than the virtual image G1 of the "standing image (face-to-face image)", and in which the virtual image G2 of the tilted image (depth image) is displayed.
  • the intermediate portion U2 is a portion (or a point) on the virtual image display surface PS1 located at the boundary between the standing image HUD region Z1 and the tilted image HUD region Z2.
  • the virtual image G1 of the standing image can be paraphrased as the first virtual image G1. Further, the virtual image G2 of the tilted image (depth image) can be paraphrased as the second virtual image G2.
  • the distance from the user's viewpoint A (or a reference point corresponding to the viewpoint A set in the vehicle 1 or the like) to the near end U1 of the virtual image display surface PS1 (the "imaging distance", in other words, the "virtual image display distance") is L1
  • the distance to the intermediate portion U2 is L2
  • the distance to the far end portion U3 is L3.
  • L1 < L2 < L3 holds.
  • the length (extended range) of the virtual image display surface PS1 along the road surface 40 can be, for example, about 10 m to 30 m.
  • the area of the tilted image HUD region Z2 on the virtual image display surface PS1 is set larger than the area of the standing image HUD region Z1 (however, the configuration is not limited to this).
  • by setting the relative area of the tilted image HUD region Z2 large, a wide area that can be displayed with a sense of depth (in other words, an area with expressive power in the depth direction) is secured, and a realistic depth display is easy to realize. In other words, it is easy to make effective use of the tilted image HUD region Z2, while the standing image HUD region Z1 is not greatly restricted. Therefore, for example, guidance information (for example, an arrow) can be presented to the user as an intuitive and realistic virtual image.
  • FIG. 3C shows an example of display control by the display control unit (reference numeral 190 in FIG. 11).
  • the specific configuration of the HUD device will be described later.
  • the display control unit divides the image display area 45 of the display surface 164 of the display unit 160, for example at a boundary position LN' that is the boundary between the near side and the far side as viewed from the user, into a first display area Z1' corresponding to the standing image HUD region Z1 and a second display area Z2' corresponding to the tilted image HUD region Z2.
  • the display control unit (reference numeral 190 in FIG. 11) displays the first image RG1 (here, the vehicle speed display SP') at a predetermined position in the first display area Z1', and displays the second image RG2 (here, the navigation arrow figure AW) at a predetermined position in the second display area Z2'.
  • each point of U1', U2', and U3' corresponds to each point of U1, U2, and U3 in FIG. 3B.
  • the horizontal direction on the display surface 164 corresponds to the "left-right direction (X direction)" of the vehicle 1 in real space, and the vertical direction corresponds to the "height direction (Y direction)", which is the direction perpendicular to the road surface 40 in real space.
  • as the first image RG1, vehicle information, vehicle surrounding information, navigation information, and the like can be exemplified. These require accurate display, quick recognition, and so on, and vehicle speed information and the like are often displayed at all times. They are displayed in an easy-to-see manner, standing at an angle equal to or greater than a predetermined angle or threshold angle (for example, 45 degrees or more).
  • examples of such information include the vehicle speed display, road speed limit information, turn-by-turn information (for example, intersection name information, POI (specific point on map) information, etc.), and displays of various icons and characters.
  • as the second image RG2, a figure of an arrow relating to the progress of the own vehicle or another vehicle, information including figures other than arrows, and the like can be mentioned. These are mainly figures, and it is important that they can be grasped intuitively and that distance information can be obtained without discomfort. Note that the second image RG2 does not exclude, for example, characters.
  • the information image of a figure having depth is displayed as a virtual image with a sense of depth by the tilted image HUD. Specifically, examples of such information include arrow information serving as a route guide, a white line indicating a center line, colored graphic information indicating a frozen road surface area, and information of ADAS (advanced driver-assistance systems) that supports the steering operation of the driver who is the user.
  • a virtual image display with a sense of reality as shown in FIG. 3 (A) is realized.
  • with the first virtual image G1, which is a standing image standing at a certain angle or more with respect to the road surface 40, and the second virtual image G2, for example the figure of an arrow extending along the front-rear direction of the vehicle 1 so as to cover the road surface 40, it is possible to realize a display that is full of presence, easy to see, and highly expressive. Therefore, the visibility of the virtual image display in the HUD device is improved.
  • FIG. 4 (A) is a diagram showing a configuration of a HUD device mounted on a vehicle and an example of a virtual image display surface
  • FIGS. 4B and 4C are diagrams showing examples of methods for realizing the virtual image display surface shown in FIG. 4A. Note that FIG. 4A adopts a configuration different from that of FIG. 1A; therefore, different reference numerals are given even to identical parts.
  • the direction along the front of the vehicle 1 (also referred to as the front-rear direction) is defined as the Z direction, the direction along the width (horizontal width) of the vehicle 1 is defined as the X direction, and the height direction of the vehicle 1 (the direction of a line segment perpendicular to the flat road surface 40, pointing away from the road surface 40) is defined as the Y direction.
  • in FIG. 4A, inside the dashboard 41 of the vehicle (own vehicle) 1, the HUD device 100 of the present embodiment is mounted, which can display only a "standing image", can display only a "tilted image", and can also display a "standing image and a tilted image" at the same time.
  • the HUD device 100 has a display unit (sometimes referred to as an image display unit; specifically, a screen) 160 having a display surface 164 for displaying an image, a light projecting unit (image projection unit) 150, and an optical system 120 including an optical member that projects the display light K of the image onto the windshield; the optical member is a curved mirror (a concave mirror or magnifying mirror) 170 having a reflecting surface 179. The reflecting surface 179 of the curved mirror 170 is not a shape having a uniform radius of curvature, but can be, for example, a shape consisting of a set of partial regions having a plurality of radii of curvature, for example a free curved surface.
  • a free curved surface is a curved surface that cannot be expressed by a simple mathematical formula; it is expressed by setting several intersections and curvatures in space and interpolating between the intersections with higher-order equations.
  • the shape of the reflecting surface 179 has a considerable influence on the shape of the virtual image display surface PS1 and on its relationship with the road surface.
  • the shape of the virtual image display surface PS1 is affected by the shape of the reflecting surface 179 of the curved mirror (concave mirror) 170, the curved shape of the windshield (reflective translucent member 2), and the shapes of other optical members mounted in the optical system 120 (for example, a correction mirror). It is also affected by the shape of the display surface 164 of the display unit 160 (generally a flat surface, but all or part of it may be non-planar) and by the arrangement of the display surface 164 with respect to the reflecting surface 179.
  • the curved mirror (concave mirror) 170 is a magnifying reflector, and has a considerable influence on the shape of the virtual image display surface. Further, if the shape of the reflecting surface 179 of the curved mirror (concave mirror) 170 is different, the shape of the virtual image display surface actually changes.
  • the virtual image display surface PS1 extending integrally from the near end portion U1 to the far end portion U3 can be formed by arranging the display surface 164 of the display unit 160 obliquely with respect to the optical axis of the optical system (the main optical axis corresponding to the chief ray), at an intersection angle of less than 90 degrees.
  • the curved-surface shape of the virtual image display surface PS1 may be adjusted by the optical characteristics of all or part of the optical system, by the arrangement of the optical member and the display surface 164, by the shape of the display surface 164, or by a combination thereof. In this way, the shape of the virtual image display surface can be adjusted in various ways, whereby the virtual image display surface PS1 having the standing image HUD region Z1 and the tilted image HUD region Z2 can be realized.
  • the mode and degree of the overall inclination of the virtual image display surface PS are adjusted according to the mode and degree of inclination of the display surface 164 of the display unit 160.
  • the distortion of the virtual image display surface due to the curved surface of the windshield (reflective translucent member 2) is corrected by the curved-surface shape of the reflecting surface 179 of the curved mirror (concave mirror or the like) 170; as a result, it is assumed that a flat virtual image display surface PS is generated.
  • by rotating the display surface 164, the degree to which the virtual image display surface PS, which is an inclined surface, is separated from the road surface 40 is adjusted.
  • the shape of the reflecting surface of the curved mirror (concave mirror or the like) 170, which is an optical member, is adjusted (or the shape of the display surface 164 of the display unit 160 is adjusted) so that the portion of the virtual image display surface PS near the end (near end) U1 on the side closer to the vehicle 1 is bent toward the road surface and controlled to stand against the road surface (in other words, to form an elevation), thereby obtaining the virtual image display surface PS1.
  • the reflecting surface 179 of the curved mirror 170 can be divided into three portions: Near (nearby display), Center (middle (central) display), and Far (far display).
  • Near is the portion that generates the display light E1 (indicated by an alternate long and short dash line in FIGS. 4A and 4B) corresponding to the near end portion U1 of the virtual image display surface PS1, Center is the portion that generates the display light E2 (indicated by a broken line) corresponding to the intermediate portion (central portion) U2, and Far is the portion that generates the display light E3 (indicated by a solid line) corresponding to the far end portion U3.
  • the Center and Far portions are the same as the curved mirror (concave mirror or the like) 170 in the case of generating the virtual image display surface PS of the plane shown in FIG. 4B.
  • in FIG. 4C, the curvature of the Near portion is set smaller than in FIG. 4B, whereby the magnification corresponding to the Near portion becomes larger.
  • the magnification (referred to as c) of the HUD device is determined from the distance (referred to as a) from the display surface 164 of the display unit 160 to the windshield 2 and the corresponding distance on the virtual image side of the light reflected by the windshield (reflective translucent member 2); as the magnification corresponding to the Near portion becomes larger, the image is formed at a position farther from the vehicle 1. That is, in the case of FIG. 4C, the virtual image display distance is larger than in the case of FIG. 4B.
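The link between mirror curvature, magnification, and imaging distance can be illustrated with the ideal thin-mirror equations. This is a simplified single-mirror sketch, not the patent's optical design: all numbers are hypothetical, and it assumes that "curvature set smaller" refers to the radius of curvature (in this model, a smaller radius, i.e. a more strongly curved concave mirror, yields both a larger magnification and a farther virtual image).

```python
# Sketch: thin-mirror model of the magnification/imaging-distance behavior.
# Assumptions: a single ideal concave mirror with the object (the display
# surface) inside the focal length, so a magnified virtual image is formed.
# f = R / 2, and 1/d_o + 1/d_i = 1/f, with magnification m = -d_i / d_o.

def virtual_image(d_o, R):
    """d_o: object distance (m); R: radius of curvature of the mirror (m).
    Returns (d_i, m): image distance (negative -> virtual image) and
    magnification."""
    f = R / 2.0
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    m = -d_i / d_o
    return d_i, m

# Fixed object distance d_o = 0.30 m, two hypothetical radii:
d_i1, m1 = virtual_image(0.30, 0.80)  # R = 0.80 m (more strongly curved)
d_i2, m2 = virtual_image(0.30, 1.00)  # R = 1.00 m (flatter mirror)
print(round(m1, 2), round(m2, 2))     # the smaller radius magnifies more
```

In this toy model the more strongly curved mirror also places the virtual image farther away (|d_i1| > |d_i2|), matching the text's point that a larger Near-portion magnification pushes the near-end imagery to a larger virtual image display distance.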
  • the near end U1 of the virtual image display surface moves away from the vehicle 1 and bends toward the road surface 40 in a bowing manner; as a result, the standing image HUD region Z1 is formed.
  • a virtual image display surface PS1 having a standing image HUD region Z1 and an inclined image HUD region Z2 can be obtained.
  • FIG. 5 is a diagram showing an example of a display method of a HUD device (parallax type HUD device or 3D HUD device) that displays a three-dimensional virtual image.
  • the eye point P (C) indicating the viewpoint position of the driver (user) is located at the center of the eye box EB.
  • the virtual image V(C) is located in the center of the overlapping region.
  • the convergence angle of the virtual image V (C) is ⁇ d, and the virtual image V (C) is recognized as a three-dimensional image by the driver (user, viewer).
  • this three-dimensional virtual image V(C) can be displayed (formed) as follows. By distributing the light from the image IM displayed in time division with, for example, a MEMS scanner or the like, display lights L10 and R10 for the left eye and the right eye are obtained. These display lights are reflected, for example, by a curved mirror (concave mirror or the like) 105 included in the optical system (the number of reflections being at least one), whereby the display light K is projected onto the windshield (projected member) 2; the reflected light reaches both eyes of the driver and forms an image in front of the windshield 2, so that a three-dimensional virtual image V(C) with a sense of depth is displayed (formed).
  • the display method of an image having a sense of depth is not limited to the above.
  • alternatively, images for the left and right eyes may be displayed simultaneously on a flat panel or the like, and the light from each image separated using a lenticular lens or a parallax barrier to obtain the display lights L10 and R10 for the respective eyes.
  • the present invention can be applied to a parallax-type HUD device in which a parallax image (a different image) is incident on each of the left and right eyes, but is not limited to this; it can also be applied to a HUD device in which the same image is incident on each of the left and right eyes. These points will be described later.
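The convergence angle θd mentioned above is fixed by the separation of the two eyes and the perceived distance of the virtual image. As a quick sketch (the interpupillary distance value is an assumption, not from the patent):

```python
import math

# Sketch: convergence angle for a point perceived at distance d.
# theta = 2 * atan((IPD / 2) / d): the angle between the two eyes'
# lines of sight. IPD = 65 mm is an assumed typical adult value.

IPD = 0.065  # interpupillary distance in meters (assumption)

def convergence_angle_deg(d):
    """d: perceived distance of the virtual image in meters."""
    return math.degrees(2 * math.atan((IPD / 2) / d))

# A nearer virtual image subtends a larger convergence angle; this is
# one of the cues by which a parallax-type HUD conveys depth.
for d in (2.5, 10.0, 30.0):
    print(d, round(convergence_angle_deg(d), 3))
```

The rapid falloff with distance is why parallax-based depth cues matter most for near-range display content.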
  • FIG. 6 is a diagram showing an example of viewpoint tracking warping control in a HUD device capable of displaying at least one of a virtual image of a standing image (including a pseudo standing image) and a virtual image of an inclined image (depth image).
  • the same reference numerals are given to the parts common to those in FIGS. 1 and 3.
  • the inclined surface shall be interpreted in a broad sense, and may be interpreted as including, if necessary, for example a surface superimposed on the road surface (one having an inclination angle of zero).
  • the display example of FIG. 6 is the same as that described with reference to FIG.
  • the virtual image display surface PS1 has an elevation (pseudo-elevation) PS1a including an elevation HUD region Z1 and an inclined surface PS1b including an inclined image HUD region Z2.
  • a vehicle speed display SP as a first virtual image, which is a facing image (standing image) of "120 km / h", is displayed.
  • an arrow AW for navigation as a second virtual image, which is a depth image (tilted image) is displayed.
  • the display method may be either a monocular type, in which the display light of the same image is incident on the left and right eyes, or a parallax type, in which different parallax images are incident; FIG. 6 shows a parallax-type display example.
  • Each image is preliminarily given a distortion in the direction opposite to the distortion due to the curved surface of the windshield 2.
  • the first projected image G1' is a standing image, and since it is not necessary to express the depth by the parallax image, the same image (common image) is projected.
  • when the driver's viewpoints A1 and A2 are both located in the center of the eyebox EB, a perspective arrow extending linearly along the road surface can be seen in front of the driver. Further, on the front side, the vehicle speed display of "120 km/h" can be seen diagonally to the lower right of the arrow image. In the example of FIG. 6, this vehicle speed display is displayed at a fixed position in the real-space coordinate system, and can be displayed at all times, for example.
  • the driver's viewpoints A1 and A2 move (shift) along the width direction (horizontal direction, X direction) of the vehicle.
  • this movement of the viewpoint is indicated by a broken line, bidirectional arrow SE.
  • the display seen by the driver is deformed by motion parallax; this deformation works advantageously for the arrow image AW, which is a depth image, by giving it a natural sense of perspective. For the vehicle speed display SP, which is a facing image, however, it works disadvantageously, lowering the ability to recognize and read the information content. Therefore, it is preferable to perform warping individually with different image corrections, rather than applying a uniform warping to both.
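The idea of correcting the two display areas differently can be sketched as follows. The region names follow the text (Z1' for the facing image, Z2' for the depth image); the correction functions themselves are placeholders, since the patent does not give concrete warping formulas.

```python
# Sketch: applying different warping corrections to the facing-image
# region (Z1') and the depth-image region (Z2') of the display surface.
# Both correction functions are illustrative stand-ins for real warping.

def warp_facing(point):
    """Correction for the facing image (e.g. vehicle speed SP): fully
    cancel distortion so the text stays undistorted and readable."""
    x, y = point
    return (x, y)  # placeholder: identity = distortion fully removed

def warp_depth(point, viewpoint_x):
    """Correction for the depth image (e.g. arrow AW): keep a viewpoint-
    dependent deformation so the perspective follows motion parallax
    (modeled here as a hypothetical horizontal shear)."""
    x, y = point
    shear = 0.1 * viewpoint_x * y  # assumed shear coefficient
    return (x + shear, y)

def warp(point, region, viewpoint_x):
    """Dispatch a display-surface point to the correction for its region."""
    if region == "Z1'":
        return warp_facing(point)
    return warp_depth(point, viewpoint_x)

print(warp((1.0, 2.0), "Z1'", 0.5))  # (1.0, 2.0): facing image unchanged
print(warp((1.0, 2.0), "Z2'", 0.5))  # (1.1, 2.0): depth image sheared
```

The dispatch on region is the essential point: one frame is corrected with two different parameter sets, per display area rather than per frame.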
  • a specific description will be given.
  • FIG. 7A is a diagram showing a state in which the viewpoint of the driver (monocular) looking at an inclined surface (for example, the inclined image HUD region of the virtual image display surface) is located in the central divided region of the eyebox.
  • FIGS. 7B, 7C, and 7D are diagrams showing the appearance of the inclined surface (the trapezoidal shape that is its outline) when the viewpoint moves along the width direction (horizontal direction) of the vehicle from the above state and is located in the left divided region, the central divided region, and the right divided region, respectively.
  • FIG. 7E is a diagram showing a state in which a virtual image of an arrow is displayed on the inclined surface.
  • FIGS. 7F, 7G, and 7H are diagrams showing the appearance of the arrow (the shape of the virtual image of the arrow) when the viewpoint is located in the left divided region, the central divided region, and the right divided region of the eyebox, respectively.
  • FIGS. 7I and 7J are diagrams showing the appearance of the virtual image of the arrow with and without misalignment correction accompanying viewpoint movement.
  • the entire inclined surface PS1b on the virtual image display surface PS1 is considered as one display area visually recognizable by the driver; the outline (outer shape) of this display area is a rectangle when viewed in plan from the direction perpendicular to the road surface. Further, a grid consisting of orthogonal line segments is drawn inside the inclined surface PS1b, indicating that each reference point (coordinate point) at the intersections can be treated as a target of warping.
  • the eyebox EB is divided into left, center, and right partial regions ZL, ZC, and ZR.
  • the codes AL, AC, and AR attached to the eye (viewpoint) correspond to the divided regions ZL, ZC, and ZR of the above-mentioned eye box EB.
  • the driver's eye (viewpoint) is initially located in the center of the eyebox EB (divided region ZC).
  • let X0 be the coordinate value of the viewpoint in this state, and let +X1 and −X1 be the coordinate values when the viewpoint has moved to the left and to the right, respectively.
  • the image seen by the driver corresponding to the viewpoints AL, AC, and AR changes appropriately under the influence of motion parallax, as shown in FIGS. 7B, 7C, and 7D.
  • the apparent amount of positional movement caused by lateral (left-right) movement of the viewpoint is smaller the farther a portion is from the vehicle 1 and larger the closer it is, provided that the distance to the image displayed at the back is sufficiently large.
  • the upper side of the trapezoid is fixed, and the lower side is visually recognized as moving with the movement of the viewpoint position, whereby the trapezoid (isosceles trapezoid seen in the front, etc.) is deformed.
  • the outline of the virtual image display surface PS1b of the inclined surface arranged in front of the vehicle 1, when viewed from a viewpoint located in the center of the eyebox EB (in other words, when viewed from the front by the driver), looks like a trapezoid (including a parallelogram), because it is perceived with a sense of perspective such that the width becomes narrower toward the far side (FIG. 7C). Specifically, it is a trapezoid in which the upper base and the lower base are parallel, the upper base is shorter than the lower base, the interior angles at both ends of the upper base are equal, and the interior angles at both ends of the lower base are also equal (a so-called isosceles trapezoid).
  • a trapezoid is a concept that includes a parallelogram.
  • the basic shape of the outline (outer shape) of the virtual image display surface (depth HUD region Z2) of the slope is "trapezoid".
  • an "isosceles trapezoid” can be seen as shown in Fig. 7 (C), and when the viewpoint shifts to the left, it becomes a “left-tilted trapezoid (Fig. 7 (A))” and the viewpoint shifts to the right. And, it becomes a trapezoid tilted to the right (Fig. 7 (D)).
  • that is, the contour (outer shape) is treated as a "trapezoid" rather than a "rectangle" as in the conventional case. By implementing the warping with a "trapezoid" as the basis, an image (virtual image) with a more natural sense of depth can be seen after the distortion caused by the optical system is removed. Therefore, warping control is performed so that the trapezoid is deformed to reflect the deformation caused by the movement of the viewpoint.
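Why the road-aligned display area is seen as a trapezoid, and why the trapezoid tilts when the viewpoint shifts sideways, can be reproduced with a simple pinhole projection of the four corners of a rectangle lying ahead of the vehicle. All dimensions below are hypothetical; only the geometry of the effect comes from the text.

```python
# Sketch: pinhole projection of a road-aligned rectangle's outline.
# Screen coordinate of a point at lateral X, depth Z, for an eye shifted
# laterally by eye_x: x_s = F * (X - eye_x) / Z.

F = 1.0  # focal length of the pinhole model (arbitrary units)

def project_outline(eye_x, near_z=10.0, far_z=30.0, half_w=1.5):
    """Project the 4 corners of a rectangle ahead of the vehicle
    (width 2*half_w, extending from near_z to far_z in depth)."""
    corners = [(-half_w, near_z), (half_w, near_z),  # lower base (near)
               (-half_w, far_z), (half_w, far_z)]    # upper base (far)
    return [round(F * (x - eye_x) / z, 3) for x, z in corners]

center = project_outline(0.0)
left = project_outline(-0.3)  # viewpoint shifted to the left
# Centered view: the near edge projects wider than the far edge, so the
# rectangle is seen as an isosceles trapezoid.
print(center)  # [-0.15, 0.15, -0.05, 0.05]
# Shifted view: the near base shifts more than the far base, so the
# trapezoid tilts -- the deformation the warping control must reflect.
print(left)    # [-0.12, 0.18, -0.04, 0.06]
```

The near base moving three times as much as the far base (depth ratio 10:30) is exactly the motion-parallax asymmetry described above.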
  • FIGS. 7E to 7H show changes in the appearance (visual impression) not of the depth HUD region Z2 itself, but of the arrow figure AW (an individual figure) displayed on the depth HUD region Z2.
  • FIGS. 7 (E) to 7 (H) correspond to FIGS. 7 (A) to 7 (D).
  • the arrow AW seen from the front looks like FIG. 7 (G); when the viewpoint shifts to the left, the arrow AW appears tilted to the left as shown in FIG. 7 (F), and when the viewpoint shifts to the right, it appears tilted to the right as shown in FIG. 7 (H).
  • the image area of each figure deforms in the same manner as in the cases of FIGS. 7 (B) to 7 (D).
  • warping is performed based on a trapezoidal shape, an image (virtual image) with a more natural sense of depth can be seen after the distortion caused by the optical system is removed.
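The trapezoid-based contour described above can be sketched numerically. The following is a minimal illustration (not taken from the patent; the function names, the linear model, and the gain constant are all assumptions made for illustration) of tilting an isosceles trapezoid contour according to a lateral viewpoint offset, as in Figs. 7 (B)-(D):

```python
# Hypothetical sketch: tilt an isosceles trapezoid contour according to a
# lateral viewpoint offset.  Which base moves, and the linear gain, are
# illustrative assumptions, not the patent's actual correction.

def trapezoid_contour(top_w, bottom_w, height):
    """Corner points (x, y) of an isosceles trapezoid centered on x = 0."""
    return [(-top_w / 2, height), (top_w / 2, height),   # upper base
            (bottom_w / 2, 0.0), (-bottom_w / 2, 0.0)]   # lower base

def tilt_for_viewpoint(corners, viewpoint_dx, gain=0.5):
    """Shift the lower base with the viewpoint while the upper base stays
    fixed, so the trapezoid appears left-tilted for a leftward viewpoint
    shift and right-tilted for a rightward one."""
    shift = gain * viewpoint_dx
    return [(x + shift, y) if y == 0 else (x, y) for x, y in corners]

base = trapezoid_contour(top_w=2.0, bottom_w=4.0, height=1.5)
tilted = tilt_for_viewpoint(base, viewpoint_dx=1.0)  # viewpoint moved right
print(tilted)  # lower-base corners shifted by 0.5; upper base unchanged
```

In an actual HUD pipeline this geometric step would be combined with the conventional optical-distortion pre-compensation described elsewhere in the text.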
  • the display of the arrow used in route guidance or the like is designed so as to be visually recognized substantially in conformity with the road surface; the apparent shape of the road surface on which the display is superimposed likewise changes with the above-mentioned viewpoint movement.
  • for an image intended to match (be superimposed on) the road surface, or for a display intentionally arranged in the depth direction (an image understood to be distant), this shape change can be said to assist a person's recognition.
  • the image area of an image with a sense of depth can mean the entire display area, or the display area of an individual image included in that display area; the term can be interpreted as appropriate.
  • image correction is performed so that the contour, when correctly visually recognized as a virtual image, is a trapezoid.
  • an image (virtual image) having a more natural sense of depth is visually recognized by the driver (user, viewer).
  • since the depth image (tilted image) is generated so as to produce a stereoscopic effect due to parallax, it is naturally assumed from the beginning that the image will look distorted under the influence of motion parallax.
  • a natural (true-to-life) three-dimensional effect (perspective) is thereby generated; the trapezoidal deformation (distortion) shown in FIGS. 7 (B) and 7 (D) therefore assists natural viewing of the depth image.
  • the isosceles trapezoid is corrected so that it is visually recognized as tilted to the left, bringing the appearance closer to a natural one, and the corrected image area is then subjected to conventional warping, in which distortion having characteristics opposite to the distortion caused by the optical member is applied in advance.
  • the display as shown in FIG. 7 (B) is thereby reproduced; in other words, a depth image with a more natural perspective (three-dimensional effect) can be displayed.
  • when the degree of inclination of the arrow, in other words the degree of distortion of the outline (trapezoid) of the image area displaying the arrow, is set appropriately, the image causes little discomfort.
  • when the displayed image is a road surface superimposed image (for example, an image of an arrow figure as shown in FIGS. 7 (F) and 7 (H)) displayed so as to be superimposed on the road surface, and the virtual image of that road surface superimposed image is displayed on an inclined surface tilted with respect to the road surface so that the virtual image appears lifted off the road surface (see the display examples in FIGS. 3 and 6), it is preferable to perform motion parallax correction so as to reduce the degree of deformation of the arrow (in other words, of the trapezoid that forms the outline of the display area of the arrow image), compared with the case where the virtual image is not lifted off the road surface (where the degree of superimposition on the road surface is relatively high).
  • the position of the virtual image of the actual arrow figure (the virtual image of the road surface superimposed image) is closer to the driver (viewer) than the position of the road surface on which the arrow is superimposed.
  • motion parallax occurs in the virtual image of the arrow seen in the foreground.
  • motion parallax also occurs for the road surface seen behind it; since the road surface is displayed on the far side its motion parallax is smaller, while the virtual image of the arrow in the foreground is displayed on the near side, so its motion parallax is perceived more strongly.
  • the motion parallax of the virtual image of the arrow is therefore felt to be larger than when, for example, the virtual image of the arrow is in close contact with the road surface; as in the example of FIG. 7 (I), the leftward inclination of the virtual image of the arrow is emphasized. In this case, correction is performed so as to reduce the inclination while leaving some leftward inclination, in other words so as to suppress the deformation of the trapezoid that forms the outline of the display area of the arrow image; this yields a display with appropriate motion parallax as shown in FIG. 7 (J).
  • the motion parallax is not canceled entirely; a certain amount remains. The amount of suppression applied to the motion parallax of the depth image is therefore smaller than the amount of suppression applied to the motion parallax of the facing image.
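The distinction just described, full cancellation of the motion-parallax deformation for a facing image but only partial suppression for a depth image, can be expressed as a simple scaling rule. The sketch below is an illustration only; the function name, the tilt-angle representation, and the suppression factor of 0.6 are assumptions, not values from the patent:

```python
# Illustrative sketch: the apparent tilt induced by motion parallax is
# fully canceled for a facing (standing) image, but only partially
# suppressed for a depth (tilted) image so that some natural motion
# parallax remains (cf. Figs. 7(I) -> 7(J)).

def corrected_tilt(apparent_tilt_deg, image_type, depth_suppression=0.6):
    if image_type == "facing":        # rectangle maintained: cancel entirely
        return 0.0
    elif image_type == "depth":       # suppress, but do not offset completely
        return apparent_tilt_deg * (1.0 - depth_suppression)
    raise ValueError(f"unknown image type: {image_type}")

print(corrected_tilt(10.0, "facing"))  # 0.0 -> no residual deformation
print(corrected_tilt(10.0, "depth"))   # 4.0 -> reduced but nonzero tilt
```

The residual tilt for the depth image is what preserves the "appropriate motion parallax" the text refers to.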
  • an elevation surface (including a pseudo-elevation surface), for example the standing image HUD region of the virtual image display surface.
  • the viewpoint (monocular) of the driver looking at it is the center of the eyebox.
  • FIGS. 8 (B), (C), and (D) are views showing the case where the viewpoint moves along the width direction (horizontal direction) of the vehicle from the state of being located in the central divided area of the eyebox.
  • FIGS. 8 (E), (F), and (G) are diagrams showing the contents of image correction applied to the display image in order to suppress deformation of the rectangle that is the contour of the elevation surface.
  • FIGS. 8 (H), (I), and (J) are diagrams showing the appearance of the virtual image (its shape) when the viewpoint is located in each of the left, central, and right divided areas of the eyebox.
  • FIGS. 8 (A) to 8 (D) correspond to FIGS. 7 (A) to 7 (D) shown above.
  • a vehicle speed display SP of “120 km / h” is displayed on the virtual image display surface PS1a on the elevation surface.
  • the vehicle speed display seen from the front looks like FIG. 8 (C); when the viewpoint shifts to the left, the vehicle speed display SP appears tilted to the left as shown in FIG. 8 (B), and when the viewpoint shifts to the right, it appears tilted to the right as shown in FIG. 8 (D).
  • the contour (outer shape) of the image area is made rectangular (a rectangle or a square) as in the conventional case. The vehicle speed display SP and the like displayed on the elevation surface are often displayed at a fixed position at all times; it is important that numbers, symbols, characters, etc. can be read accurately, and no particular perspective is required. Since there is no advantage in using a trapezoid as in the case of an inclined image, a rectangular outer shape is adopted here as in the conventional case.
  • the display will be deformed as shown in FIGS. 8 (B) and 8 (D).
  • the reason for the deformation is as follows: even if the distortion of the optical system is removed by warping and the display light of the rectangular image reaches the viewpoint (eye), the position of the vehicle speed display SP does not change, so the arrival direction (incidence direction) of the display light differs before and after the viewpoint shift. If a distortion-free rectangle as shown in Fig. 8 (C) is visible before the viewpoint shift, then after the shift the eye receives light arriving from a different angle, and when the brain interprets the image, it appears distorted by motion parallax as shown in FIGS. 8 (B) and 8 (D).
  • Deformation of the image due to motion parallax as shown in FIGS. 8 (B) and 8 (D) is not preferable in terms of display cognition. Therefore, in the present embodiment, such deformation (deformation of a rectangle such as a rectangle) is suppressed (cancelled) during the warping process of the vehicle speed display SP (in a broad sense, a facing image (standing image)).
  • specifically, the image (the rectangular outline of the image area) is first deformed in the direction opposite to the actual deformation caused by motion parallax, and then conventional warping is performed, giving it distortion with characteristics opposite to the distortion generated in the optical system.
  • the images in the state where distortion having the characteristic opposite to the actual distortion has been applied are shown in FIGS. 8 (E) to 8 (G).
  • the images at this time are shown in FIGS. 8 (H) to 8 (J).
  • the outer shape (contour) of the image area of the vehicle speed display SP is rectangular even when the viewpoint shift occurs.
  • the outer shape of the image area is maintained in a rectangular shape, and an easy-to-see facing image is always displayed.
  • the decrease in recognizability is suppressed, and since a facing image is always obtained, there is no sense of discomfort; visibility is therefore improved.
  • the contour of the image area of the depth image (tilted image) and the contour of the image area of the facing image (standing image) are set and warped individually, which means that substantially different warping methods are carried out for each image.
  • for depth images, a display with improved visibility that gives a natural perspective is realized; for facing images (standing images), a display that is easier to see and has improved recognizability (a display suitable for grasping accurate information, etc.) is realized.
  • FIGS. 9 (A) to 9 (D) are diagrams for explaining two examples of image correction (deforming the lower side or the upper side) applied to the display image while suppressing deformation of the rectangle that is the contour of the elevation surface.
  • with reference to FIGS. 8 (E) and 8 (G) above, it was explained that at the time of warping the rectangle is deformed so as to cancel the deformation actually caused by motion parallax. There are two possible methods for this, described specifically below.
  • in FIG. 9 (A), the viewpoint shifts to the right while the vehicle speed display SP is displayed.
  • the vehicle speed display SP is distorted so as to incline to the left as described above.
  • the lower side BC of the rectangle may be fixed and the upper side AD may be moved to distort the image.
  • the upper side AD is displaced to the upper side A'D'.
  • the upper side AD may be fixed and the lower side BC may be moved to distort.
  • the lower side BC is displaced to the lower side B'C'.
  • there is a difference in that the amount of movement can be smaller when the upper side is moved to deform the rectangle than when the lower side is moved.
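The two correction methods above amount to a horizontal shear of the rectangle with one base held fixed. The sketch below is an illustrative model only (the corner labeling and function are assumptions, not the patent's implementation), showing either the upper side AD moving while BC stays fixed, or the reverse, as in Figs. 9 (A)-(D):

```python
# Sketch of the two rectangle-deformation methods: fix the lower side BC
# and move the upper side AD, or fix the upper side AD and move BC.
# Corner order and the shear model are illustrative assumptions.

def shear_rectangle(corners, dx, fix="lower"):
    """corners: [A, B, C, D] as (x, y), with A/D on the upper side (y > 0)
    and B/C on the lower side (y == 0).  dx is the horizontal displacement
    applied to whichever side is NOT fixed."""
    def moved(point, on_upper):
        x, y = point
        move_upper = (fix == "lower")        # fixing BC means AD moves
        return (x + dx, y) if on_upper == move_upper else (x, y)
    a, b, c, d = corners
    return [moved(a, True), moved(b, False), moved(c, False), moved(d, True)]

rect = [(0, 1), (0, 0), (2, 0), (2, 1)]            # A, B, C, D
print(shear_rectangle(rect, dx=0.3, fix="lower"))  # AD -> A'D', BC fixed
print(shear_rectangle(rect, dx=0.3, fix="upper"))  # BC -> B'C', AD fixed
```

Because the upper base sits closer to the vanishing direction of the perceived distortion, the same corrective effect can, per the text, be obtained with a smaller `dx` when the upper side is the one moved.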
  • FIG. 10A is a diagram showing the contents of warping control for an inclined image (depth image) displayed on an inclined surface
  • FIG. 10 (B) is a diagram showing the contents of warping control for a standing image (facing image) displayed on an elevation surface.
  • FIGS. 10 (C) to 10 (E) are diagrams showing an example of generating an image to be displayed on the display surface by image rendering
  • FIGS. 10 (F) to 10 (H) are diagrams showing examples of the image (visually recognized image, visually recognized virtual image) seen by the driver according to the viewpoint position.
  • the image display area 45 of the display surface 164 of the display unit 160 is divided on the basis of the boundary position LN', which is the boundary between the near side and the far side as seen from the user.
  • the navigation arrow figure AW' as the second image RG2 is displayed in the second display area Z2'.
  • the arrow figure AW' is shown by a vertically long ellipse for convenience.
  • the outer shape (contour) of the image area of the arrow figure AW' is corrected according to the position of the viewpoint during the warping process. That is, when the viewpoint is located in the center of the eyebox EB, an isosceles trapezoid is formed as in a2; when the viewpoint shifts to the left, the isosceles trapezoid is tilted to the right and distorted as in a1; and when the viewpoint shifts to the right, the isosceles trapezoid is tilted to the left and distorted as in a3.
  • in other words, warping is performed taking the motion parallax into consideration so as to obtain an appropriate trapezoidal deformation.
  • for example, the upper side of the image would be deformed so as to tilt greatly to the left due to motion parallax, but in the warping control the upper side is tilted slightly to the right. This is because, as described above, the virtual image surface is in front of the road surface, so the motion-parallax deformation of the virtual image surface is larger than the motion parallax of the road surface, and the influence (deformation) of this motion parallax needs to be weakened.
  • then, distortion having characteristics opposite to the distortion caused by the windshield or the optical system of the HUD device (collectively referred to as the optical member) is applied to each image (the image area of each image). In this way, viewpoint-following warping for the depth image (tilted image) is performed.
  • the image SP' of the vehicle speed display as the first image RG1 is displayed in the first display area Z1'.
  • the image SP' of the vehicle speed display is shown by a horizontally long ellipse for convenience.
  • the outer shape (contour) of the image area is corrected according to the viewpoint position.
  • distortion having characteristics opposite to the distortion caused by the optical member is applied to each image (the image area of each image) in correspondence with the viewpoint position. In this way, viewpoint-following warping for the facing image (standing image) is performed.
  • each image (each image's area) is individually subjected to the image correction specific to it, as described with reference to FIG. 10 (A) or FIG. 10 (B). That is, different image correction methods are adopted depending on the type of image, and correction is performed individually.
  • then, the images corresponding to the viewpoint position (the images after the warping processing) are combined, for example by image rendering, to generate one image.
  • the single image obtained by this synthesis is designated by reference numerals Q1 to Q3.
  • this image composition process can also be regarded as part of the warping process.
  • depth images (or their image areas) and facing images (or their image areas) are handled separately, image corrections are performed individually, and the results are then combined, so the correction desired for each image can be performed quickly.
  • the image processing itself can be simplified. Therefore, the load on the image processing unit (which can include the image generation unit, the image rendering unit, and the like) associated with the image correction processing is also reduced.
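The per-type correction followed by composition described above can be sketched as a small pipeline. This is a structural illustration only; the correction functions below are placeholders standing in for the trapezoid-based and rectangle-based warping of Figs. 10 (A) and 10 (B), and the names are assumptions:

```python
# Minimal sketch of "correct each image individually, then compose"
# (cf. Figs. 10(A)-(E)).  The corrections are placeholders, not the
# actual warping math.

def correct_depth_image(img):
    """Fig. 10(A) style: trapezoidal image-area contour."""
    return {"area": "trapezoid", "content": img}

def correct_facing_image(img):
    """Fig. 10(B) style: rectangular image-area contour maintained."""
    return {"area": "rectangle", "content": img}

def compose(images):
    """Apply the per-type correction to each (type, image) pair, then
    combine the corrected layers into one display image (image rendering),
    as with Q1-Q3 in Figs. 10(C)-(E)."""
    corrected = [correct_depth_image(i) if t == "depth" else correct_facing_image(i)
                 for t, i in images]
    return {"layers": corrected}

frame = compose([("facing", "SP'"), ("depth", "AW'")])
print(len(frame["layers"]))  # two individually corrected layers in one image
```

Keeping the two paths separate until the final composition is what lets each image type use its own correction without complicating the other's processing.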
  • FIGS. 10 (F), (G), and (H) are diagrams showing how the virtual image looks when the viewpoint is located in each of the left, central, and right divided areas of the eyebox.
  • the first contour Za is the region surrounded by a trapezoid (for example, an isosceles trapezoid).
  • consider the first contour Za when the viewpoint moves from the center to the left.
  • the isosceles trapezoid tends to tilt to the left due to the influence of motion parallax (in other words, the upper side of the first contour Za tends to shift to the left relative to the lower side).
  • by executing the warping control that tilts the isosceles trapezoid to the right as in a1, the deformation by which the isosceles trapezoid tends to tilt to the left due to motion parallax is reduced, as shown in FIG. 10 (F).
  • the virtual image displayed on the first contour Za is thus visually recognized by the driver (the driver whose viewpoint has shifted to the left of center).
  • when the viewpoint moves from the center to the right, the isosceles trapezoid tends to tilt to the right due to the influence of motion parallax (in other words, the upper side of the first contour Za tends to shift to the right relative to the lower side).
  • by executing the warping control that tilts the isosceles trapezoid to the left as in a3, the deformation by which the isosceles trapezoid tends to tilt to the right due to motion parallax is reduced, as shown in FIG. 10 (H).
  • the virtual image displayed on the first contour Za is thus visually recognized by the driver (the driver whose viewpoint has shifted to the right of center).
  • the first contour Za may be a part of the tilted image HUD region Z2, as viewed from the center of the eyebox EB, surrounded by a trapezoid (for example, an isosceles trapezoid) whose upper and lower sides are parallel; if the tilted image HUD region Z2 itself is trapezoidal, it may be the entire region.
  • the second contour Zb is the region of the standing image HUD region Z1, as viewed from the center of the eyebox EB, surrounded by a rectangle (including a square).
  • when the viewpoint moves from the center to the left, the rectangle tends to tilt to the left due to the influence of motion parallax (in other words, the upper side of the second contour Zb tends to shift to the left relative to the lower side).
  • by executing warping control that inclines the rectangle to the right, as in b1, so as to cancel the deformation of the second contour Zb caused by motion parallax, the tendency of the rectangle to incline to the left is canceled.
  • the virtual image displayed on the second contour Zb, with that deformation canceled, is visually recognized by the driver (the driver whose viewpoint has shifted to the left of center).
  • when the viewpoint moves from the center to the right, the rectangle tends to tilt to the right due to the influence of motion parallax (in other words, the upper side of the second contour Zb tends to shift to the right relative to the lower side).
  • by executing warping control that inclines the rectangle to the left, as in b3, so as to cancel the deformation of the second contour Zb caused by motion parallax, the tendency of the rectangle to incline to the right is canceled.
  • the virtual image displayed on the second contour Zb, with that deformation canceled, is visually recognized by the driver (the driver whose viewpoint has shifted to the right of center).
  • the second contour Zb may be a part of the standing image HUD region Z1, as seen from the center of the eyebox EB, surrounded by a rectangle; if the standing image HUD region Z1 itself is rectangular, it may be the entire region.
  • the display layers may be separated according to whether depth is expressed: the display layer of the tilted image HUD area Z2, which expresses depth, and the display layer of the standing image HUD area Z1, which does not, are separated; after separate shape correction processing is performed for each layer, the images are superimposed to form the final display image.
  • the tilted image HUD region Z2 that expresses the depth may be composed of a plurality of display layers that are different for each region. Specifically, the tilted image HUD region Z2 may be composed of a plurality of display layers divided into a plurality of regions in the depth direction. Further, the shape may be corrected individually according to the display attribute of each content.
  • FIG. 11 is a diagram showing a configuration example of the HUD device.
  • the upper view of FIG. 11 is the same as that of FIG. 4 (A).
  • the configuration of the display control unit 190 will be described.
  • the display control unit 190 has a viewpoint position detection unit 192 and a warping processing unit 194.
  • the warping processing unit 194 has a warping control unit 195, a ROM 198 (holding the first and second image conversion tables 199 and 200), a VRAM 201 (storing, for example, the image data 196 and the post-warping data 197), and an image generation unit (image rendering unit) 202.
  • the warping control unit 195 may be provided outside the warping processing unit 194. Further, the viewpoint position detection unit 192 can also be provided outside the HUD device 100.
  • the first image conversion table 199 stores warping parameters for facing images (standing images).
  • the second image conversion table 200 stores warping parameters for a depth image (tilted image).
  • the warping control unit 195 controls the image generation unit (image rendering unit) 202, the ROM 198, the VRAM 201, and the like so that the viewpoint-position-tracking warping described above is performed using the warping parameters corresponding to the viewpoint position information supplied from the viewpoint position detection unit 192.
  • the original image data 196 of the image to be displayed is stored in the VRAM 201.
  • the original image data 196 is read out, and by applying the warping parameters corresponding to the viewpoint position to it, the deformation correction of the image area described above and the image correction that applies in advance distortion with characteristics opposite to the distortion of the optical system are performed.
  • the post-warping data 197 is temporarily stored in the VRAM 201 and then supplied to the image generation unit (image rendering unit) 202, where image composition processing as shown, for example, in FIGS. 10 (C) to 10 (E) is performed. As a result, one image (display image) is generated.
  • the generated image is supplied to a display unit (for example, a flat panel display such as a liquid crystal panel) 160 and displayed on a display surface (reference numeral 164 in FIG. 3C).
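The data flow through the warping processing unit of Fig. 11 can be sketched as a parameter lookup keyed by image type and viewpoint position. The table contents, the three-zone eyebox split, and all names below are invented for illustration; the patent only specifies that a first table (facing images) and a second table (depth images) hold the warping parameters:

```python
# Hedged sketch of the Fig. 11 data flow: the warping control unit picks
# parameters from the first table (facing images) or the second table
# (depth images) according to the detected viewpoint position.  The
# parameter labels and the 3-zone eyebox split are illustrative.

FIRST_TABLE  = {"left": "W_f_L", "center": "W_f_C", "right": "W_f_R"}  # facing
SECOND_TABLE = {"left": "W_d_L", "center": "W_d_C", "right": "W_d_R"}  # depth

def eyebox_zone(viewpoint_x, half_width=0.05):
    """Map a lateral viewpoint offset (meters) to a divided eyebox area."""
    if viewpoint_x < -half_width / 3:
        return "left"
    if viewpoint_x > half_width / 3:
        return "right"
    return "center"

def select_params(viewpoint_x, image_type):
    table = FIRST_TABLE if image_type == "facing" else SECOND_TABLE
    return table[eyebox_zone(viewpoint_x)]

print(select_params(-0.04, "facing"))  # left-zone facing-image parameters
print(select_params(0.0, "depth"))     # center-zone depth-image parameters
```

A real implementation would interpolate between stored parameter sets rather than switch discretely, but the two-table, viewpoint-indexed structure is the point being illustrated.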
  • FIG. 12A is a diagram showing an example of a configuration of a main part of the HUD device when a standing image (pseudo standing image) is fixedly displayed at a predetermined position in the viewpoint coordinate system regardless of the viewpoint position
  • FIG. 12 (B) is a diagram showing that the display (virtual image) appears to move following the movement of the viewpoint.
  • the problem that the facing image is distorted by a viewpoint position shift can be attributed to the display position of the facing image (virtual image) being fixed in a coordinate system set in real space.
  • therefore, a viewpoint coordinate system is used (a coordinate system whose axes are set along the front-rear, left-right, and up-down directions of the vehicle, centered on the person's viewpoint; when the person's viewpoint (including the face, etc.) moves, the coordinate system moves accordingly), and control is performed so that the virtual image of the facing image is always at the same position in that coordinate system, by appropriately moving the position of the image (rectangular image area) on the display surface of the display unit, or the corresponding position on the image generation surface or in the image generation space within the image generation unit, in correspondence with the movement of the viewpoint.
  • the position of the virtual image moves appropriately according to the viewpoint shift.
  • since the relative positional relationship between the virtual image of the facing image and the person's viewpoint is always constant, motion parallax need not be considered and the appearance does not change. The above-mentioned problem, that the image is deformed by motion parallax and recognizability is lowered, can therefore also be solved by this method.
  • the fact that the display position of the image (virtual image) does not change has the effect of giving a sense of security to the driver or the like who is the viewer.
  • in FIG. 12 (A), a viewpoint-coordinate-system movement amount (rotation amount) calculation unit 193 is added to the configuration shown in the lower part of FIG. 11. Since the other parts are the same as in FIG. 11, FIG. 12 (A) is simplified and only the main parts are shown.
  • the viewpoint position detection unit 192 detects a change in the viewpoint position (displacement of the viewpoint) in the coordinate system (XYZ coordinate system) set in real space, and this change (displacement) is detected as the amount of movement (including the amount of rotation) of the viewpoint coordinate system (X'Y'Z' coordinate system).
  • the warping control unit 195 moves the viewpoint coordinate system on the basis of the detected movement amount, and controls the display so that the image is displayed at a predetermined position (the coordinate point corresponding to the position before the movement) in the viewpoint coordinate system after the movement.
  • for example, when the viewpoint A moves from coordinate point X0 to +X1, the vehicle speed display SP also moves the same distance in the same direction (the left-right direction) as the coordinate point movement. If the vehicle speed display SP was visible in front of the driver before the viewpoint moved, that state is maintained after the movement, so motion parallax does not occur; the driver can always see an undistorted virtual image of the front-facing image (standing image), and visibility is improved.
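The viewpoint-coordinate-system approach of Fig. 12 reduces to shifting the display position by the same displacement as the viewpoint, so the virtual image stays fixed in the moving X'Y'Z' frame. The following is an illustrative sketch (function and variable names are assumptions):

```python
# Sketch of the Fig. 12 approach: keep the facing image at a fixed
# position in a coordinate system that moves with the viewpoint, by
# shifting the display position by the viewpoint's own displacement.

def display_position(initial_pos, viewpoint_delta):
    """Move the image by the viewpoint's displacement so the virtual image
    stays at the same place in the viewpoint (X'Y'Z') coordinate system."""
    return tuple(p + d for p, d in zip(initial_pos, viewpoint_delta))

sp0 = (0.0, 1.2)                         # vehicle speed display SP at X0
sp1 = display_position(sp0, (0.1, 0.0))  # viewpoint A moves X0 -> +X1
print(sp1)  # (0.1, 1.2): SP follows the viewpoint, so no motion parallax
```

Since the virtual image and the viewpoint move together, their relative geometry never changes, which is exactly why no motion-parallax deformation arises in this scheme.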
  • FIG. 13 is a diagram showing an example of a procedure in viewpoint tracking warping control.
  • in step S1, the viewpoint position is detected and the warping process is started.
  • in step S2, the type of image is determined: whether the image to be displayed is a depth image (tilted image) or a facing image (standing image, pseudo standing image).
  • step S2 If it is determined to be a depth image in step S2, the process proceeds to step S3.
  • in step S3, warping is performed with the outer shape (contour) of the image area, as correctly visually recognized as a virtual image, being a trapezoid (including a parallelogram).
  • when the viewpoint position shifts in the width direction of the vehicle, image processing is performed that gives the trapezoid, in advance, a deformation with the characteristic opposite to the change in appearance caused by motion parallax, so as to reduce the degree of deformation while maintaining the direction of the change in the appearance of the virtual image due to motion parallax.
  • when the virtual image of the road surface superimposed image is displayed on the virtual image display surface of the inclined surface, for example, and appears to float above the road surface, correction is made so as to suppress the deformation of the virtual image due to motion parallax (however, an appropriate amount of motion parallax is left; the motion parallax is not canceled entirely).
  • step S2 If it is determined in step S2 that the image is a facing image, the process proceeds to step S4.
  • step S4 warping is performed with the outer shape (contour) of the image area correctly visually recognized as a virtual image as a rectangle or a square (collectively referred to as a "rectangle").
  • when the viewpoint position shifts in the width direction of the vehicle, the image (image area) is distorted in advance with characteristics opposite to the change in the appearance of the virtual image due to motion parallax.
  • this correction can be called a rectangle-maintaining correction (cancellation or suppression of motion parallax).
  • in the rectangle-maintaining correction, the lower side (lower base) of the rectangle may be fixed and the upper side (upper base) moved to distort the image.
  • alternatively, the upper side (upper base) may be fixed and the lower side (lower base) moved.
  • in step S5, it is determined whether the image correction has been completed.
  • if it has been completed (Y in step S5), the viewpoint tracking warping process is terminated.
  • if it has not (N in step S5), the process returns to step S2.
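The control flow of Fig. 13 (steps S1-S5) can be sketched as a simple dispatch loop. The helper functions below are placeholders for the trapezoid-based warping of step S3 and the rectangle-based warping of step S4; all names are illustrative assumptions:

```python
# Sketch of the Fig. 13 procedure: detect the viewpoint (S1), branch on
# image type (S2), warp with a trapezoidal contour (S3) or a rectangular
# contour (S4), and repeat until all corrections are done (S5).

def warp_as_trapezoid(img):   # step S3: depth (tilted) image
    return f"trapezoid({img})"

def warp_as_rectangle(img):   # step S4: facing (standing) image
    return f"rectangle({img})"

def viewpoint_tracking_warping(images):
    results = []
    # step S1: viewpoint position detection would happen here
    for image_type, img in images:       # looping back to S2 is the N-branch of S5
        if image_type == "depth":        # step S2: image-type decision
            results.append(warp_as_trapezoid(img))
        else:
            results.append(warp_as_rectangle(img))
    return results                       # step S5: all corrections completed

print(viewpoint_tracking_warping([("depth", "AW"), ("facing", "SP")]))
```

The key point carried over from the text is the single branch on image type: every image passes through exactly one of the two contour-specific warping paths.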
  • as described above, it is possible to realize warping control that ensures more natural visibility for a depth image (tilted image) premised on stereoscopic vision and for a facing image (standing image, pseudo standing image) that is in principle not premised on stereoscopic vision.
  • recent HUD devices tend to be developed on the assumption that the virtual image is displayed over a fairly wide area in front of the vehicle; in this case, the virtual image display area on the windshield is expanded and a variety of displays become possible.
  • the required warping method will differ depending on the type of image.
  • since the warping method can be changed according to the type of image, the visibility of the various displays is not degraded; high functionality and high performance of the HUD device can therefore be realized.
  • the present invention can be used in either a monocular HUD device in which the display light of the same image is incident on each of the left and right eyes, or a parallax type HUD device in which an image having parallax is incident on each of the left and right eyes.
  • the term "vehicle" can be interpreted broadly.
  • terms related to navigation shall be interpreted in a broad sense in consideration of, for example, the viewpoint of navigation information in a broad sense useful for vehicle operation.
  • the HUD device shall include a device used as a simulator (for example, a simulator as an aircraft simulator, a simulator as a game device, etc.).
  • FIGS. 14 (A) and 14 (B) are views showing modified examples of the virtual image display surface.
  • the cross-sectional shape of the virtual image display surface PS1 as seen from the width direction (horizontal direction, X direction) of the vehicle is not limited to the one having a convex shape on the driver side.
  • the virtual image display surface PS1 may have a concave shape on the driver side.
  • the virtual image display surface PS1 does not have to be curved as shown in FIG. 14 (B).
  • Optical system including optical member, 150 ... Light projecting unit (image projection unit), 160 ... Display unit (for example, liquid crystal display device) Screen, etc.) 164 ... Display surface, 170 ... Curved mirror (concave mirror, etc.), 179 ... Reflective surface, 188 ... Imaging unit (pupil detection unit, face imaging unit, etc.), 190 ... Display control unit, 192 ... viewpoint position detection unit, 193 ...


Abstract

An object of the present invention is to realize warping control whereby more natural visibility can be ensured for at least one of a depth image (oblique image) in which three-dimensional vision is assumed, and a direct-facing image (standing image) which is preferably displayed so as to face an operator and in which three-dimensional vision is not assumed in principle. When performing warping processing on a depth image AW displayed on an inclined virtual image display surface PS1a, a control unit 190 (or 195) for performing viewpoint position tracking/warping control performs warping control with the outline of an image area of the depth image as a trapezoid in which at least one set of opposite sides are parallel to each other, and, when performing warping processing on a direct-facing image SP, performs warping control with the outline of the image area of the direct-facing image as a rectangle, including a long rectangle or a square.

Description

Head-up display device
 The present invention relates to a head-up display (HUD) device and the like that projects display light of an image onto a projection-target member such as a windshield or combiner of a vehicle, thereby displaying a virtual image in front of the driver or the like.
 In a HUD device, an image correction process (hereinafter referred to as warping processing) is known in which the image to be projected is pre-distorted with characteristics opposite to the distortion of the virtual image caused by the curved shapes of the optical system, the windshield, and the like. Warping processing in a HUD device is described, for example, in Patent Document 1.
 Performing warping processing based on the driver's viewpoint position (viewpoint-following warping) is described, for example, in Patent Document 2.
Japanese Unexamined Patent Publication No. 2015-87619
Japanese Unexamined Patent Publication No. 2014-199385
 The present inventor has studied implementing viewpoint-position-following warping control, in which the warping parameters are updated according to the viewpoint position of the driver (a term that can be interpreted broadly to include an operator, crew member, or the like), and has recognized the new problems described below.
 The HUD devices described in Patent Documents 1 and 2 assume only a virtual image display surface standing perpendicular to the road surface. In the warping control for this case, the image region is treated as a rectangle (oblong or square), and image correction is performed that applies distortion with characteristics opposite to the distortion caused by the optical members (the optical system in a broad sense, including the windshield and the like).
 On the other hand, HUD devices in which the virtual image display surface is tilted in the depth direction (the forward direction of the vehicle) have been proposed in recent years (tilted-surface HUDs).
 The present inventor has further improved the tilted-surface HUD so that a single virtual image display surface includes both a tilted surface and an upright surface (including a pseudo-upright surface). For example, a depth image (virtual image), such as an arrow extending far forward along the road surface, is displayed on the distant tilted surface, while non-superimposed content that is not overlaid on the background, such as a vehicle speed readout (for example, a virtual image composed of always-displayed numerals and characters), is displayed on the nearby upright surface.
 In this case, warping processing must also be performed on displays with a sense of depth (a stereoscopic effect), such as images that appear superimposed on the road surface. For example, warping processing that takes parallax into account is required.
 Furthermore, while driving, the driver's viewpoint may move (shift) along the width direction (left-right direction) of the vehicle. Visibility then changes due to motion parallax, and if this causes an adverse visual effect, it is preferable to devise the warping processing so as to address this point as well.
 "Motion parallax" is the parallax produced when the observer's viewpoint (or the observed object) moves: even for an equal change in gaze direction, nearby objects shift greatly in the visual field while distant objects shift little, and this difference in the amount of positional change makes depth easy to perceive.
 Patent Documents 1 and 2 do not examine the relationship between parallax and warping processing at all, nor do they consider how to handle viewpoint movement that occurs during warping processing.
 One object of the present invention is to realize warping control that can ensure more natural visibility for, for example, a depth image (tilted image) premised on stereoscopic viewing, or a directly facing image (upright image) not premised on stereoscopic viewing.
 Other objects of the present invention will become apparent to those skilled in the art by reference to the aspects and best embodiments exemplified below and the accompanying drawings.
 To facilitate an understanding of the outline of the present invention, aspects according to the present invention are exemplified below.
 In a first aspect, a head-up display (HUD) device is mounted on a vehicle and projects an image onto a projection-target member provided on the vehicle so that a driver visually recognizes a virtual image of the image, the device comprising:
 an image generation unit that generates the image;
 a display unit that displays the image;
 an optical system including an optical member that reflects display light of the image and projects it onto the projection-target member; and
 a control unit that performs viewpoint-position-following warping control, updating a warping parameter according to the driver's viewpoint position within an eyebox and using that warping parameter to correct the image displayed on the display unit,
 wherein the control unit:
 when performing warping processing on a depth image, performs warping control such that the outline of the image region of the depth image, when correctly viewed as a virtual image, is a trapezoid in which at least one pair of opposite sides is parallel; and
 when performing warping processing on a directly facing image, performs warping control such that the outline of the image region of the directly facing image, when correctly viewed as a virtual image, is a rectangle, including an oblong rectangle or a square.
 In the first aspect, the outline (outer shape) of the image region subject to warping processing is made a trapezoid for a depth image (e.g., a tilted image) and a rectangle (oblong or square) for a directly facing image (e.g., an upright image). A depth image is an image with depth displayed, for example, on a tilted virtual image display surface; it may be referred to simply as a depth image or a tilted image. A directly facing image is an image that is preferably displayed so as to face the driver, for example an image displayed on an upright surface standing relative to the road surface (including not only a surface orthogonal to the road surface but also a "pseudo-upright surface" that is at least partly tilted yet can be treated as upright as a whole). In the following description it may be referred to simply as a directly facing image or an upright image.
 Viewpoint-position-following warping processing can be realized, for example, by updating warping parameters (e.g., function values for a filter that determine the polynomials, multipliers, constants, and the like used for warping image correction with a digital filter) according to the viewpoint position within the eyebox, and using those warping parameters to perform coordinate transformation on a plurality of coordinate points set in the image region of the image to be warped, thereby applying in advance distortion with characteristics opposite to the distortion caused by the optical members.
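The parameter update and coordinate transformation described above can be sketched as follows. This is an illustrative sketch only: the stored viewpoint grid, the second-order polynomial model, and all names and numeric values are assumptions, not taken from the disclosure.

```python
# Hypothetical warping parameters precomputed for a few eyebox viewpoints:
# each entry maps a lateral viewpoint offset [mm] to polynomial coefficients
# that pre-distort (x, y) oppositely to the optics' distortion (all values
# are made-up placeholders).
WARP_PARAMS = {
    -60.0: {"kx": (1.0, 0.02, -0.004), "ky": (1.0, -0.01, 0.003)},
      0.0: {"kx": (1.0, 0.00,  0.000), "ky": (1.0,  0.00, 0.000)},
     60.0: {"kx": (1.0, -0.02, 0.004), "ky": (1.0,  0.01, -0.003)},
}

def params_for_viewpoint(vx):
    """Linearly interpolate warping parameters between stored viewpoints."""
    keys = sorted(WARP_PARAMS)
    vx = min(max(vx, keys[0]), keys[-1])          # clamp to the eyebox range
    lo = max(k for k in keys if k <= vx)
    hi = min(k for k in keys if k >= vx)
    t = 0.0 if hi == lo else (vx - lo) / (hi - lo)
    return {axis: tuple((1 - t) * a + t * b
                        for a, b in zip(WARP_PARAMS[lo][axis],
                                        WARP_PARAMS[hi][axis]))
            for axis in ("kx", "ky")}

def warp_points(points, vx):
    """Pre-distort normalized control points (x, y in -1..1) for viewpoint vx."""
    p = params_for_viewpoint(vx)
    out = []
    for x, y in points:
        a0, a1, a2 = p["kx"]
        b0, b1, b2 = p["ky"]
        # Opposite-characteristic distortion: a small polynomial correction
        # of each coordinate driven by the other axis.
        out.append((x * (a0 + a1 * y + a2 * y * y),
                    y * (b0 + b1 * x + b2 * x * x)))
    return out

grid = [(x / 2, y / 2) for y in (-2, 0, 2) for x in (-2, 0, 2)]
center = warp_points(grid, 0.0)    # identity at the eyebox center
left = warp_points(grid, -30.0)    # interpolated halfway to the left sample
```

In practice the corrected control points would drive a mesh warp of the display image; the sketch only shows the viewpoint-indexed parameter lookup and the per-point transformation.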
 Conventionally, on the assumption that the image (virtual image) is displayed on an upright surface perpendicular to the road surface, the outline at virtual-image display time corresponding to the outline of the image region on the display subject to image correction (the region may be the whole area or a partial area; for convenience of explanation its outline is taken to be visible and, for example, rectangular), in other words the outline when the virtual image is correctly viewed by the viewer (likewise taken to be visible for convenience of explanation), was uniformly made a rectangle (oblong or square). In this aspect, by contrast, an outline (outer shape) suited to each of the depth image (e.g., tilted image) and the directly facing image (e.g., upright image) is adopted. The "outline when correctly viewed by the viewer as a virtual image" above can also be described as the outline of the "properly rendered image (virtual image)", or the "correct image (virtual image)", that the HUD device intends to display. If the outline of that image (virtual image) is a trapezoid, then the HUD device has taken the trapezoid as the correct answer, applied warping to the rectangular-outlined image of the image region, and displayed it.
 For example, the tilted virtual image display surface arranged in front of the vehicle, seen from a viewpoint located at the center of the eyebox (in other words, seen by the driver from directly ahead), has an outline (assumed visible here; the outline of the entire virtual image display surface is used here, but the outline of the image display region of a particular image may be used instead) that is perceived with perspective, narrowing with distance, and therefore appears as a trapezoid (including a parallelogram). When the viewpoint moves (shifts) in the vehicle width direction (left-right direction), if the position of the displayed virtual image does not change, the relative position between the viewpoint and the virtual image changes, so under the influence of motion parallax that trapezoid is skewed and deformed leftward or rightward. Taking these points into account, the outline of the image region of an image displayed on the tilted virtual image display surface (an image of a long extending arrow, an image of a center line, etc.) is likewise based on a trapezoid.
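The trapezoidal appearance follows directly from perspective projection. As a rough sketch under an assumed pinhole model (eye height, distances, and all names are illustrative values, not from the disclosure):

```python
def project(point, eye_height=1.2, focal=1.0):
    """Project a road-surface point (x lateral [m], z forward [m]) onto a
    vertical image plane at distance `focal` in front of the eye."""
    x, z = point
    u = focal * x / z               # horizontal image coordinate
    v = -focal * eye_height / z     # vertical image coordinate (below horizon)
    return (u, v)

# A 1 m-wide strip lying on the road from 10 m to 20 m ahead.
near_l, near_r = project((-0.5, 10.0)), project((0.5, 10.0))
far_l, far_r = project((-0.5, 20.0)), project((0.5, 20.0))

near_width = near_r[0] - near_l[0]   # apparent width of the near edge
far_width = far_r[0] - far_l[0]      # far edge appears half as wide
```

The projected quadrilateral has parallel top and bottom edges with the far edge narrower, i.e., a trapezoid, matching the perceived outline described above.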
 As a result, compared with the conventional case in which the outline viewed at virtual-image display time (the outline when correctly viewed as a virtual image) corresponding to the outline of the image region was a rectangle (oblong or square), the virtual image after warping processing (after the distortion caused by the windshield or the like has been removed) has a perspective closer to natural. The more natural stereoscopic effect improves visibility. Moreover, for an image (virtual image) superimposed on a background such as the road surface, the change in the appearance of the image (virtual image) accompanying viewpoint movement is perceived as the same as that of the background road surface. This is advantageous in making the driver feel that the image (virtual image) is superimposed on (coincides with) the road surface, and in this respect as well it improves the visibility of a road-surface-superimposing HUD.
 For a directly facing image (upright image) displayed on an upright surface (including a pseudo-upright surface), the outline (outer shape) of the image region is a rectangle (oblong or square), as before. Displays such as a vehicle speed readout shown on the upright surface are often displayed constantly at a fixed position, and what matters is that numerals, symbols, characters, and the like can be read accurately; perspective is not particularly needed. Since there is no benefit in using a trapezoid as in the case of a tilted image, the rectangular outer shape is adopted here as before.
 In this aspect, the outline of the image region of the depth image (tilted image) and that of the directly facing image (upright image) are set individually for warping, so in substance a different warping scheme is applied to each image. As a result, the depth image (tilted image) is displayed with improved visibility and a natural sense of perspective, while the directly facing image (upright image) is displayed in a way that is easy to see and easier to recognize (a display suited to grasping information accurately, etc.).
 In a second aspect dependent on the first aspect, the control unit may:
 when the viewpoint moves along the width direction of the vehicle while warping processing is being performed on the depth image, perform warping control that deforms the trapezoid so that the deformation of the trapezoid caused by that movement is reflected; and
 when the viewpoint moves along the width direction of the vehicle while warping processing is being performed on the directly facing image, perform warping control that deforms the rectangle so as to cancel the deformation of the rectangle caused by that movement.
 The second aspect assumes a case in which the viewpoint moves (shifts) left or right; in this case, taking the influence of parallax into account, different kinds of image region deformation are applied to the depth image (tilted image) and the directly facing image (upright image). As explained for the first aspect, warping is performed on the depth image (tilted image) with the image region as a trapezoid. For example, suppose that when the viewpoint is located at the center of the eyebox, the trapezoid appears as an isosceles trapezoid (the top and bottom sides are parallel, the interior angles at both ends of the top side are equal, and the interior angles at both ends of the bottom side are also equal). Now assume that the viewpoint shifts in the vehicle width direction (left-right direction).
 If the position of the displayed virtual image is fixed (more precisely, its position is fixed in a coordinate system set in real space; for example, a virtual image of a center line is meaningless unless superimposed on the center of the road surface, so its position cannot be moved), then when the viewpoint moves, the relative positional relationship between the virtual image and the viewpoint changes.
 Even if, before and after the viewpoint movement, the distortion caused by the optical members is removed and an image of the same shape (here, a trapezoid) arrives at the viewpoint, the incidence angle of the light entering the eye changes when the viewpoint moves. The image formed on the eye is therefore deformed, and since the human brain judges the image on the basis of that deformed image, if the isosceles trapezoid above was visible before the viewpoint moved, a skewed trapezoid will be visible after the movement.
 Considering the effect of this image distortion on the depth image (tilted image): since the depth image (tilted image) is generated so as to produce a stereoscopic effect through parallax, this image distortion (the image appearing distorted under the influence of motion parallax) can naturally be anticipated from the outset. As noted above, it is precisely this motion parallax that produces a natural stereoscopic effect (sense of perspective), so this distortion assists the natural viewing of the depth image.
 Accordingly, at the image generation stage, while viewpoint-following warping processing is in effect, when the viewpoint shifts leftward the above isosceles trapezoid is corrected so as to lean leftward, bringing the perspective closer to natural, and correction that applies in advance distortion with characteristics opposite to the distortion caused by the optical members (conventional warping) is then performed on the corrected image region. This makes it possible to display a depth image with a more natural sense of perspective (stereoscopic effect).
 For the directly facing image (upright image), on the other hand, the motion parallax accompanying viewpoint movement has an adverse visual effect. A vehicle speed readout and the like must always appear as an undeformed upright image so that the legibility of the display is not degraded.
 Suppose, for example, that a rectangular image (virtual image) is visible from a viewpoint at the center of the eyebox, and consider the viewpoint moving left or right. If the position of that virtual image in the coordinate system set in real space is fixed (for example, a vehicle speed readout can be assumed to be displayed constantly near the lower-right edge of the windshield), then even though the distortion from the optical system is removed and light of a rectangular image arrives, the relative positional relationship between the virtual image and the viewpoint changes and the incidence angle of the light differs; the rectangle visible before the viewpoint moved is therefore deformed so as to lean left when the viewpoint moves left and to lean right when it moves right. Such deformation, as noted above, degrades the legibility of the display.
 Therefore, in the second aspect, in warping processing of the directly facing image (upright image), for example, motion parallax correction is performed that deforms the image (the outline of the image region: a rectangle) in the direction opposite to the actual deformation caused by motion parallax, so that such motion-parallax-based deformation (deformation of the rectangle) is suppressed (preferably cancelled), and warping is then performed that includes applying, to the deformed image region, distortion correction with characteristics opposite to the distortion arising in the optical system. "Warping" in the present invention means not merely correcting distortion but includes correction processing that optimizes the appearance of the virtual image in consideration of parallax and motion parallax; in particular, it carries the meaning of "correction that controls the deformation of the image (virtual image) due to motion parallax". In one concrete example, as above, the motion parallax correction and the distortion correction are performed as a set.
 As a result, when the viewpoint shifts, light of an image deformed oppositely to the motion-parallax deformation enters the eye at that viewpoint. The deformation due to motion parallax is then cancelled by the opposite deformation applied in advance, and when the human brain judges the image it appears as a rectangular (oblong or square) image. In other words, before and after the viewpoint movement the image remains a rectangle and does not change, so legibility does not deteriorate. Since an accurate directly facing image is always obtained, visibility improves.
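The cancellation described above can be sketched with a simple shear model. The geometry, the linear shear approximation, and all names and values are assumptions for illustration only:

```python
def parallax_shear(viewpoint_dx, distance, gain=1.0):
    """Horizontal skew (per unit height) induced by a lateral viewpoint
    offset viewpoint_dx [m] for a virtual image at `distance` [m]."""
    return gain * viewpoint_dx / distance

def pre_distort_rect(corners, viewpoint_dx, distance):
    """Apply the opposite shear to the rectangle's corners in advance.
    corners: [(x, y), ...] with y measured up from the bottom edge."""
    s = -parallax_shear(viewpoint_dx, distance)   # opposite sign cancels it
    return [(x + s * y, y) for (x, y) in corners]

def apparent_rect(corners, viewpoint_dx, distance):
    """What the moved viewpoint sees: the motion-parallax shear itself."""
    s = parallax_shear(viewpoint_dx, distance)
    return [(x + s * y, y) for (x, y) in corners]

rect = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
shown = pre_distort_rect(rect, viewpoint_dx=0.1, distance=2.5)
seen = apparent_rect(shown, viewpoint_dx=0.1, distance=2.5)
# `seen` coincides with the original rectangle: the two shears cancel.
```

The pre-applied shear and the viewing shear are equal and opposite, so the rectangle survives the viewpoint shift unchanged, which is the stated goal of this aspect.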
 In a third aspect dependent on the second aspect, when the control unit deforms the rectangle so as to cancel the deformation of the rectangle in the warping control for the directly facing image, it may either fix the lower side of the rectangle and move the upper side to skew it, or fix the upper side and move the lower side to skew it.
 In the third aspect, as the method of deforming the image region of the directly facing image in advance, either the method of fixing the lower side of the rectangle (oblong or square) and moving the upper side, or the reverse method, is adopted.
 With either method, a deformation (skew) that cancels the actual deformation caused by motion parallax can be applied to the image region; in this respect the effect is common to both.
 However, as seen from the driver, the upper side of the rectangle displayed ahead is farther away and the lower side is nearer, and in view of motion parallax, the farther a point is, the smaller its perceived change of position. Therefore, when the rectangle is deformed by moving the upper side, the amount of movement (the substantive amount of deformation) is considered to be smaller than when it is deformed by moving the lower side, and in this respect the two methods differ.
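The difference in required movement follows from the 1/distance scaling of apparent shift. As minimal illustrative arithmetic (the distances are assumed example values, not from the disclosure):

```python
def apparent_shift(viewpoint_dx, distance):
    """Small-angle apparent lateral shift [rad] of a point at `distance` [m]
    when the eye moves laterally by viewpoint_dx [m]."""
    return viewpoint_dx / distance

dx = 0.05                          # 5 cm lateral viewpoint shift
lower = apparent_shift(dx, 3.0)    # near, lower edge of the rectangle
upper = apparent_shift(dx, 5.0)    # far, upper edge
# upper < lower: moving the upper edge needs a smaller correction.
```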
 In a fourth aspect dependent on the first aspect, when the viewpoint moves along the width direction of the vehicle while warping processing is being performed on the directly facing image, the control unit may change the position of the directly facing image so that the virtual image of the directly facing image is fixed at a predetermined position in a viewpoint coordinate system referenced to the viewpoint.
 In the second and third aspects above, the problem of the directly facing image being distorted as the viewpoint position shifts is dealt with by applying in advance a distortion (deformation of the rectangle) opposite to the distortion caused by motion parallax; in the fourth aspect, it is dealt with by shifting the display position of (the virtual image of) the directly facing image in accordance with the viewpoint movement.
 As described above, the problem of the directly facing image being distorted as the viewpoint position shifts can arise because the display position of (the virtual image of) the directly facing image is fixed in the coordinate system set in real space.
 This aspect addresses the above problem by removing the premise that the display position in the coordinate system set in real space is fixed.
 In other words, in this aspect, control is performed so that the virtual image of the directly facing image is always at the same position in a viewpoint coordinate system (a coordinate system with axes along the front-rear, left-right, and up-down directions of the vehicle, centered on the person's viewpoint; when the person's viewpoint (including the face, etc.) moves, the coordinate system moves accordingly): the position of the (rectangular image region of the) image on the display surface of the display unit (or its position on the image generation plane or in the image generation space of the image generation unit) is moved appropriately in correspondence with the viewpoint movement.
 This means that, in the coordinate system set in real space, the position of the virtual image moves appropriately according to the viewpoint shift. In this case, the relative positional relationship between the virtual image of the directly facing image and the person's viewpoint is always constant, motion parallax need not be considered, and the appearance does not change. The above problem of the image being deformed by motion parallax and becoming harder to read can therefore be solved by this method as well.
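A simplified one-dimensional sketch of this repositioning, with assumed names, an assumed target offset, and an assumed 1:1 lateral magnification of the optics:

```python
TARGET_OFFSET = 0.30   # desired lateral offset of the virtual image from the
                       # viewpoint [m], held constant in viewpoint coordinates

def display_position(viewpoint_x, display_gain=1.0):
    """Lateral position [m] at which to draw the image so the virtual image
    sits at viewpoint_x + TARGET_OFFSET in real-space coordinates.
    display_gain models the optics' lateral magnification (assumed 1:1)."""
    virtual_image_x = viewpoint_x + TARGET_OFFSET
    return virtual_image_x / display_gain

# As the viewpoint moves, the drawn position follows it one-to-one, so the
# image-to-viewpoint offset never changes:
offsets = [display_position(vx) - vx for vx in (-0.06, 0.0, 0.06)]
```

In real space the virtual image position moves with the viewpoint; in viewpoint coordinates it is stationary, which is exactly the condition stated in the fourth aspect.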
 In a fifth aspect dependent on any one of the first through fourth aspects, when displaying a mixture of the depth image and the directly facing image, the control unit may apply image processing to the depth image and the directly facing image individually, and then combine the processed images into a single image.
 In the fifth aspect, when a depth image and a directly facing image coexist in one image, a method is adopted in which each image (each image's region, it can also be said) is individually given, for example, image correction specific to it, after which the images are combined to generate a single image.
 This method can be realized, for example, by image rendering. By handling the depth image (or its image region) and the directly facing image (or its image region) separately, no particular problem arises even when different image corrections must be applied to each image, and the processing can be simplified.
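A minimal sketch of this per-image pipeline; the layer names and the placeholder warp functions are assumptions, standing in for the trapezoid-outline and rectangle-outline warps described above:

```python
def warp_trapezoid(layer):
    """Placeholder for the depth-image (trapezoid-outline) warp."""
    return {"name": layer["name"], "outline": "trapezoid"}

def warp_rectangle(layer):
    """Placeholder for the directly-facing (rectangle-outline) warp."""
    return {"name": layer["name"], "outline": "rectangle"}

def compose_frame(layers):
    """Warp each layer with the scheme matching its kind, then composite
    (here, simply collect in draw order: depth first, facing on top)."""
    warp = {"depth": warp_trapezoid, "facing": warp_rectangle}
    return [warp[layer["kind"]](layer) for layer in layers]

frame = compose_frame([
    {"kind": "depth", "name": "route_arrow"},    # road-superimposed arrow
    {"kind": "facing", "name": "speed_readout"}, # upright speed display
])
```

Keeping the two corrections in separate render passes is what lets each image use its own warp scheme before the final composite is sent to the display unit.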
 In a sixth aspect dependent on any one of the first through fifth aspects, where, within the image region of the depth image viewed from a predetermined first position in the left-right direction of the eyebox, the region bounded by a trapezoid in which at least one pair of opposite sides is parallel is taken as a first outline, the control unit performs the warping control such that:
 when viewed from the left of the first position, relative to the deformation of the trapezoid caused by that movement, the upper side of the first outline is positioned to the right relative to the lower side; and
 when viewed from the right of the first position, relative to the deformation of the trapezoid caused by that movement, the upper side of the first outline is positioned to the left relative to the lower side;
 and where, within the image region of the directly facing image viewed from the first position in the left-right direction of the eyebox, the region bounded by a rectangle (including an oblong rectangle or a square) is taken as a second outline, the control unit performs the warping control such that, when viewed from the left and from the right of the first position, the position of the upper side of the second outline does not change relative to the position of the lower side.
 When the virtual image display surface is located above the road surface (in other words, when the virtual image display surface is located closer to the driver than the road surface), the motion parallax of the depth image (tilted image) differs from the motion parallax of the background, such as the road surface, on which the depth image (tilted image) is superimposed. Specifically, the motion parallax of the depth image (tilted image) becomes larger than that of the background such as the road surface, and it is conceivable that the superimposition (registration) between the depth image and the background deteriorates.
 In the sixth aspect, when the whole (or a part) of the image area of the depth image, as seen from the first position at the left-right center of the eyebox and bounded by the trapezoid, is taken as the first contour, warping is performed so that the distortion of the first contour produced by motion parallax is weakened before the driver views it. When viewed from the left of the first position, motion parallax distorts the trapezoid so that the position of its upper side shifts to the left relative to the position of its lower side; this method applies a correction that reduces the offset between the upper and lower sides of the first contour. In other words, the correction for the first contour and the correction for the second contour both act in the direction of weakening the distortion caused by motion parallax, but the correction for the first contour is smaller than the correction for the second contour. As a result, the motion parallax of the depth image approaches that of the background such as the road surface, realizing a display with improved visibility in which a natural sense of perspective is felt.
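The correction direction described in the sixth aspect can be sketched as a shear of the trapezoid's upper side driven by the lateral viewpoint offset. This is an illustrative toy, not the patent's algorithm: the sign convention, the corner representation, and the gain constant are all assumptions.

```python
def sheared_top_corners(top_left, top_right, viewpoint_offset, gain=0.5):
    """Shift the trapezoid's upper side opposite to the viewpoint movement.

    viewpoint_offset: lateral eye position relative to the first (center)
    position; negative = left of center, positive = right of center.
    A viewpoint LEFT of center moves the upper side to the RIGHT relative
    to the lower side; a viewpoint RIGHT of center moves it to the LEFT,
    matching the sixth aspect. gain is a hypothetical attenuation so the
    motion-parallax distortion is weakened rather than fully cancelled.
    """
    dx = -gain * viewpoint_offset
    return ((top_left[0] + dx, top_left[1]),
            (top_right[0] + dx, top_right[1]))
```

For example, a viewpoint 2 units left of center (offset -2.0) shifts the upper side 1.0 unit to the right while the lower side stays fixed.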
In a seventh aspect, dependent on the second aspect,
when the control unit, while performing warping processing on the depth image, performs the warping control that deforms the trapezoid so that the deformation of the trapezoid caused by movement of the viewpoint along the width direction of the vehicle is reflected,
if the display image is a road-surface-superimposed image displayed so as to be superimposed on the road surface, and
the virtual image of the road-surface-superimposed image is displayed on an inclined surface inclined with respect to the road surface and the virtual image appears to float above the road surface, the control unit may perform warping control accompanied by a motion parallax correction that reduces the degree of deformation of the trapezoid compared with the case where the virtual image does not appear to float above the road surface.
 In the seventh aspect, for example, the display image is a road-surface-superimposed image (for example, an image of an arrow figure) displayed so as to be superimposed on the road surface. When the virtual image of that road-surface-superimposed image is displayed on an inclined surface inclined with respect to the road surface, and the virtual image appears to float above the road surface, a motion parallax correction is applied that reduces the degree of deformation of the arrow (in other words, of the trapezoid forming the contour of the display region of the arrow image) compared with the case where the virtual image does not appear to float (that is, when the degree of superimposition on the road surface is relatively high).
 The reason is that the position of the actual virtual image of the arrow figure (the virtual image of the road-surface-superimposed image) is nearer to the driver (viewer) than the position on the road surface on which the arrow appears to be superimposed (the arrow's visual landing position). In this situation, motion parallax arises for the arrow's virtual image seen in the foreground, and motion parallax also arises for the road surface seen behind it. Because the road surface seen behind is a distant display, its motion parallax is smaller; because the arrow's virtual image seen in front is a near display, its motion parallax is perceived as larger.
 Since human vision perceives this relative difference, the motion parallax of the arrow's virtual image feels larger than in a case where the arrow's virtual image is, for example, in close contact with the road surface. The deformation (tilt) of the arrow's virtual image is therefore emphasized. In this case, a motion parallax correction is applied that alleviates this deformation (tilt), in other words, that suppresses to some extent the deformation of the trapezoid forming the contour of the display region of the arrow image (the deformation remains, rather than being cancelled out). A display with appropriate motion parallax is thereby obtained.
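One way to picture the seventh aspect is as a gain applied to the viewpoint-driven trapezoid deformation, with a smaller gain selected when the virtual image appears to float above the road. This is a hedged sketch only; the gain values and the function names are hypothetical, not taken from the patent.

```python
def parallax_gain(floats_above_road: bool,
                  full_gain: float = 1.0,
                  reduced_gain: float = 0.6) -> float:
    """Return the factor applied to the viewpoint-driven trapezoid
    deformation. When the virtual image appears to float above the road,
    a smaller (hypothetical) gain is used so the deformation is reduced
    but not cancelled."""
    return reduced_gain if floats_above_road else full_gain

def deformed_top_shift(viewpoint_offset: float,
                       floats_above_road: bool) -> float:
    """Horizontal shift applied to the trapezoid's upper side for a given
    lateral viewpoint offset (same sign convention as the sixth aspect)."""
    return -parallax_gain(floats_above_road) * viewpoint_offset
```

With the same viewpoint movement, the floating case thus produces a smaller upper-side shift than the road-contact case, reflecting the reduced degree of deformation described above.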
 Those skilled in the art will readily understand that the illustrated embodiments according to the present invention may be further modified without departing from the spirit of the present invention.
FIG. 1(A) is a diagram for explaining an outline of warping control (conventional warping control) in a conventional HUD device that displays a virtual image on a virtual image display surface standing upright with respect to the road surface, and the manner of distortion of the virtual image (and the virtual image display surface) displayed through the warping control; FIG. 1(B) is a diagram showing an example of a virtual image visually recognized by the driver through the windshield.
FIG. 2(A) is a diagram for explaining an outline of viewpoint-position-tracking warping control; FIG. 2(B) is a diagram showing a configuration example of an eyebox whose interior is divided into a plurality of partial regions.
FIG. 3(A) is a diagram showing an example of a virtual image visually recognized by the user through the windshield; FIG. 3(B) is a diagram showing the manner of virtual image display on the virtual image display surface; FIG. 3(C) is a diagram showing an example of image display on the display surface of the display unit.
FIG. 4(A) is a diagram showing the configuration of a HUD device mounted on a vehicle and an example of a virtual image display surface; FIGS. 4(B) and 4(C) are diagrams showing examples of techniques for realizing the virtual image display surface shown in FIG. 4(A).
FIG. 5 is a diagram showing an example of the display method of a HUD device (parallax-type HUD device or 3D HUD device) that displays a stereoscopic virtual image.
FIG. 6 is a diagram showing an example of viewpoint-tracking warping control in a HUD device capable of displaying at least one of a virtual image of a standing image (including a pseudo standing image) and a virtual image of a tilted image (depth image).
FIG. 7(A) is a diagram showing a case where an inclined surface (for example, the tilted-image HUD region of the virtual image display surface) is arranged ahead and the viewpoint (taken as monocular) of the driver looking at it moves along the width direction (left-right direction) of the vehicle from a state of being located in the central divided region of the eyebox; FIGS. 7(B), 7(C), and 7(D) are diagrams showing how the inclined surface appears (the trapezoidal shape forming the contour of the inclined surface) when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively; FIG. 7(E) is a diagram showing an example of displaying a virtual image of an arrow on the inclined surface; FIGS. 7(F), 7(G), and 7(H) are diagrams showing how the virtual image of the arrow appears (the shape of the arrow's virtual image) when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively; FIGS. 7(I) and 7(J) are diagrams showing how the virtual image of the arrow appears without and with motion parallax correction.
FIG. 8(A) is a diagram showing a case where an upright surface (including a pseudo upright surface, for example the standing-image HUD region of the virtual image display surface) is arranged ahead and the viewpoint (taken as monocular) of the driver looking at it moves along the width direction (left-right direction) of the vehicle from a state of being located in the central divided region of the eyebox; FIGS. 8(B), 8(C), and 8(D) are diagrams showing how the upright surface appears (the rectangular shape forming the contour of the upright surface) when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively; FIGS. 8(E), 8(F), and 8(G) are diagrams showing the content of image correction applied to the display image in order to suppress deformation of the rectangle forming the contour of the upright surface; FIGS. 8(H), 8(I), and 8(J) are diagrams showing how the virtual image of the arrow appears (the shape of the arrow's virtual image) when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively.
FIGS. 9(A) to 9(D) are diagrams for explaining two examples of image correction (examples of deforming the lower side or the upper side) applied to the display image in order to suppress deformation of the rectangle forming the contour of the upright surface.
FIG. 10(A) is a diagram showing the content of warping control for a tilted image (depth image) displayed on an inclined surface; FIG. 10(B) is a diagram showing the content of warping control for a standing image (facing image) displayed on an upright surface; FIGS. 10(C) to 10(E) are diagrams showing an example of generating, by image rendering, the image to be displayed on the display surface; FIGS. 10(F) to 10(H) are diagrams showing examples of images visually recognized by the driver (visually recognized images, visually recognized virtual images) according to the viewpoint position.
FIG. 11 is a diagram showing a configuration example of the HUD device.
FIG. 12(A) is a diagram showing an example of the configuration of the main parts of the HUD device in a case where a standing image (pseudo standing image) is displayed fixed at a predetermined position in the viewpoint coordinate system regardless of the viewpoint position; FIG. 12(B) is a diagram showing that the display (virtual image) appears to move following the movement of the viewpoint.
FIG. 13 is a diagram showing an example of the procedure in viewpoint-tracking warping control.
FIGS. 14(A) and 14(B) are diagrams showing modified examples of the virtual image display surface.
 The best-mode embodiments described below are used to facilitate understanding of the present invention. Accordingly, those skilled in the art should note that the present invention is not unduly limited by the embodiments described below.
 First, with reference to FIGS. 1 to 5, an outline of the present invention (including basic matters, an outline of the HUD device, and the like) will be described step by step. Details of the present invention are shown in FIG. 6 and the subsequent figures. In the following description, a depth image is, for example, an image with depth displayed on a virtual image display surface that is an inclined surface; this also includes the case where the inclination angle with respect to the road surface is zero (that is, the case of a road-surface-superimposed image). It may be referred to simply as a depth image or as a tilted image. A facing image is an image that is preferably displayed so as to face the driver, for example an image displayed on an upright surface standing with respect to the road surface (including not only a surface standing orthogonal to the road surface but also a "pseudo upright surface" that is at least partly inclined yet can be treated as an upright surface as a whole). It may be referred to simply as a facing image or as a standing image.
 Reference is made to FIG. 1. FIG. 1(A) is a diagram for explaining an outline of warping control (conventional warping control) in a conventional HUD device that displays a virtual image on a virtual image display surface standing upright with respect to the road surface, and the manner of distortion of the virtual image (and the virtual image display surface) displayed through the warping control; FIG. 1(B) is a diagram showing an example of a virtual image visually recognized by the driver through the windshield.
 As shown in FIG. 1(A), the HUD device 100 is mounted on a vehicle 1 (a term that may be construed broadly). The HUD device 100 has a display unit (for example, a light-transmissive screen) 101, a reflecting mirror 103, and, as an optical member that projects the display light, a curved mirror 105 (for example, a concave mirror, whose reflecting surface may be a free-form surface). The image displayed on the display unit 101 is projected, via the reflecting mirror 103 and the curved mirror 105, onto the virtual image display area 5 of the windshield 2 serving as the projection-target member. Reference numeral 4 indicates the projection region. The HUD device 100 may be provided with a plurality of curved mirrors. Further, in addition to the mirrors (reflective optical elements) of the present embodiment, or in place of part (or all) of them, a configuration including functional optical elements such as refractive optical elements (for example, lenses) and diffractive optical elements may be adopted.
 Part of the display light of the image is reflected by the windshield 2 and enters the viewpoint (eye) A of the driver or the like located inside (or on) a preset eyebox EB (here assumed to be rectangular with a predetermined area); by forming an image ahead of the vehicle 1, a virtual image V is displayed on a virtual image display surface PS corresponding to the display surface 102 of the display unit 101.
 The image of the display unit 101 is distorted under the influence of the shape of the curved mirror 105, the shape of the windshield 2, and the like. In other words, distortion is produced by the optical system of the HUD device 100 and by optical members including the windshield 2. To cancel that distortion, a distortion with the inverse characteristic is applied to the image in advance. In this specification, this pre-distortion type of image correction is referred to as warping processing (warping image correction processing).
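The pre-distortion idea can be illustrated with a toy one-dimensional model: if the optics apply a known distortion, the warping step applies its inverse in advance so that the two compose to (approximately) the identity. The distortion function below is purely illustrative, not the HUD device's actual optical model.

```python
def optics_distortion(x: float) -> float:
    """Toy stand-in for the distortion introduced by the curved mirror and
    windshield: a simple nonlinear stretch (purely illustrative)."""
    return x + 0.1 * x * x

def predistort(x: float, iters: int = 20) -> float:
    """Pre-distortion (warping): numerically invert the toy distortion by
    fixed-point iteration on y = x - 0.1*y*y, so that
    optics_distortion(predistort(x)) is approximately x."""
    y = x
    for _ in range(iters):
        y = x - 0.1 * y * y
    return y

# The pre-applied inverse distortion and the optics' distortion cancel:
x = 0.8
residual = abs(optics_distortion(predistort(x)) - x)
```

The same cancellation principle applies per reference point in the two-dimensional warping described below, with the inverse map supplied by the stored warping parameters rather than computed analytically.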
 Ideally, the warping processing would make the virtual image V displayed on the virtual image display surface PS a flat image without curvature. However, in, for example, a large HUD device 100 that projects display light onto a wide projection region 4 on the windshield 2 and sets the virtual image display distance over a considerably wide range, it cannot be denied that a certain amount of distortion remains; this is unavoidable.
 In the upper left of FIG. 1(A), PS', shown by a broken line, indicates a virtual image display surface from which the distortion has not been completely removed, and V' indicates the virtual image displayed on that virtual image display surface PS'.
 The degree or manner of distortion of the virtual image V' in which distortion remains differs depending on where on the eyebox EB the viewpoint A is located. Since the optical system of the HUD device 100 is designed on the assumption that the viewpoint A is located near the center, the distortion of the virtual image tends to be relatively small when the viewpoint A is near the center and to increase toward the periphery.
 FIG. 1(B) shows an example of the virtual image V visually recognized by the driver through the windshield 2. In FIG. 1(B), the virtual image V, which has a rectangular outer shape, is provided with, for example, five vertical by five horizontal reference points (reference pixel points or coordinate points), 25 in total, GD(i, j) (where i and j are variables that can each take values from 1 to 5). Each reference point (coordinate point) of the image (original image) is given in advance, by the warping processing, a distortion with the inverse characteristic of the distortion that arises in the virtual image V. The distortion given in advance and the distortion that actually occurs therefore cancel out, and ideally a virtual image V without curvature, as shown in FIG. 1(B), is displayed.
 The number of reference points GD(i, j) can be increased as appropriate by interpolation processing or the like. In FIG. 1(B), reference numeral 7 denotes a steering wheel.
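One common way to densify a sparse grid of reference points such as GD(i, j) is bilinear interpolation between the four surrounding points. The sketch below is an assumption about how such interpolation could look; the grid values are hypothetical warp offsets, not data from the patent.

```python
import math

def bilinear(grid, u: float, v: float):
    """Sample a sparse warp grid at fractional coordinates (u, v), where
    grid[i][j] holds the warp offset at reference point GD(i, j) and
    0 <= u, v <= len(grid) - 1. Illustrates how the 5x5 reference points
    could be densified by interpolation."""
    i0, j0 = int(math.floor(u)), int(math.floor(v))
    i1 = min(i0 + 1, len(grid) - 1)
    j1 = min(j0 + 1, len(grid[0]) - 1)
    fu, fv = u - i0, v - j0
    top = grid[i0][j0] * (1 - fv) + grid[i0][j1] * fv
    bot = grid[i1][j0] * (1 - fv) + grid[i1][j1] * fv
    return top * (1 - fu) + bot * fu

# 5x5 grid of horizontal warp offsets (hypothetical values)
grid = [[float(i + j) for j in range(5)] for i in range(5)]
```

Sampling between reference points yields a smoothly varying offset, so the effective correction grid can be as fine as the display resolution requires while only 25 points are stored.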
 Next, reference is made to FIG. 2. FIG. 2(A) is a diagram for explaining an outline of viewpoint-position-tracking warping processing; FIG. 2(B) is a diagram showing a configuration example of an eyebox whose interior is divided into a plurality of partial regions. In FIG. 2, parts in common with FIG. 1 are given the same reference numerals (the same applies in the subsequent figures).
 As shown in FIG. 2(A), the eyebox EB is divided into a plurality of (here, nine) partial regions J1 to J9, and the position of the driver's viewpoint A is detected in units of these partial regions J1 to J9.
 The display light K of the image is emitted from the projection optical system 118 of the HUD device 100; part of it is reflected by the windshield 2 and enters the driver's viewpoint (eye) A. When the viewpoint A is inside the eyebox, the driver can visually recognize the virtual image of the image.
 The HUD device 100 has a ROM 210, and the ROM 210 contains an image conversion table 212. The image conversion table 212 stores warping parameters WP that determine, for example, the polynomials, multipliers, constants, and the like for image correction (warping image correction) by a digital filter. A warping parameter WP is provided for each of the partial regions J1 to J9 of the eyebox EB. In FIG. 2(A), the warping parameters corresponding to the partial regions are denoted WP(J1) to WP(J9); in the figure, only WP(J1), WP(J4), and WP(J7) are shown as reference characters.
 When the viewpoint A moves, it is detected in which of the partial regions J1 to J9 the viewpoint A is located. The warping parameter among WP(J1) to WP(J9) corresponding to the detected partial region is then read from the ROM 210 (warping parameter update), and warping processing is performed using that warping parameter.
 FIG. 2(B) shows an eyebox EB in which the number of partial regions is increased relative to the example of FIG. 2(A). The eyebox EB is divided into 60 partial regions in total, 6 vertically and 10 horizontally. Each partial region is denoted J(X, Y), with the coordinate positions in the X and Y directions as parameters.
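The viewpoint-to-parameter lookup described above can be sketched as follows for the 6 x 10 grid of FIG. 2(B). The eyebox dimensions, the coordinate convention, and the contents of the parameter table are all hypothetical placeholders; only the region layout (10 columns, 6 rows) follows the text.

```python
def subregion(x_mm: float, y_mm: float,
              box_w: float = 100.0, box_h: float = 60.0,
              nx: int = 10, ny: int = 6):
    """Map a viewpoint position (measured from one corner of the eyebox,
    in mm; dimensions hypothetical) to a partial-region index (X, Y)."""
    X = min(int(x_mm / (box_w / nx)), nx - 1)
    Y = min(int(y_mm / (box_h / ny)), ny - 1)
    return X, Y

# Hypothetical warping-parameter table keyed by partial region J(X, Y)
wp_table = {(X, Y): f"WP(J({X},{Y}))" for X in range(10) for Y in range(6)}

def select_warping_parameter(x_mm: float, y_mm: float) -> str:
    """On viewpoint movement, detect the partial region and read the
    corresponding warping parameter (the 'parameter update' step)."""
    return wp_table[subregion(x_mm, y_mm)]
```

In a real device the table entries would hold the polynomial coefficients and constants stored in the image conversion table 212 rather than labels.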
 Next, reference is made to FIG. 3. FIG. 3(A) is a diagram showing an example of a virtual image visually recognized by the user through the windshield; FIG. 3(B) is a diagram showing the manner of virtual image display on the virtual image display surface; FIG. 3(C) is a diagram showing an example of image display on the display surface of the display unit.
 In FIG. 3(A), the virtual image produced by the HUD device 100 is displayed in the virtual image display area 5 within the windshield 2. In FIG. 3(A), a virtual image SP of a vehicle speed display (the indication "120 km/h") is displayed on the near side as seen from the user (driver or the like). In a broad sense, the virtual image of this vehicle speed display SP can be called a virtual image of a "standing image", in other words a virtual image G1 of a "facing image" displayed so as to face the user.
 The virtual image G1 of the "standing image (facing image)" may be a display of non-superimposed content that is always displayed on the near side as seen from the user (content not intended to be superimposed on an object, for example content indicating the state of the vehicle 1 or the situation around the vehicle 1), or may be a navigation display or the like consisting of at least one of characters, figures, symbols, and so on.
 Meanwhile, on the far side, a curved navigation arrow AW is displayed so as to cover the road surface 40 extending straight ahead of the vehicle 1, rising gently from the side near the vehicle 1 toward the far side. The display of the arrow AW gives a unique three-dimensional visual impression and is also an excellent display with a sense of presence and aesthetic appeal. In a broad sense, the display of this arrow AW can be called a virtual image G2 of a "depth image" having depth information (distance information), displayed as a tilted image. In this specification, a "curved surface" is also permitted to include portions that are planar.
 The virtual image G2 of the "depth image" may be an information image including an arrow figure relating to the progress of the vehicle 1 or another vehicle, and figures other than arrows (for example, a triangular figure indicating the traveling direction, a straight line indicating the center line, and the like). The figure may be, for example, a figure drawn so as to cover the road surface 40, with at least its main part separated from the road surface 40, and extending along the road surface 40 (in other words, "a figure or the like giving the visual impression of overlapping the road surface" or "a figure or the like extending along the road surface").
 In the example of FIG. 3(A), an operation unit 9 capable of switching the HUD device and the like on and off and of setting operation modes and the like is provided in the vicinity of the steering wheel 7 (in a broad sense, the steering handle). A display device (for example, a liquid crystal display device) 13 is provided at the center of the front panel 11. The display device 13 can be used, for example, to supplement the display by the HUD device. The display device 13 may be a composite panel having a touch panel or the like.
 Next, reference is made to FIG. 3(B). As seen from the viewpoint (eye) A of the user (driver or the like) aboard the vehicle (own vehicle) 1, the virtual image display surface PS1, which is the image-forming surface, is a curved surface extending integrally from the near end portion U1 on the side close to the vehicle 1 to the far end portion U3 on the far side, and the second distance h2 between the road surface 40 and the far end portion U3 is set larger than the first distance h1 between the road surface 40 and the near end portion U1.
 Further, the virtual image display surface PS1 is divided into a standing-image HUD region Z1 (shown enclosed by a broken line in the figure) that includes the near end portion U1 and in which the virtual image G1 of the "standing image (facing image)" above the road surface 40 is displayed, and a tilted-image HUD region Z2 that is located farther away than the standing-image HUD region Z1 (forward in the front-rear direction (Z direction) of the vehicle 1) and in which the virtual image G2 of a tilted image (depth image), inclined toward the road surface 40 relative to the virtual image G1 of the "standing image (facing image)", is displayed. The intermediate portion U2 is a location (or point) on the virtual image display surface PS1 positioned at the boundary between the standing-image HUD region Z1 and the tilted-image HUD region Z2.
The virtual image G1 of the standing image (directly facing image) can also be called the first virtual image G1, and the virtual image G2 of the inclined image (depth image) can also be called the second virtual image G2.
The distance from the user's viewpoint A (or a reference point corresponding to the viewpoint A, set on the vehicle 1 or the like) to the near end U1 of the virtual image display surface PS1 (the "imaging distance", in other words the "virtual image display distance") is L1, the distance to the intermediate portion U2 is L2, and the distance to the far end U3 is L3, where the relationship L1 < L2 < L3 holds.
The length (extent) of the virtual image display surface PS1 along the road surface 40 can be, for example, about 10 m to 30 m.
As is clear from FIG. 1(B), the area of the inclined-image HUD region Z2 on the virtual image display surface PS1 is set larger than the area of the standing-image HUD region Z1 (though this is not a limitation).
Setting the relative area of the inclined-image HUD region Z2 large secures a wide region in which display with a sense of depth is possible (in other words, a region with expressive power in the depth direction), making realistic depth display easier to achieve. Put differently, displays that make effective use of the inclined-image HUD region Z2 are easy to produce, and the presence of the standing-image HUD region Z1 imposes no great restriction. Therefore, guidance information (for example, an arrow) or the like can be presented to the user as an intuitive, realistic virtual image.
Next, refer to FIG. 3(C). FIG. 3(C) shows an example of display control by the display control unit (reference numeral 190 in FIG. 11). A specific configuration of the HUD device will be described later.
The display control unit (reference numeral 190 in FIG. 11) divides the image display area 45 of the display surface 164 of the display unit 160 into a first display area Z1' corresponding to the standing-image HUD region Z1 and a second display area Z2' corresponding to the inclined-image HUD region Z2, taking as a reference, for example, a boundary position LN' that is the boundary between the near side and the far side as seen by the user.
The display control unit (reference numeral 190 in FIG. 11) then displays a first image RG1 (here, a vehicle speed display SP') at a predetermined position in the first display area Z1', and a second image RG2 (here, a navigation arrow graphic AW') at a predetermined position in the second display area Z2'.
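The publication contains no code, but the partitioning of the display surface at the boundary LN' can be sketched as follows. This is a minimal illustration with hypothetical names and pixel values, not the patent's implementation; it only shows that the two regions share the boundary row and that each image is assigned to its own region.

```python
# Illustrative sketch (hypothetical names/values): split the image display
# area at a boundary row corresponding to LN', as in FIG. 3(C).
from dataclasses import dataclass

@dataclass
class Region:
    top: int     # first pixel row (inclusive)
    bottom: int  # last pixel row (exclusive)

def split_display(height_px: int, boundary_row: int):
    """Split the display area into Z2' (far/inclined) and Z1' (near/standing)."""
    z2 = Region(0, boundary_row)          # far side: e.g. navigation arrow AW'
    z1 = Region(boundary_row, height_px)  # near side: e.g. speed display SP'
    return z1, z2

z1, z2 = split_display(height_px=480, boundary_row=300)
assert z1.top == z2.bottom  # the two regions meet exactly at LN'
```

Whether Z1' lies at the bottom or the top of the panel depends on the optical path (see the note on the angle-of-view ends below the figure description).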
Depending on the configuration of the optical system, the "upper end of the angle of view" and the "lower end of the angle of view" in FIG. 3(C) may be reversed. In that case, the display image of FIG. 3(C) is displayed upside down.
 また、図3(C)において、U1’、U2’、U3’の各点は、図3(B)における、U1、U2、U3の各点に対応する。また、表示面164における横方向は、実空間における車両1の「左右方向(X方向)」に対応し、また、縦方向は、実空間における路面40に垂直な方向である「高さ方向(Y方向)」に対応する。 Further, in FIG. 3C, each point of U1', U2', and U3'corresponds to each point of U1, U2, and U3 in FIG. 3B. Further, the horizontal direction on the display surface 164 corresponds to the "left-right direction (X direction)" of the vehicle 1 in the real space, and the vertical direction is the "height direction (height direction)" which is a direction perpendicular to the road surface 40 in the real space. Y direction) ”.
The road side line 51 and center line 53, drawn with broken lines in FIG. 3(C), are detected by applying image processing to an image captured ahead of the vehicle 1.
In FIG. 3(C), examples of the first image RG1 include vehicle information, information on the vehicle's surroundings, and navigation information. These require display accuracy, quick recognizability, and so on, and vehicle speed information and the like is often displayed at all times; such content is therefore displayed legibly by the standing-image HUD on the user's near side, standing (erected) with respect to the road surface 40 at or above a certain angle (a predetermined angle, or a standing-image threshold angle (for example, 45 degrees or more)). Specific examples of this information include a vehicle speed display, road speed limit information, turn-by-turn information (for example, intersection name information, POI (specific point on a map) information, etc.), and various displays using icons and characters.
Examples of the second image RG2 include arrow graphics relating to the travel of the own vehicle or another vehicle, and information including graphics other than arrows. These consist mainly of graphics, for which intuitive comprehensibility and the ability to perceive distance information without a sense of incongruity are important. This does not exclude the second image RG2 from containing, for example, characters. An information image composed of graphics having depth is displayed by the inclined-image HUD as a virtual image with a sense of depth. Specific examples of this information include arrow information serving as a route guide, a white line indicating the center line, colored graphic information indicating a frozen road surface area, and information from an ADAS (advanced driver-assistance system) that assists the user (the driver) with steering, equipment operation, and so on.
By carrying out display control as shown in FIG. 3(C), a realistic virtual image display as shown in FIG. 3(A) is realized. In other words, by using together the first virtual image G1, a standing image standing at or above a certain angle with respect to the road surface 40, and the second virtual image G2, for example an arrow graphic extending along the front-rear direction of the vehicle 1 so as to overlie the road surface, a display rich in presence, easy to see, and highly expressive can be realized. The visibility of the virtual image display in the HUD device is thus improved.
Next, refer to FIG. 4. FIG. 4(A) shows the configuration of a HUD device mounted on a vehicle and an example of a virtual image display surface, and FIGS. 4(B) and 4(C) show examples of techniques for realizing the virtual image display surface shown in FIG. 4(A). FIG. 4(A) adopts a configuration different from that of FIG. 1(A); therefore, different reference numerals are given even to the same parts.
In FIG. 4, the direction along the front of the vehicle 1 (also called the front-rear direction) is the Z direction, the direction along the width of the vehicle 1 (the left-right direction) is the X direction, and the height direction of the vehicle 1 (the direction of a line segment perpendicular to the flat road surface 40, pointing away from the road surface 40) is the Y direction.
In the following description, the terms "up" and "down" are used in describing the shape of the virtual image display surface and elsewhere. Here, for convenience of explanation, the direction along a line segment (normal) perpendicular to the road surface 40 (which is also the height direction of the vehicle 1) is taken as the up-down direction. When the road surface is horizontal, vertically downward is "down" and the opposite direction is "up". This also applies to the descriptions of the preceding drawings.
As shown in FIG. 4(A), the HUD device 100 of this embodiment (specifically, a HUD device that can display only a "standing image", only an "inclined image", or a "standing image and an inclined image" simultaneously) is mounted inside the dashboard 41 of the vehicle (own vehicle) 1.
The HUD device 100 has a display unit 160 (sometimes called an image display unit; specifically, for example, a screen) having a display surface 164 that displays an image; an optical system 120 including an optical member that projects the display light K, which carries the image, onto the windshield serving as the reflective translucent member 2; and a light projecting unit (image projection unit) 150. The optical member includes a curved mirror (also called a concave mirror or magnifying mirror) 170 having a reflective surface 179. The reflective surface 179 of the curved mirror 170 need not have a uniform radius of curvature; it can instead have, for example, a shape composed of a set of partial regions with different radii of curvature, for which a free-form surface design technique can be used (it may be a free-form surface itself). A free-form surface is a curved surface that cannot be expressed by a simple mathematical formula: several control points and curvatures are set in space, and the surface is expressed by interpolating between the points with higher-order equations. The shape of the reflective surface 179 has a considerably large influence on the shape of the virtual image display surface PS1 and on its relationship to the road surface.
The shape of the virtual image display surface PS1 is affected not only by the shape of the reflective surface 179 of the curved mirror (concave mirror) 170 but also by the curved shape of the windshield (reflective translucent member 2) and by the shapes of other optical members mounted in the optical system 120 (for example, a correction mirror). It is also affected by the shape of the display surface 164 of the display unit 160 (generally flat, though all or part of it may be non-planar) and by the placement of the display surface 164 relative to the reflective surface 179. However, since the curved mirror (concave mirror) 170 is a magnifying reflector, its influence on the shape of the virtual image display surface is considerable, and if the shape of its reflective surface 179 differs, the shape of the virtual image display surface actually changes.
The virtual image display surface PS1, extending continuously from the near end U1 to the far end U3, can be formed by arranging the display surface 164 of the display unit 160 obliquely, at a crossing angle of less than 90 degrees, with respect to the optical axis of the optical system (the principal optical axis corresponding to the chief ray).
The curved shape of the virtual image display surface PS1 may also be adjusted by adjusting the optical characteristics of all or part of the optical system, adjusting the arrangement of the optical member and the display surface 164, adjusting the shape of the display surface 164, or a combination of these. In this way, the shape of the virtual image display surface can be adjusted in diverse ways, making it possible to realize a virtual image display surface PS1 having the standing-image HUD region Z1 and the inclined-image HUD region Z2.
This will now be described concretely. As shown on the left and lower left of FIG. 4(B), the overall mode and degree of inclination of the virtual image display surface PS are adjusted according to the mode and degree of inclination of the display surface 164 of the display unit 160. In the example of FIG. 4(B), the distortion of the virtual image display surface caused by the curved surface of the windshield (reflective translucent member 2) is corrected by the curved shape of the reflective surface 179 of the curved mirror (concave mirror or the like) 170, so that a planar virtual image display surface PS results.
As shown on the right and lower left of FIG. 4(B), adjusting the positional relationship between the optical member (here, the curved mirror (concave mirror or the like) 170) and the display surface 164, in other words, for example, rotating the display surface 164 so that its relative relationship with the optical member (curved mirror 170) changes, adjusts the degree to which the inclined virtual image display surface PS rises away from the road surface 40.
As shown in FIG. 4(C), by adjusting the shape of the reflective surface of the curved mirror (concave mirror or the like) 170, which is an optical member (or by adjusting the shape of the display surface 164 of the display unit 160), and thereby changing the virtual image display distance near the end (near end) U1 of the virtual image display surface PS on the side close to the vehicle 1, the vicinity of the near end U1 is bent toward the road surface and controlled so as to stand with respect to it (in other words, it is made upright); the virtual image display surface PS1 is thereby obtained.
As shown in the upper part of FIG. 4(C), the reflective surface 179 of the curved mirror 170 can be divided into three portions: Near (near-field display portion), Center (intermediate (central) display portion), and Far (far-field display portion).
Here, Near is the portion that generates the display light E1 (shown by the dash-dot line in FIGS. 4(A) and 4(B)) corresponding to the near end U1 of the virtual image display surface PS1; Center is the portion that generates the display light E2 (shown by the broken line) corresponding to the intermediate (central) portion U2 of the virtual image display surface PS1; and Far is the portion that generates the display light E3 (shown by the solid line) corresponding to the far end U3 of the virtual image display surface PS1.
In FIG. 4(C), the Center and Far portions are the same as those of the curved mirror (concave mirror or the like) 170 used to generate the planar virtual image display surface PS shown in FIG. 4(B). In FIG. 4(C), however, the curvature of the Near portion is set smaller than in FIG. 4(B), so the magnification corresponding to the Near portion becomes larger.
The magnification of the HUD device (denoted c) can be expressed as c = b/a, where a is the distance from the display surface 164 of the display unit 160 to the windshield 2, and b is the distance over which the light reflected by the windshield (reflective translucent member 2) travels, via the viewpoint A, until it forms an image at the imaging point. When the curvature of the Near portion becomes smaller, a becomes smaller, the magnification rises, and the image is formed at a position farther from the vehicle 1. That is, in the case of FIG. 4(C), the virtual image display distance is larger than in the case of FIG. 4(B).
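The magnification relation c = b/a stated above can be worked through numerically. This is a simplified sketch; the path lengths used are hypothetical values chosen only to show the direction of the effect, not figures from the publication.

```python
# Sketch of the magnification relation c = b/a described in the text.
# 'a' is the display-surface-to-windshield path length, 'b' the path length
# from the windshield to the imaging point; both values here are hypothetical.

def hud_magnification(a_mm: float, b_mm: float) -> float:
    """Return the HUD magnification c = b / a."""
    if a_mm <= 0:
        raise ValueError("path length a must be positive")
    return b_mm / a_mm

# Reducing a (as when the curvature of the Near portion is made smaller)
# raises the magnification, consistent with the text.
c_flat = hud_magnification(a_mm=250.0, b_mm=5000.0)  # FIG. 4(B)-like case
c_near = hud_magnification(a_mm=200.0, b_mm=5000.0)  # smaller a for Near
assert c_near > c_flat
```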
Therefore, the near end U1 of the virtual image display surface is pushed away from the vehicle 1, and the near end U1 curves toward the road surface 40 as if bowing; as a result, the standing-image HUD region Z1 is formed. A virtual image display surface PS1 having the standing-image HUD region Z1 and the inclined-image HUD region Z2 is thereby obtained.
Next, refer to FIG. 5. FIG. 5 shows an example of the display method of a HUD device that displays a stereoscopic virtual image (a parallax-type HUD device or 3D HUD device). In FIG. 5, the eye point P(C) indicating the driver's (user's) viewpoint position is located at the center of the eyebox EB. If virtual imaging planes PS(L) and PS(R) corresponding to the left and right eyes are set in front of the windshield 2, the virtual image V(C) is located at the center of their overlapping region. The convergence angle of the virtual image V(C) is θd, and the virtual image V(C) is perceived by the driver (user, viewer) as a stereoscopic image.
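The publication gives no formula for the convergence angle θd, but by standard binocular geometry (an assumption added here, not taken from the patent), two eyes separated by an interpupillary distance IPD fixating a point at distance L converge by approximately θ = 2·atan(IPD / (2·L)):

```python
# Hedged sketch: convergence angle from standard binocular geometry.
# The IPD and distances below are illustrative values, not from the patent.
import math

def convergence_angle_deg(ipd_m: float, distance_m: float) -> float:
    """Convergence angle, in degrees, for a fixation point at distance_m."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

# The angle shrinks with distance, which is what lets a small θd be
# perceived as a far-away, stereoscopic virtual image.
near = convergence_angle_deg(0.065, 2.5)   # point 2.5 m ahead
far = convergence_angle_deg(0.065, 10.0)   # point 10 m ahead
assert near > far
```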
This stereoscopic virtual image V(C) can be displayed (formed) as follows. For example, light from an image IM displayed in time division is distributed by, for example, a MEMS scanner or the like to obtain display light L10 for the left eye and R10 for the right eye. The display light L10, R10 is reflected, for example, by a curved mirror (concave mirror or the like) 105 included in the optical system (the number of reflections being at least one) and thereby projected as display light K onto the windshield (projection-target member) 2; the reflected light reaches both eyes of the driver and forms an image in front of the windshield 2, so that a stereoscopic virtual image V(C) with a sense of depth is displayed (formed). The method of displaying an image with a sense of depth is not limited to the above. For example, the images for the left and right eyes may be displayed simultaneously on a flat panel or the like, and the light from each image separated using a lenticular lens or a parallax barrier to obtain the display light L10, R10 for each eye.
The present invention can be applied to a parallax-type HUD device in which parallax images (different images) are made incident on the left and right eyes; however, it is not limited to this, and it can also be applied to a HUD device in which the same image is made incident on the left and right eyes. These points will be described later.
Next, refer to FIG. 6. FIG. 6 shows an example of viewpoint-tracking warping control in a HUD device capable of displaying at least one of a virtual image of a standing image (including a pseudo standing image) and a virtual image of an inclined image (depth image). Parts common to FIGS. 1 and 3 are given the same reference numerals. In the following description, "inclined surface" is to be interpreted broadly and, where necessary, can also be interpreted to include, for example, a surface superimposed on the road surface (one whose inclination angle is zero).
The display example of FIG. 6 is the same as that described with reference to FIG. 3. The virtual image display surface PS1 has an upright surface (pseudo-upright surface) PS1a including the standing-image HUD region Z1 and an inclined surface PS1b including the inclined-image HUD region Z2. In the standing-image HUD region Z1, a vehicle speed display SP reading "120 km/h" is displayed as the first virtual image, a directly facing image (standing image). In the inclined-image HUD region Z2, a navigation arrow AW is displayed as the second virtual image, a depth image (inclined image).
As the display method, either a monocular method, in which display light of the same image enters the left and right eyes, or a parallax method, in which different parallax images enter them, may be used; FIG. 6 shows, as one example, a parallax-type display.
Projected onto the projection area 4 of the windshield 2 are an image of "120 km/h" as a first projection image G1' and arrow images for the left and right eyes as second projection images G2L' and G2R'. Each image is given in advance a distortion opposite to the distortion caused by the curved surface of the windshield 2.
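This pre-distortion is the "warping" operation of the title: each source pixel is moved through the inverse of the distortion the optics will apply, so the two cancel. As a minimal illustration (not the patent's implementation, and with nearest-neighbour lookup rather than a production-quality interpolator), a warping map can be applied by per-pixel gather, and a map composed with its inverse restores the interior of the image:

```python
# Hedged sketch of applying a warping map by per-pixel lookup.
import numpy as np

def warp_image(img: np.ndarray, map_x: np.ndarray, map_y: np.ndarray) -> np.ndarray:
    """Resample img with nearest-neighbour lookup through a warping map."""
    h, w = img.shape[:2]
    xs = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    ys = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    return img[ys, xs]

h, w = 4, 6
img = np.arange(h * w).reshape(h, w)
yy, xx = np.mgrid[0:h, 0:w].astype(float)

# A toy "distortion" (horizontal shift) followed by its inverse cancels,
# mirroring how the pre-applied warp cancels the windshield's distortion.
shifted = warp_image(img, xx + 1, yy)
restored = warp_image(shifted, xx - 1, yy)
assert (restored[:, 1:-1] == img[:, 1:-1]).all()  # interior restored
```

A real implementation would use a calibrated distortion map of the windshield and bilinear interpolation, but the cancellation principle is the same.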
 また、第1の投影画像G1’については、立像であり、視差画像による奥行きの表現は不要であることから、同じ画像(共通の画像)が投影されている。 Further, the first projected image G1'is a standing image, and since it is not necessary to express the depth by the parallax image, the same image (common image) is projected.
When the driver's viewpoints A1 and A2 are both located at the center of the eyebox EB, a perspective arrow extending straight along the road surface is visible ahead of the driver. On the near side, the vehicle speed display "120 km/h" is visible diagonally below and to the right of the arrow image. In the example of FIG. 6, this vehicle speed display is shown at a fixed position in the real-space coordinate system, and it can be displayed at all times, for example.
In this state, assume that the driver's viewpoints A1 and A2 move (shift) along the width direction (left-right direction, X direction) of the vehicle. In FIG. 6, this viewpoint movement is indicated by the broken, double-headed arrow SE. When this viewpoint shift occurs, the display seen by the driver is deformed by motion parallax. For the arrow image AW, a depth image, the deformation works favorably, giving a natural sense of perspective; for the vehicle speed display SP, a directly facing image, it works unfavorably, reducing the recognizability with which the information content can be read. It is therefore better not to apply uniform warping to both, but to warp each individually with different image corrections. This is described concretely below.
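The control policy just described, different warping per image rather than one uniform correction, can be sketched as follows. The dispatch structure and parameter names are purely illustrative assumptions; the patent specifies the policy, not this interface.

```python
# Hedged sketch (hypothetical interface): choose warping parameters per
# image. Depth images keep their natural motion parallax; facing images get
# extra correction so their on-screen shape stays stable and readable.

def warp_params_for(image_kind: str, viewpoint_x: float) -> dict:
    """Select warping parameters individually, not one uniform set for all."""
    if image_kind == "depth":     # e.g. the arrow AW
        # only compensate optical distortion; parallax is left intact
        return {"parallax_compensation": 0.0, "viewpoint_x": viewpoint_x}
    if image_kind == "facing":    # e.g. the speed display SP
        # additionally cancel the apparent deformation from eye movement
        return {"parallax_compensation": 1.0, "viewpoint_x": viewpoint_x}
    raise ValueError(f"unknown image kind: {image_kind}")

assert warp_params_for("depth", 0.02)["parallax_compensation"] == 0.0
assert warp_params_for("facing", 0.02)["parallax_compensation"] == 1.0
```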
Refer to FIG. 7. FIG. 7(A) shows the case where an inclined surface (for example, the inclined-image HUD region of the virtual image display surface) is arranged ahead and the viewpoint (taken as monocular) of the driver viewing it moves along the width direction (left-right direction) of the vehicle from a state in which it was located in the central divided region of the eyebox. FIGS. 7(B), (C), and (D) show how the inclined surface appears (the trapezoidal shape forming its outline) when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively. FIG. 7(E) shows an example of displaying a virtual image of an arrow on the inclined surface. FIGS. 7(F), (G), and (H) show how the virtual image of the arrow appears (the shape of the virtual image of the arrow) when the viewpoint is located in the left, central, and right divided regions, respectively. FIGS. 7(I) and 7(J) show how the virtual image of the arrow appears without and with motion parallax correction.
In FIG. 7(A), the entire inclined surface PS1b of the virtual image display surface PS1 is, for convenience, regarded as one display region visible to the driver, and the outline (outer shape) of that region is a rectangle in a plan view seen from the direction perpendicular to the road surface. Inside the inclined surface PS1b, a grid of two orthogonal sets of line segments is drawn, indicating that each reference point (coordinate point) at an intersection can be treated as a target of warping.
In the example of FIG. 7(A), the eyebox EB is divided into left, central, and right partial regions ZL, ZC, and ZR. Here, to simplify the explanation, it is assumed that the image is viewed with one of the two eyes. The symbols AL, AC, and AR attached to that eye (viewpoint) correspond to the divided regions ZL, ZC, and ZR of the eyebox EB.
As stated earlier, the driver's eye (viewpoint) is initially located at the center of the eyebox EB (divided region ZC). The coordinate value in this state is X0; the coordinate values after moving left and right are +X1 and -X1.
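One common way to realize viewpoint-tracking warping over such a divided eyebox is to hold one warping parameter set per subregion and select it from the detected eye coordinate. The following sketch assumes that scheme; the region boundaries, eyebox width, and table names are illustrative, not values from the publication.

```python
# Hedged sketch: classify the lateral eye coordinate into an eyebox
# subregion (ZL / ZC / ZR) and use it to pick a warping parameter table.

WARP_TABLES = {"ZL": "table_left", "ZC": "table_center", "ZR": "table_right"}

def eyebox_region(x_m: float, half_width_m: float = 0.06) -> str:
    """Classify a lateral eye coordinate (metres from the eyebox centre X0)."""
    third = 2.0 * half_width_m / 3.0   # width of each of the three regions
    if x_m < -third / 2.0:
        return "ZL"
    if x_m > third / 2.0:
        return "ZR"
    return "ZC"

assert eyebox_region(0.0) == "ZC"      # X0: eye at the centre
assert eyebox_region(-0.05) == "ZL"    # shifted left
assert eyebox_region(0.05) == "ZR"     # shifted right
assert eyebox_region(0.05) in WARP_TABLES
```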
Corresponding to the viewpoints AL, AC, and AR, the image seen by the driver changes accordingly under the influence of motion parallax, as shown in FIGS. 7(B), (C), and (D). This is because the apparent positional shift caused by lateral (left-right) movement of the viewpoint is smaller the farther a point is from the vehicle 1 and larger the closer it is. For example, when the image displayed at the back is sufficiently far away, the upper side of the trapezoid appears fixed while the lower side appears to move with the viewpoint movement, deforming the trapezoid (the isosceles trapezoid or the like seen from the front).
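The distance dependence of this parallax can be checked numerically. Under the simple pinhole assumption added here (not stated in the patent), a stationary point at distance d shifts in viewing direction by roughly atan(dx / d) when the eye moves laterally by dx, so near points shift more than far points:

```python
# Hedged numeric sketch of the motion-parallax statement above.
import math

def apparent_shift_deg(eye_shift_m: float, distance_m: float) -> float:
    """Angular shift of a stationary point when the eye moves laterally."""
    return math.degrees(math.atan2(eye_shift_m, distance_m))

near_edge = apparent_shift_deg(0.05, 3.0)   # lower (near) side of trapezoid
far_edge = apparent_shift_deg(0.05, 30.0)   # upper (far) side of trapezoid
assert near_edge > far_edge  # the near side appears to move more
```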
The outline of the inclined virtual image display surface PS1b arranged ahead of the vehicle 1, viewed from a viewpoint located at the center of the eyebox EB (in other words, viewed by the driver from directly in front), is perceived with perspective, becoming narrower with distance, and therefore looks like a trapezoid (including a parallelogram) (FIG. 7(C)). In the trapezoid of FIG. 7(C), the upper base and lower base are parallel, the upper base is shorter than the lower base, the interior angles at both ends of the upper base are equal, and the interior angles at both ends of the lower base are also equal: a so-called isosceles trapezoid. There may, however, be cases of vision in which the sense of depth is almost negligible; in that case the upper and lower bases become equal in length, the left and right legs become parallel, and the isosceles trapezoid becomes a rectangle or square (in a broad sense, a parallelogram, in which the two pairs of opposite sides are parallel). In other words, "trapezoid" is here a concept that includes parallelograms.
When the viewpoint moves (shifts) in the width direction (left-right direction) of the vehicle 1, the relative position between the viewpoint and the virtual image changes if the position of the inclined surface PS1b itself does not change. Under the influence of motion parallax, the trapezoid is therefore distorted and deformed to the left or right (see FIG. 7(B) or FIG. 7(D)).
As can be seen from FIGS. 7(B) to (D), the basic outline (outer shape) of the inclined virtual image display surface (depth HUD region Z2) is a trapezoid. When the viewpoint is at the center, an isosceles trapezoid is seen as in FIG. 7(C); when the viewpoint shifts to the left, it becomes a left-leaning trapezoid (FIG. 7(B)); and when the viewpoint shifts to the right, it becomes a right-leaning trapezoid (FIG. 7(D)).
This means that when warping (image correction) is applied to the depth HUD region Z2 as a single image region, treating its outline (outer shape) as a trapezoid, rather than as a rectangle as in the conventional case, yields an image (virtual image) with a more natural sense of depth once the distortion caused by the optical system has been removed. Therefore, warping control is performed that deforms the trapezoid so as to reflect the trapezoidal deformation caused by the movement of the viewpoint.
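The geometry behind this lean can be sketched as follows. This is an illustrative small-angle model, not the patent's algorithm; the corner coordinates and viewing distances are assumed example values.

```python
import math

def lean_from_parallax(corners, depths, eye_dx):
    """For a lateral eye shift eye_dx (m), a point viewed at distance d (m)
    shifts by roughly -eye_dx / d radians, so the near (lower) corners of the
    inclined surface move more than the far (upper) corners and the trapezoid
    leans. Corner coordinates are in degrees of visual angle."""
    leaned = []
    for (x, y), d in zip(corners, depths):
        shift_deg = math.degrees(-eye_dx / d)  # small-angle approximation
        leaned.append((x + shift_deg, y))
    return leaned

# Isosceles trapezoid seen from the eyebox centre:
# lower base viewed at 5 m, upper base at 20 m (assumed values).
trapezoid = [(-2.0, -3.0), (2.0, -3.0), (1.0, -1.0), (-1.0, -1.0)]
depths = [5.0, 5.0, 20.0, 20.0]
print(lean_from_parallax(trapezoid, depths, eye_dx=0.1))
```

With a rightward eye shift, the near corners move left by about four times as much as the far corners, which is exactly the leaning deformation shown in FIG. 7(B)/(D).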
FIGS. 7(E) to (H) show changes in the appearance of the arrow figure AW (an individual figure) displayed on the depth HUD region Z2, rather than changes of the region Z2 itself.
FIGS. 7(E) to (H) correspond to FIGS. 7(A) to (D). The arrow AW viewed from the front appears as in FIG. 7(G); when the viewpoint shifts to the left, the arrow AW appears tilted to the left as in FIG. 7(F), and when the viewpoint shifts to the right, it appears tilted to the right as in FIG. 7(H).
Similarly, when warping is performed image by image (in other words, for each image) during image correction (warping processing), basing each image region on a trapezoid, as in the cases of FIGS. 7(B) to (D), yields an image (virtual image) with a more natural sense of depth once the distortion caused by the optical system has been removed. Specifically, the arrow display used in route guidance and the like is designed to be perceived as substantially coinciding with the road surface. The apparent shape of the road surface onto which the display is superimposed changes in the same way under the viewpoint movement described above. In other words, when the road surface and the display appear to change shape in the same way as the viewpoint moves, a person recognizes the display as substantially coinciding with the road surface. This shape change therefore assists human perception for images intended to coincide with (be superimposed on) the road surface, and for displays intentionally placed in the depth direction (images understood to be far away).
Therefore, in the present embodiment, for the image region of an image with a sense of depth (which may be interpreted, as appropriate, as the entire display area or as the display area of an individual image contained in it), image correction is performed by treating the outline, as it should appear when the virtual image is correctly perceived, as a trapezoid. As a result, once the distortion caused by the optical system has been removed, the driver (user, viewer) perceives an image (virtual image) with a more natural sense of depth.
Furthermore, since a depth image (inclined image) is generated so as to produce a stereoscopic effect through parallax, it is naturally to be expected from the outset that the image will appear distorted under the influence of motion parallax. As described above, it is precisely this motion parallax that produces a natural (near-natural) stereoscopic effect (sense of perspective); the trapezoidal deformation (distortion) shown in FIGS. 7(B) and (D) therefore assists natural perception of the depth image.
Accordingly, at the image generation stage, while viewpoint-following warping is being performed, when the viewpoint shifts to the left, for example, the isosceles trapezoid is corrected so as to be perceived as leaning to the left, bringing the sense of perspective closer to natural, and the corrected image region is then given a pre-distortion with characteristics inverse to the distortion caused by the optical members (conventional warping). The display shown in FIG. 7(B) is thereby reproduced. In other words, a depth image with a more natural sense of perspective (stereoscopic effect) can be displayed.
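The two stages just described, first leaning the outline with the viewpoint and then pre-distorting it against the optics, can be sketched as follows. This is a minimal illustration under assumed toy models; the radial distortion term and the gain constants are placeholders for a real system's calibrated warp maps, not the patent's parameters.

```python
def viewpoint_following_warp(points, eye_dx, shear_gain=0.05, k=-0.02):
    """Stage 1: shear the outline in proportion to the eye shift (the
    motion-parallax lean). Stage 2: apply a toy radial pre-distortion that
    is meant to be the inverse of the optics' distortion."""
    warped = []
    for x, y in points:
        x1 = x + shear_gain * eye_dx * y            # stage 1: parallax lean
        r2 = x1 * x1 + y * y                        # stage 2: inverse optical
        warped.append((x1 * (1 + k * r2), y * (1 + k * r2)))  # distortion
    return warped
```

The center of the image is a fixed point of both stages; points farther from the center receive both a larger lean and a larger pre-distortion.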
In FIGS. 7(F) and (H), the degree of tilt of the arrow (in other words, the degree of distortion of the trapezoidal outline of the image region displaying the arrow) is appropriate, and the image causes little discomfort. To obtain this image it may be necessary, depending on the case, to set the degree of distortion appropriately in consideration of motion parallax. When this cannot be done, then, as shown in FIG. 7(I) for example, a shift of the viewpoint to the left may tilt the arrow figure strongly to the left, producing an unnatural image (virtual image). In that case, by slightly suppressing the motion parallax, an appropriate image (virtual image) such as FIG. 7(J) (which is the same as FIG. 7(F)) can be obtained.
For example, suppose the display image is a road-surface-superimposed image displayed so as to be superimposed on the road surface (for example, an arrow figure as in FIGS. 7(F) and (H)), its virtual image is displayed on a surface inclined with respect to the road, and the virtual image appears to float above the road surface (see the display examples of FIGS. 3 and 6). In that case, compared with when it does not appear to float above the road surface (when the degree of superimposition on the road is relatively high), it is preferable to apply a motion parallax correction that reduces the degree of deformation of the arrow (in other words, of the trapezoid forming the outline of the arrow image's display region).
The reason is that the virtual image of the arrow figure (the virtual image of the road-surface-superimposed image) is actually positioned nearer to the driver (viewer) than the road surface on which the arrow appears to be superimposed. In this situation motion parallax arises both for the virtual image of the arrow seen in front and for the road surface seen behind it. The road surface behind is a far-side display, so its motion parallax is smaller; the virtual image of the arrow in front is a near-side display, so its motion parallax is perceived as larger. Because human vision picks up this relative difference, the motion parallax of the arrow's virtual image feels larger than it would if the virtual image were, for example, in close contact with the road surface. As in the example of FIG. 7(I), the leftward lean of the arrow's virtual image is therefore exaggerated. In this case, applying a motion parallax correction that retains the leftward lean while reducing it, in other words one that suppresses the deformation of the trapezoid forming the outline of the arrow image's display region, yields a display with appropriate motion parallax, as in FIG. 7(J). However, unlike a facing image, the motion parallax is not cancelled outright; a certain amount remains. The amount by which motion parallax is suppressed for a depth image is therefore smaller than the amount by which it is suppressed for a facing image.
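The graded suppression just described can be sketched as a gain applied to the parallax-driven deformation. The float-height threshold and the 0.3 floor below are assumed for illustration only and are not taken from the patent.

```python
def parallax_gain(image_kind, float_height=0.0, max_height=0.5):
    """Scale factor for the parallax-driven trapezoid deformation:
    0.0  for a facing image (deformation cancelled outright),
    1.0  for a depth image tightly overlaid on the road surface,
    <1.0 (but never 0) for a depth image floating above the road."""
    if image_kind == "facing":
        return 0.0
    damped = 1.0 - float_height / max_height
    return max(0.3, min(1.0, damped))   # reduce, but keep some parallax
```

The clamp at 0.3 expresses the rule that a depth image keeps some motion parallax; only a facing image has it fully cancelled.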
Next, refer to FIG. 8. FIG. 8(A) shows a state in which an elevation surface (including a pseudo-elevation, for example the standing image HUD region of the virtual image display surface) is arranged in front, and the driver's viewpoint (taken as monocular) moves along the vehicle width direction (left-right direction) from a position in the central divided region of the eyebox. FIGS. 8(B), (C), and (D) show how the elevation appears (the rectangular shape forming its outline) when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively. FIGS. 8(E), (F), and (G) show the image correction applied to the display image in order to suppress deformation of the rectangle forming the elevation's outline. FIGS. 8(H), (I), and (J) show how the virtual image of the vehicle speed display appears (the shape of the virtual image) when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively.
FIGS. 8(A) to (D) correspond to FIGS. 7(A) to (D) shown above. In FIG. 8(A), a vehicle speed display SP reading "120 km/h" is displayed on the elevation virtual image display surface PS1a. Seen from the front, the vehicle speed display appears as in FIG. 8(C); when the viewpoint shifts to the left, the vehicle speed display SP appears tilted to the left as in FIG. 8(B), and when the viewpoint shifts to the right, it appears tilted to the right as in FIG. 8(D).
As shown in FIG. 8(C), for a facing image (standing image) displayed on an elevation (including a pseudo-elevation), the outline (outer shape) of the image region is made rectangular (a rectangle or square), as in the conventional case. The vehicle speed display SP and similar content shown on the elevation are usually displayed continuously at a fixed position; what matters is that numbers, symbols, characters, and so on can be read accurately, and no particular sense of perspective is needed. Since there is no advantage in using a trapezoid as there is for an inclined image, the conventional rectangular outline is adopted here.
However, if the viewpoint shifts left or right, the display deforms as in FIGS. 8(B) and (D). The reason is as follows. Even if warping removes the distortion of the optical system and the display light of a rectangular image reaches the viewpoint (eye), the position of the vehicle speed display SP does not change, so the arrival direction (incidence direction) of the display light differs before and after the viewpoint shift. If a distortion-free rectangle such as FIG. 8(C) was visible before the viewpoint shift, then after the shift, when the eye receives light arriving from a different angle and the brain interprets the image, the image appears distorted by motion parallax as in FIGS. 8(B) and (D).
Deformation of the image by motion parallax as in FIGS. 8(B) and (D) is undesirable from the standpoint of display legibility. In the present embodiment, therefore, when warping the vehicle speed display SP (broadly, a facing image (standing image)), the image (the rectangular outline of the image region) is first deformed in the direction opposite to the actual deformation caused by motion parallax so that such deformation (deformation of the rectangle) is suppressed (cancelled), and warping that applies distortion with characteristics inverse to the distortion arising in the optical system (conventional warping) is then performed. Images to which distortion inverse to the actually occurring distortion has been applied are shown in FIGS. 8(E) to (G).
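The cancellation can be sketched with a toy shear model: the predicted parallax lean is applied in reverse before display, so that the lean added by the viewpoint shift restores the outline to a rectangle. The shear value is an assumed example, not a calibrated quantity.

```python
def pre_cancel(points, predicted_shear):
    # Apply the inverse of the predicted motion-parallax shear in advance.
    return [(x - predicted_shear * y, y) for x, y in points]

def apply_parallax(points, shear):
    # The shear the viewer's lateral movement adds to the perceived image.
    return [(x + shear * y, y) for x, y in points]

rect = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
seen = apply_parallax(pre_cancel(rect, 0.1), 0.1)  # perceived outline
```

`seen` matches the original rectangle: the pre-distortion and the parallax deformation cancel, which is the effect shown in FIGS. 8(H) to (J).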
Accordingly, when the viewpoint shifts, light from an image deformed oppositely to the motion parallax deformation enters the viewer's eye at that viewpoint.
At this point, the deformation due to motion parallax is cancelled by the opposite deformation applied in advance, and when the brain interprets the image, it appears as a rectangular (rectangle or square) image. The images at this time are shown in FIGS. 8(H) to (J). As can be seen from FIGS. 8(H) and (J), even when a viewpoint shift occurs, the outer shape (outline) of the image region of the vehicle speed display SP remains rectangular. In other words, regardless of the viewpoint position, the outline of the image region is kept rectangular and an easy-to-read facing image is always displayed. Degradation of legibility is suppressed, and since a facing image is always obtained, no sense of incongruity arises; visibility is thus improved.
As explained above, in the present embodiment the outline of the image region of the depth image (inclined image) and the outline of the image region of the facing image (standing image) are set individually and warped, so a substantially different warping scheme is applied to each image. As a result, the depth image (inclined image) is displayed with a natural sense of perspective and improved visibility, while the facing image (standing image) is displayed in an easy-to-read form with improved legibility (a display suited to grasping information accurately).
Next, refer to FIG. 9. FIGS. 9(A) to (D) are diagrams for explaining two examples of image correction applied to the display image in order to suppress deformation of the rectangle forming the elevation's outline (one deforming the lower side, the other the upper side).
FIGS. 8(E) and (G), described above, showed that processing that deforms the rectangle so as to cancel the rectangular deformation actually caused by motion parallax is performed at warping time. Two methods of deforming this rectangle are conceivable; this point is explained concretely below.
In FIG. 9(A), the viewpoint has shifted to the right while the vehicle speed display SP is shown. In this case the vehicle speed display SP is distorted so as to lean to the left, as explained earlier. To prevent this, distortion of the opposite characteristic must be applied to the vehicle speed display image in advance so as to cancel that distortion. In other words, the image shown on the left side of FIG. 9(B) must be corrected into an image leaning to the right, as on the right side of the figure.
When performing such image correction, the lower side BC of the rectangle may be fixed and the upper side AD moved to produce the distortion, as shown in FIG. 9(C). In FIG. 9(C), the upper side AD is displaced to the upper side A'D'.
Alternatively, as shown in FIG. 9(D), the upper side AD may be fixed and the lower side BC moved to produce the distortion. In FIG. 9(D), the lower side BC is displaced to the lower side B'C'.
Either method can give the image region a deformation (distortion) that cancels the actual deformation caused by motion parallax; in this respect the effect is the same.
However, as seen from the driver (viewer), the upper side of the rectangle displayed ahead is farther away and the lower side is nearer, and in view of motion parallax, the farther a point is, the smaller its change of position is perceived to be. Consequently, deforming the rectangle by moving the upper side is expected to require a smaller movement (a smaller effective deformation) than deforming it by moving the lower side, and in this respect the two methods differ.
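Worked numbers make the difference concrete. The distances and eye shift below are assumed for illustration, not taken from the patent; the point is only that the required counter-shift scales as 1/distance.

```python
# Apparent angular shift of an edge at distance d for a lateral eye shift
# eye_dx is about eye_dx / d (small-angle approximation, radians).
eye_dx = 0.05                  # lateral eye shift, metres (assumed)
d_upper, d_lower = 20.0, 5.0   # assumed viewing distances of the two edges

move_upper = eye_dx / d_upper  # counter-shift needed at the far upper edge
move_lower = eye_dx / d_lower  # counter-shift needed at the near lower edge
print(move_upper / move_lower)
```

With these values the upper edge needs only a quarter of the movement that the lower edge would need, matching the observation that moving the upper side yields a smaller effective deformation.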
Next, refer to FIG. 10. FIG. 10(A) shows the warping control applied to the inclined image (depth image) displayed on the inclined surface; FIG. 10(B) shows the warping control applied to the standing image (facing image) displayed on the elevation; FIGS. 10(C) to (E) show an example of generating the image to be shown on the display surface by image rendering; and FIGS. 10(F) to (H) show examples of the image perceived by the driver (perceived image, perceived virtual image) according to the viewpoint position.
As explained earlier with reference to FIG. 3(C), the image display area 45 of the display surface 164 of the display unit 160 is divided, with reference to the boundary position LN' between the near side and the far side as seen by the user, into a first display area Z1' corresponding to the standing image HUD region Z1 and a second display area Z2' corresponding to the inclined image HUD region Z2.
In the second display area Z2', an arrow figure AW' for navigation is displayed as the second image RG2. In FIG. 10, the arrow figure AW' is shown as a vertically elongated ellipse for convenience.
In the present embodiment, during the warping process, the outer shape (outline) of the image region of the arrow figure AW' is first corrected according to the viewpoint position, as shown in a1 to a3 of FIG. 10(A). That is, when the viewpoint is located at the center of the eyebox EB, the outline is made, for example, an isosceles trapezoid, as in a2; if the viewpoint shifts to the left, the isosceles trapezoid is tilted and distorted to the right, as in a1; and if the viewpoint shifts to the right, it is tilted and distorted to the left, as in a3. At this time, as described above, it is preferable to control the motion parallax variably and perform warping so that an appropriate trapezoidal deformation results. If the eye position moves to the left, for example, motion parallax deforms the image so that its upper side leans strongly to the left; the warping control counteracts this by tilting the upper side slightly to the right. This is because, as described above, the virtual image surface lies in front of the road surface, so the motion parallax deformation of the virtual image surface becomes larger than that of the road surface, and the influence (deformation) of this motion parallax must be weakened.
Next, as in a4 to a6, distortion with characteristics inverse to the distortion caused by the windshield and the optical system of the HUD device (collectively referred to as the optical members) is applied to each image (the image region of each image) according to the viewpoint position. Viewpoint-following warping for the depth image (inclined image) is performed in this way.
In the first display area Z1', a vehicle speed display image SP' is displayed as the first image RG1. In FIG. 10, the vehicle speed display image SP' is shown as a horizontally elongated ellipse for convenience.
As in FIG. 10(A), the outer shape (outline) of the image region is first corrected according to the viewpoint position, as in b1 to b3 of FIG. 10(B). Next, as shown in b4 to b6, distortion with characteristics inverse to the distortion caused by the optical members is applied to each image (the image region of each image) in accordance with the viewpoint position. Viewpoint-following warping for the facing image (standing image) is performed in this way.
A case can also be envisaged in which a depth image and a facing image are mixed in one image. In that case, the image correction specific to each image, as explained with FIG. 10(A) or FIG. 10(B), is performed individually for each image (each image's region). In other words, a different image correction scheme is adopted according to the type of image, and image correction is performed individually.
Thereafter, as shown in FIGS. 10(C) to (E), the images corresponding to the viewpoint position (the images after warping) are combined, for example by image rendering, into a single image.
In each of FIGS. 10(C) to (E), the single image obtained by combination is labelled Q1 to Q3. The warping process can also be taken to include this image combination process. Because the depth image (or its image region) and the facing image (or its image region) are handled separately, corrected individually, and then combined, the correction desired for each image can be applied quickly, and the image processing itself can be simplified. The load that the image correction places on the image processing unit (which may include an image generation unit, an image rendering unit, and the like) is therefore also reduced.
FIGS. 10(F), (G), and (H) show how the virtual image appears when the viewpoint is located in the left, central, and right divided regions of the eyebox, respectively. Let the first contour Za be the region of the inclined image HUD region Z2, as seen from the center of the eyebox EB, that is bounded by a trapezoid in which at least the upper and lower sides are parallel (for example, an isosceles trapezoid). If the viewpoint shifts from the center to the left, the isosceles trapezoid tends to lean to the left under the influence of motion parallax (in other words, the upper side of the first contour Za tends to move to the left relative to the lower side). Here, by executing warping control that tilts the isosceles trapezoid to the right, as in a1, the driver whose viewpoint has shifted from the center to the left perceives the virtual image displayed in a first contour Za in which the leftward lean caused by motion parallax is reduced, as in FIG. 10(F). Conversely, if the viewpoint shifts from the center to the right, the isosceles trapezoid tends to lean to the right under the influence of motion parallax (in other words, the upper side of the first contour Za tends to move to the right relative to the lower side). By executing warping control that tilts the isosceles trapezoid to the left, as in a3, the driver whose viewpoint has shifted from the center to the right perceives the virtual image displayed in a first contour Za in which the rightward lean caused by motion parallax is reduced, as in FIG. 10(H). As a result, the motion parallax of the depth image approaches that of the background such as the road surface, realizing a display with a natural sense of perspective and improved visibility. The first contour Za may be a part of the inclined image HUD region Z2, as seen from the center of the eyebox EB, bounded by a trapezoid (for example, an isosceles trapezoid) whose upper and lower sides are parallel; if the inclined image HUD region Z2 itself is trapezoidal, it may be the entire region.
Further, let the second contour Zb be a region of the standing-image HUD region Z1, as seen with the viewpoint at the center of the eyebox EB, that is surrounded by a rectangle (including an oblong or a square). If the viewpoint shifts from the center to the left, the rectangle tends to tilt to the left under the influence of motion parallax (in other words, the upper side of the second contour Zb tends to move to the left relative to the lower side). Here, by executing warping control that tilts the rectangle to the right so as to cancel the deformation of the second contour Zb caused by motion parallax, as in b1, the virtual image displayed on the second contour Zb, in which the leftward tilt caused by motion parallax has been canceled as shown in FIG. 10(F), is visually recognized by the driver (a driver whose viewpoint has shifted to the left of center). On the other hand, if the viewpoint shifts from the center to the right, the rectangle tends to tilt to the right under the influence of motion parallax (in other words, the upper side of the second contour Zb tends to move to the right relative to the lower side). Here, by executing warping control that tilts the rectangle to the left so as to cancel the deformation of the second contour Zb caused by motion parallax, as in b3, the virtual image displayed on the second contour Zb, in which the rightward tilt caused by motion parallax has been canceled as shown in FIG. 10(H), is visually recognized by the driver (a driver whose viewpoint has shifted to the right of center). The second contour Zb may be a partial region of the standing-image HUD region Z1, as seen with the viewpoint at the center of the eyebox EB, that is surrounded by a rectangle; if the standing-image HUD region Z1 itself is rectangular, it may be the entire region.
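The tilt corrections described for the first contour Za (trapezoid, parallax only reduced) and the second contour Zb (rectangle, parallax canceled) both amount to a horizontal shear of the contour's upper edge, opposite in direction to the apparent drift. The sketch below is purely illustrative and not the patent's implementation; the corner ordering, the `gain` split, and the `parallax_coeff` sensitivity are assumptions.

```python
def preshear_top_edge(corners, viewpoint_dx, gain, parallax_coeff=0.5):
    """Pre-shear the top edge of a quadrilateral image contour.

    corners: [bottom-left, bottom-right, top-right, top-left] as (x, y)
    viewpoint_dx: lateral viewpoint offset from the eyebox center
                  (positive = shifted right)
    gain: fraction of the apparent motion-parallax tilt to pre-compensate;
          1.0 cancels it entirely (second contour Zb, facing image), while
          a value below 1.0 only reduces it (first contour Za, depth image).
    parallax_coeff: assumed sensitivity of the top-edge drift to viewpoint
                    offset (display units per unit of viewpoint offset).
    """
    # A rightward viewpoint shift makes the top edge appear to drift right
    # relative to the bottom edge, so the pre-correction shifts it left.
    shift = -gain * parallax_coeff * viewpoint_dx
    bl, br, tr, tl = corners
    return [bl, br, (tr[0] + shift, tr[1]), (tl[0] + shift, tl[1])]
```

With `gain=1.0` the upper edge is sheared by the full (assumed) parallax drift, keeping the rectangle upright for the shifted viewpoint; with a smaller gain some tilt survives, preserving the depth cue.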
In performing the image processing described above, the display layers may be separated according to whether depth is used, a separate shape-correction process may be applied to each layer, and the images may then be superimposed to form the final display image. That is, the display layer of the tilted-image HUD region Z2, which expresses depth, and the display layer of the standing-image HUD region Z1, which does not, may be separated, each layer given its own shape-correction process, and the results superimposed into the final display image. The tilted-image HUD region Z2, which expresses depth, may also be composed of a plurality of display layers that differ from region to region; specifically, it may be composed of a plurality of display layers obtained by dividing the region into a plurality of sub-regions in the depth direction. Shape correction may also be performed individually according to the display attributes of each content item.
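The layer-separated pipeline just described — warp each layer with its own correction, then overlay — can be sketched as follows. Representing images as sparse coordinate-to-pixel mappings and per-layer corrections as plain coordinate functions is an illustrative simplification, not the patent's data model.

```python
def compose_layers(layers):
    """Apply a per-layer shape correction, then overlay back to front.

    layers: list of (image, warp_fn) pairs ordered back to front, where
    image maps (x, y) -> pixel value and warp_fn remaps one coordinate.
    The depth layer (tilted-image region Z2) and the facing layer
    (standing-image region Z1) would each carry their own warp_fn.
    """
    final = {}
    for image, warp_fn in layers:  # later entries draw over earlier ones
        for (x, y), pixel in image.items():
            final[warp_fn(x, y)] = pixel
    return final
```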
Next, refer to FIG. 11. FIG. 11 is a diagram showing a configuration example of the HUD device. The upper part of FIG. 11 is the same as FIG. 4(A). Here, the configuration of the display control unit 190 is described.
The display control unit 190 has a viewpoint position detection unit 192 and a warping processing unit 194. The warping processing unit 194 has a warping control unit 195, a ROM 198 (which holds the first and second image conversion tables 199 and 200), a VRAM 201 (which stores, for example, the image data 196 and the post-warping data 197), and an image generation unit (image rendering unit) 202. The warping control unit 195 may instead be provided outside the warping processing unit 194. The viewpoint position detection unit 192 may also be provided outside the HUD device 100.
The first image conversion table 199 stores warping parameters for facing images (standing images). The second image conversion table 200 stores warping parameters for depth images (tilted images).
The warping control unit 195 controls the image generation unit (image rendering unit) 202, the ROM 198, the VRAM 201, and so on so that the viewpoint-position-tracking warping described above is performed using warping parameters corresponding to the viewpoint position information supplied from the viewpoint position detection unit 192.
The original image data 196 of the image to be displayed is stored (accumulated) in the VRAM 201. For example, the original image data 196 is read out, and the warping parameters corresponding to the viewpoint position are applied to it; this carries out the deformation correction of the image regions described above, as well as image correction that applies in advance a distortion with characteristics opposite to the distortion of the optical system and the like. The post-warping data 197 is temporarily stored in the VRAM 201 and then supplied to the image generation unit (image rendering unit) 202, where, for example, image composition processing such as that shown in FIGS. 10(C) to 10(E) is performed. As a result, one image (display image) is generated. The generated image is supplied to the display unit 160 (for example, a flat panel display such as a liquid crystal panel) and displayed on the display surface (reference numeral 164 in FIG. 3(C)).
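The two-table arrangement above — facing-image parameters in table 199, depth-image parameters in table 200, selected according to the detected viewpoint — can be sketched as a simple lookup. The zone-indexed table layout and the names below are assumptions for illustration only.

```python
class WarpingProcessor:
    """Minimal sketch of warping-parameter selection in a unit like 194."""

    def __init__(self, facing_table, depth_table):
        # Mirrors the first and second image conversion tables (199, 200):
        # each maps a viewpoint-zone index to a warping parameter set.
        self._tables = {"facing": facing_table, "depth": depth_table}

    def parameters_for(self, image_type, viewpoint_zone):
        """Pick the parameter set for the detected viewpoint position."""
        return self._tables[image_type][viewpoint_zone]
```

In use, the viewpoint position reported by the detection unit would first be quantized to a zone index, after which the image type chooses the table and the zone chooses the parameter set.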
Next, refer to FIG. 12. FIG. 12(A) is a diagram showing an example of the configuration of the main parts of the HUD device when a standing image (pseudo standing image) is displayed fixed at a predetermined position in the viewpoint coordinate system regardless of the viewpoint position, and FIG. 12(B) is a diagram showing that the display (virtual image) appears to move following the movement of the viewpoint.
In the example of FIG. 12, for the facing image (standing image), deformation (distortion) of the image region due to motion parallax is prevented by a method different from the image processing method of FIG. 8 described above.
In the example of FIG. 8 described above, the problem that the facing image is distorted as the viewpoint position shifts is handled by applying in advance a distortion (deformation of the rectangle) opposite to the distortion caused by motion parallax. In the example of FIG. 12, it is instead handled by shifting the display position of (the virtual image of) the facing image in accordance with the movement of the viewpoint.
As described above, the problem that the facing image is distorted as the viewpoint position shifts can arise because the display position of (the virtual image of) the facing image is fixed in a coordinate system set in real space.
In the example of FIG. 12, the above problem is addressed by removing the premise that the display position is fixed in the coordinate system set in real space.
In other words, in the viewpoint coordinate system (a coordinate system centered on the person's viewpoint, with axes along the front-rear, left-right, and up-down directions of the vehicle, which moves together with the person's viewpoint (including the face and so on)), control is performed that appropriately moves the position of the image (the rectangular image region) on the display surface of the display unit (or its position on the image generation surface or in the image generation space of the image generation unit) in response to viewpoint movement, so that the virtual image of the facing image is always at the same position.
This means that, in the coordinate system set in real space, the position of the virtual image moves appropriately in accordance with the viewpoint shift. In this case, for example, the relative positional relationship between the virtual image of the facing image and the person's viewpoint is always constant, motion parallax no longer needs to be considered, and the appearance does not change. Therefore, the above-described problem of the image being deformed by motion parallax and becoming harder to recognize can also be solved by this method. The fact that the display position of the image (virtual image) does not change also has the effect of giving a sense of security to the viewer, such as the driver.
In FIG. 12(A), a movement amount (rotation amount) calculation unit 193 for the viewpoint coordinate system is added to the configuration shown in the lower part of FIG. 11. Since the other parts are the same as in FIG. 11, FIG. 12(A) is simplified and shows only the main parts.
In the HUD device of FIG. 12(A), when the viewpoint position detection unit 192 detects a change in the viewpoint position (a displacement of the viewpoint) in the coordinate system (XYZ coordinate system) set in real space, that change (displacement) is detected as the movement amount (including the rotation amount) of the viewpoint coordinate system (X'Y'Z' coordinate system). The warping control unit 195 moves the viewpoint coordinate system based on the detected movement amount and performs control so that the image is displayed at a predetermined position in the moved viewpoint coordinate system (the coordinate point indicating the position relative to the viewpoint before the movement).
As a result, as shown in FIG. 12(B), when the viewpoint A moves from the coordinate point X0 to +X1, the vehicle speed display SP also moves along the left-right direction, in the same direction and by the same distance as the coordinate point. For example, if the vehicle speed display SP was visible directly in front of the driver before the viewpoint A moved, that state is maintained after the viewpoint movement. Therefore, no motion parallax occurs, and the driver can always see an undistorted virtual image of the facing image (standing image). Visibility is thus improved.
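The behavior of FIG. 12(B) — the virtual image tracking the viewpoint so that their relative position stays constant — reduces to adding the viewpoint displacement to the display position. A minimal sketch under assumed 2D coordinates (the patent does not specify this arithmetic):

```python
def tracked_display_position(base_position, viewpoint_shift):
    """Keep the facing image fixed in the viewpoint coordinate system.

    Moving the viewpoint by (dx, dy) in real space moves the displayed
    virtual image by the same vector, so the image-to-eye offset, and
    hence the apparent view, never changes and no motion parallax arises.
    """
    bx, by = base_position
    dx, dy = viewpoint_shift
    return (bx + dx, by + dy)
```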
Next, refer to FIG. 13. FIG. 13 is a diagram showing an example of the procedure in the viewpoint-tracking warping control.
In step S1, the viewpoint position is detected and warping processing is started. In step S2, the type of the image to be displayed is determined, namely whether it is a depth image (tilted image) or a facing image (standing image, pseudo standing image).
If the image is determined to be a depth image in step S2, the process proceeds to step S3.
In step S3, for example, warping is performed with the outline (contour) of the image region, as it appears when correctly viewed as a virtual image, taken to be a trapezoid (including a parallelogram). When the viewpoint position shifts in the width direction of the vehicle, image processing is performed that gives the trapezoid, in advance, a deformation with characteristics opposite to the change in appearance caused by motion parallax, so as to reduce the degree of deformation while maintaining the direction of the change in the appearance of the virtual image due to motion parallax. When the virtual image of a road-surface superimposed image is displayed, for example, on a tilted virtual image display surface and appears to float above the road surface, it is corrected so as to suppress the deformation of the virtual image due to motion parallax (however, an appropriate amount of motion parallax is retained; the motion parallax is not canceled out).
If the image is determined to be a facing image in step S2, the process proceeds to step S4.
In step S4, for example, warping is performed with the outline (contour) of the image region, as it appears when correctly viewed as a virtual image, taken to be an oblong or a square (collectively referred to as a "rectangle"). When the viewpoint position shifts in the width direction of the vehicle, a "rectangle-maintaining correction (motion parallax cancellation or suppression correction) that maintains the rectangle regardless of the viewpoint position" is performed, which gives the image (image region) in advance a distortion with characteristics opposite to the change in the appearance of the virtual image caused by motion parallax. The rectangle-maintaining correction selectively uses either a mode in which the lower side (lower base) of the rectangle is fixed and the upper side (upper base) is moved to deform it, or a mode in which the upper side (upper base) is fixed and the lower side (lower base) is moved to deform it. In addition, as necessary, control is performed that fixes the display position of the virtual image at a predetermined position in the viewpoint coordinate system (control that moves the virtual image display position in response to viewpoint movement).
In step S5, it is determined whether the image correction has finished. If Yes, the viewpoint-tracking warping process ends; if No, the process returns to step S2.
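The S1-S5 flow above can be sketched as a simple dispatch loop. The classifier and the two warp routines below are placeholders standing in for the trapezoid-based (S3) and rectangle-based (S4) processing described in the text:

```python
def viewpoint_tracking_warping(frames, classify, warp_depth, warp_facing):
    """Sketch of FIG. 13: S1 detect the viewpoint, S2 branch on image type,
    S3/S4 apply the matching warp, S5 repeat until the input is exhausted."""
    results = []
    for viewpoint, image in frames:            # S1: per-frame viewpoint
        if classify(image) == "depth":         # S2: image-type decision
            results.append(warp_depth(image, viewpoint))    # S3: trapezoid
        else:
            results.append(warp_facing(image, viewpoint))   # S4: rectangle
    return results                             # S5: processing finished
```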
As described above, according to the embodiment of the present invention, warping control that ensures more natural visibility can be realized for, for example, a depth image (tilted image) premised on stereoscopic perception, or a facing image (standing image, pseudo standing image) that, in principle, is not premised on stereoscopic perception.
HUD devices in recent years tend to be developed on the premise of displaying a virtual image over a fairly wide range in front of the vehicle; in this case, the virtual image display area on the windshield is enlarged, and a variety of displays become possible.
Images of different types can also be displayed at the same time; in this case, however, the required warping method is expected to differ for each image type. According to the present invention, the warping method can be changed according to the type of image, so the visibility of these varied displays is not degraded. Higher functionality and higher performance of the HUD device can therefore be achieved.
The present invention can be used in either a monocular HUD device, in which display light of the same image enters both the left and right eyes, or a parallax HUD device, in which images having parallax enter the left and right eyes respectively.
In this specification, the term "vehicle" can be interpreted broadly, as a means of transport in general. Terms related to navigation (for example, signs) are likewise to be interpreted broadly, taking into account, for example, navigation information in the broad sense useful for vehicle operation. HUD devices also include those used as simulators (for example, aircraft simulators or simulators serving as game machines).
FIGS. 14(A) and 14(B) are diagrams showing modified examples of the virtual image display surface. The cross-sectional shape of the virtual image display surface PS1 as seen from the width direction (left-right direction, X direction) of the vehicle is not limited to the above-described shape that is convex toward the driver. As shown in FIG. 14(A), the virtual image display surface PS1 may be concave toward the driver. As shown in FIG. 14(B), the virtual image display surface PS1 need not be curved.
The present invention is not limited to the exemplary embodiments described above, and a person skilled in the art can easily modify the exemplary embodiments described above within the scope included in the claims.
1 ... vehicle (own vehicle), 2 ... projected member (reflective translucent member, windshield, etc.), 4 ... projection region, 5 ... virtual image display region, 7 ... steering wheel, 51 ... display light, 100 ... HUD device, 120 ... optical system including an optical member, 150 ... light projecting unit (image projection unit), 160 ... display unit (for example, a liquid crystal display device, a screen, etc.), 164 ... display surface, 170 ... curved mirror (concave mirror, etc.), 179 ... reflective surface, 188 ... imaging unit (pupil detection unit, face imaging unit, etc.), 190 ... display control unit, 192 ... viewpoint position detection unit, 193 ... movement amount (including rotation amount) calculation unit of the viewpoint coordinate system, 194 ... warping processing unit, 195 ... warping control unit, 196 ... original image data, 197 ... post-warping data, 199 ... first image conversion table, 200 ... second image conversion table, 201 ... VRAM (storage device for image processing), 202 ... image generation unit (image rendering unit), EB ... eyebox, WP ... warping parameter, PS1a ... virtual image display surface of an elevation (including a pseudo elevation), PS1b ... virtual image display surface of an inclined surface (including one substantially superimposed on the road surface), V ... virtual image, Z1 ... standing-image HUD region, Z2 ... tilted-image HUD region

Claims (7)

  1.  A head-up display (HUD) device that is mounted on a vehicle and projects an image onto a projected member provided on the vehicle so that a driver visually recognizes a virtual image of the image, the device comprising:
     an image generation unit that generates the image;
     a display unit that displays the image;
     an optical system including an optical member that reflects display light of the image and projects it onto the projected member; and
     a control unit that updates warping parameters according to the driver's viewpoint position in an eyebox and performs viewpoint-position-tracking warping control that corrects the image displayed on the display unit using the warping parameters,
     wherein the control unit
     performs, when warping processing is performed on a depth image, warping control such that the contour of the image region of the depth image, when correctly viewed as a virtual image, is a trapezoid in which at least one pair of opposite sides is parallel, and
     performs, when warping processing is performed on a facing image, warping control such that the contour of the image region of the facing image, when correctly viewed as a virtual image, is a rectangle, including an oblong or a square.
  2.  The head-up display device according to claim 1, wherein the control unit
     performs, when the viewpoint moves along the width direction of the vehicle while warping processing is being performed on the depth image, warping control that deforms the trapezoid so that the deformation of the trapezoid caused by that movement is reflected, and
     performs, when the viewpoint moves along the width direction of the vehicle while warping processing is being performed on the facing image, warping control that deforms the rectangle so as to cancel the deformation of the rectangle caused by that movement.
  3.  The head-up display device according to claim 2, wherein, when deforming the rectangle so as to cancel its deformation in the warping control for the facing image, the control unit
     fixes the lower side of the rectangle and moves the upper side to deform it, or
     fixes the upper side and moves the lower side to deform it.
  4.  The head-up display device according to claim 1, wherein, when the viewpoint moves along the width direction of the vehicle while warping processing is being performed on the facing image, the control unit changes the position of the facing image so that the virtual image of the facing image is fixed at a predetermined position in a viewpoint coordinate system based on the viewpoint.
  5.  The head-up display device according to any one of claims 1 to 4, wherein, when performing a display in which the depth image and the facing image coexist, the control unit performs image processing on the depth image and the facing image individually and then combines the processed images into one image.
  6.  The head-up display device according to any one of claims 1 to 5, wherein the control unit,
     when a region of the image region of the depth image, viewed from a predetermined first position in the left-right direction of the eyebox, that is surrounded by a trapezoid in which at least one pair of opposite sides is parallel is defined as a first contour,
     performs the warping control such that, with respect to the deformation of the trapezoid caused by the movement, the upper side of the first contour is to the right relative to the lower side when viewed from the left of the first position, and the upper side of the first contour is to the left relative to the lower side when viewed from the right of the first position, and,
     when a region of the image region of the facing image, viewed from the first position in the left-right direction of the eyebox, that is surrounded by a rectangle, including an oblong or a square, is defined as a second contour,
     performs the warping control such that the position of the upper side of the second contour does not change relative to the position of the lower side when viewed from the left or the right of the first position.
  7.  The head-up display device according to claim 2, wherein, when the control unit performs warping control that deforms the trapezoid so that the deformation of the trapezoid caused by movement of the viewpoint along the width direction of the vehicle during warping processing on the depth image is reflected,
     if the display image is a road-surface superimposed image displayed so as to be superimposed on the road surface, and
     the virtual image of the road-surface superimposed image is displayed on an inclined surface that is inclined with respect to the road surface and appears to float above the road surface, the control unit performs warping control accompanied by motion parallax correction that reduces the degree of deformation of the trapezoid compared with the case where the virtual image does not appear to float above the road surface.
PCT/JP2021/020328 2020-05-29 2021-05-28 Head-up display device WO2021241718A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022526655A JPWO2021241718A1 (en) 2020-05-29 2021-05-28

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020094169 2020-05-29
JP2020-094169 2020-05-29

Publications (1)

Publication Number Publication Date
WO2021241718A1 true WO2021241718A1 (en) 2021-12-02

Family

ID=78744678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/020328 WO2021241718A1 (en) 2020-05-29 2021-05-28 Head-up display device

Country Status (2)

Country Link
JP (1) JPWO2021241718A1 (en)
WO (1) WO2021241718A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4212378A1 (en) * 2022-01-11 2023-07-19 Hyundai Mobis Co., Ltd. System for vehicle display image warping

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014199385A (en) * 2013-03-15 2014-10-23 日本精機株式会社 Display device and display method thereof
JP2016053691A (en) * 2014-09-04 2016-04-14 矢崎総業株式会社 In-vehicle projection display device
JP2016068577A (en) * 2014-09-26 2016-05-09 矢崎総業株式会社 Head-up display device

Also Published As

Publication number Publication date
JPWO2021241718A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
JP6248931B2 (en) Image display device and image display method
JP2014026244A (en) Display device
WO2018105533A1 (en) Image projection device, image display device, and moving body
CN101720445A (en) Scanning image display device, eyeglasses-style head-mount display, and automobile
WO2018088362A1 (en) Head-up display
JP2012079291A (en) Program, information storage medium and image generation system
US10809526B2 (en) Display system and movable object
JP7126115B2 (en) DISPLAY SYSTEM, MOVING OBJECT AND DESIGN METHOD
WO2021241718A1 (en) Head-up display device
JP7358909B2 (en) Stereoscopic display device and head-up display device
JP2018132685A (en) Head-up display device
JP2012083534A (en) Transmission display device
JP6864580B2 (en) Head-up display device, navigation device
JP7110968B2 (en) head-up display device
WO2018199244A1 (en) Display system
JP7354846B2 (en) heads up display device
WO2022024962A1 (en) Head-up display device
CN114127614B (en) Head-up display device
JP2022114602A (en) Display control device, display device, display system, and image display control method
WO2021002428A1 (en) Head-up display device
JP2022036432A (en) Head-up display device, display control device, and method for controlling head-up display device
WO2022024964A1 (en) Head-up display device
WO2020009218A1 (en) Head-up display device
KR20220027494A (en) Head up display and control method thereof
WO2021065699A1 (en) Display control device and head-up display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21813672

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022526655

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21813672

Country of ref document: EP

Kind code of ref document: A1