CN113126293A - Head-up display system - Google Patents

Head-up display system

Info

Publication number: CN113126293A
Application number: CN202010026607.2A
Authority: CN (China)
Prior art keywords: image source, prompt, information, distance, vehicle
Legal status: Granted; active
Other languages: Chinese (zh)
Other versions: CN113126293B
Inventors: 方涛 (Fang Tao), 徐俊峰 (Xu Junfeng), 吴慧军 (Wu Huijun)
Assignee (original and current): Future Beijing Black Technology Co., Ltd.
Application filed by Future Beijing Black Technology Co., Ltd.
Priority: CN202010026607.2A; PCT/CN2021/070945 (published as WO2021139792A1)
Publication: CN113126293A; application granted and published as CN113126293B

Classifications

    • G02B 27/0101 Head-up displays characterised by optical features
    • B60Q 9/00 Arrangement or adaptation of signal devices not provided for in one of main groups B60Q 1/00-B60Q 7/00, e.g. haptic signalling
    • G02B 27/01 Head-up displays
    • G03B 21/20 Projectors or projection-type viewers; details; lamp housings
    • G03B 21/2066 Reflectors in illumination beam
    • G03B 21/28 Reflectors in projection beam
    • G02B 2027/0183 Adaptation to parameters characterising the motion of the vehicle (display position adjusting means not related to the information to be displayed)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • Instrument Panels (AREA)

Abstract

The present invention provides a head-up display system including an image source group, a curved mirror, a projection image source, a light control device and a driving assistance controller; the driving assistance controller is connected to the projection image source and to each of the plurality of image sources in the image source group. When prompt content needs to be displayed, the driving assistance controller selects a target image source from among the projection image source and the plurality of image sources in the image source group, and outputs the prompt content to the target image source so that the target image source displays it. The head-up display system provided by the embodiments of the invention can form images at a plurality of imaging positions located at different distances from the reflection device, so that the images formed by the image sources can be fitted to objects at different distances, improving the fitting effect. In addition, the light control device can be arranged over a large area, so that a large projection imaging area is formed on the surface of the reflection device, realizing wide-range imaging and improving the display effect of the reflection device.

Description

Head-up display system
Technical Field
The invention relates to the technical field of vehicles, in particular to a head-up display system.
Background
Head-up display (HUD) technology spares the driver the distraction of lowering his or her head to check the instrument panel while driving, which improves driving safety and also provides a better driving experience. HUDs that use the automobile windshield for imaging are therefore receiving increasing attention.
An augmented reality head-up display (AR-HUD) uses a specially designed internal optical system to superimpose driving information clearly and vividly within the driver's line of sight, further enhancing the driver's perception of the actual driving environment. The rise of the AR-HUD places higher technical demands on the HUD industry.
By the AR-HUD principle, the image projected by the image source needs to fit the real environment or a real object closely; a direction-indicating arrow, for example, must be accurately fused with the road. During driving, however, the distance between the vehicle and an object outside it that the AR image should fit (such as another vehicle, a pedestrian or a sign board) keeps changing, ranging from a few meters to tens of meters or even more. An existing AR-HUD can only image at one fixed distance, so the real environment and the image cannot be fused well.
Disclosure of Invention
To solve the above problems, it is an object of an embodiment of the present invention to provide a head-up display system.
An embodiment of the present invention provides a head-up display system, including an image source group, a curved mirror, a projection image source, a light control device and a driving assistance controller; the driving assistance controller is connected to the projection image source and to each of the plurality of image sources in the image source group;
the image source group comprises a first image source and a second image source, and the light control device comprises a main optical axis control element and a dispersion element; the curved mirror and the light control device are arranged on the same side of a reflection device;
the first image source is used for emitting first imaging light rays incident to the curved mirror, and the second image source is used for emitting second imaging light rays incident to the curved mirror; a first object distance corresponding to the first image source is different from a second object distance corresponding to the second image source, wherein the first object distance is the length of a propagation path of the first imaging light from the first image source to the curved mirror, and the second object distance is the length of a propagation path of the second imaging light from the second image source to the curved mirror;
the curved mirror is used for reflecting the first imaging light and the second imaging light to the reflection device, which in turn reflects them into the eye box range;
the projection image source is used for emitting projection imaging light incident on the light control device; the main optical axis control element is used for reflecting multiple paths of the projection imaging light to the reflection device, which in turn reflects them into the same observation range, the observation range being a position or an area within the eye box range;
the dispersion element is arranged on the side of the main optical axis control element facing the projection image source, between the main optical axis control element and the projection image source, and is used for dispersing the projection imaging light reflected by the main optical axis control element to form a light spot;
when prompt content needs to be displayed, the driving assistance controller selects a target image source from the projection image source and the plurality of image sources in the image source group, and outputs the prompt content to the target image source so that the target image source displays the prompt content.
In the solution provided by the embodiment of the invention, the plurality of image sources in the image source group, which have different object distances, form images at a plurality of imaging positions located at different distances from the reflection device, so that the images formed by the image sources can be fitted to objects at different distances; this improves the fitting effect and avoids the large parallax that arises when the image formed by an image source is far from the object. Meanwhile, the light control device can converge projection imaging light with different incident angles into the same observation range and then disperse it over the eye box range, which raises the brightness of the projection imaging light while preserving the imaging range; the light control device can also be laid out over a large area, so that a large projection imaging area is formed on the surface of the reflection device, realizing wide-range imaging. The driving assistance controller selects a suitable image source as the target image source and controls it to display the prompt content on the surface of the reflection device, so that the reflection device can present a wide-range or multi-level image and its display effect is improved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a first structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of the structure in which the projection image source forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of the projection image source imaging in the head-up display system provided by an embodiment of the invention;
FIG. 4 is a schematic diagram of the head-up display system provided by an embodiment of the invention forming images outside the reflection device;
FIG. 5 is a first schematic diagram of imaging areas on the reflection device provided by an embodiment of the invention;
FIG. 6 is a second schematic diagram of imaging areas on the reflection device provided by an embodiment of the invention;
FIG. 7 is a schematic diagram of a second structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 8 is a schematic diagram of a third structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 9 is a schematic diagram of a fourth structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 10 is a schematic diagram of a fifth structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 11 is a schematic diagram of a sixth structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 12 is a schematic diagram of a seventh structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 13 is a schematic diagram of an eighth structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 14 is a schematic diagram of a ninth structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 15 is a schematic diagram of a tenth structure in which the image source group forms an image in the head-up display system provided by an embodiment of the invention;
FIG. 16 is a schematic diagram of the imaging principle of a discrete first reflective structure in the head-up display system provided by an embodiment of the invention;
FIG. 17 is a schematic diagram of the imaging principle of a continuous second reflective structure in the head-up display system provided by an embodiment of the invention;
FIG. 18 is a schematic diagram of a main optical axis control element with a second reflective structure in the head-up display system provided by an embodiment of the invention;
FIG. 19 is a schematic diagram of the overall structure of the head-up display system provided by an embodiment of the invention;
FIG. 20 is a flowchart of the driving assistance controller performing driving assistance based on a safe distance according to an embodiment of the invention;
FIG. 21 is a schematic diagram of the display of the reflection device when the vehicle distance is short in an embodiment of the invention;
FIG. 22 is a schematic diagram of the reflection device displaying a bird's eye view in an embodiment of the invention;
FIG. 23 is a schematic diagram of the display of the reflection device when a pedestrian approaches in an embodiment of the invention;
FIG. 24 is a flowchart of the driving assistance controller determining whether lane departure has occurred according to an embodiment of the invention;
FIG. 25 is a schematic diagram of the display of the reflection device during lane departure in an embodiment of the invention;
FIG. 26 is another schematic diagram of the display of the reflection device during lane departure in an embodiment of the invention.
Reference numerals:
11: first image source; 12: second image source; 13: third image source; 111: first magnified imaging position; 121: second magnified imaging position; 131: third magnified imaging position; 20: curved mirror; 21: first transflective element; 22: second transflective element; 23: plane mirror; 30: projection image source; 301: projection imaging position; 40: light control device; 41: main optical axis control element; 411: first reflective structure; 412: second reflective structure; 42: dispersion element; 50: reflection device; 51: magnified imaging area; 52: projection imaging area; 501: text; 502: rectangular frame; 503: bird's eye view; 504: arrow; 61: observation range; 62: eye box range; 71: front vehicle; 72: rear vehicle; 73: local vehicle; 74: pedestrian; 75: current lane of travel.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The head-up display system provided by the embodiments of the invention is used to realize multi-level imaging, wide-range imaging and even full-area imaging, so that information can be displayed over a large area of the vehicle, or the formed image fits external objects more closely, thereby reducing parallax. Specifically, the head-up display system includes an image source group, a curved mirror 20, a projection image source 30, a light control device 40 and a driving assistance controller; the driving assistance controller is connected to the projection image source 30 and to each of the plurality of image sources in the image source group.
The image source group comprises a first image source 11 and a second image source 12, and may further comprise more image sources, for example a third image source 13. The first image source 11 and the second image source 12 are each connected to the driving assistance controller and together realize multi-level imaging. The light control device 40 includes a main optical axis control element 41 and a dispersion element 42; the projection image source 30 and the light control device 40 realize wide-range imaging. The curved mirror 20 and the light control device 40 are disposed on the same side of the reflection device 50. The reflection device 50 may be the windshield of the vehicle, or a reflective film inside the windshield that reflects the imaging light emitted by the head-up display system without preventing the driver from observing objects or scenes outside the vehicle through it; accordingly, the curved mirror 20 and the light control device 40 are both located inside the vehicle, that is, on the inner side of the reflection device 50. Specifically, the curved mirror 20 and the light control device 40 in this embodiment may be disposed below the reflection device 50, for example on the instrument panel (IP) of the automobile.
Referring to fig. 1, the first image source 11 is used for emitting first imaging light incident on the curved mirror 20, and the second image source 12 is used for emitting second imaging light incident on the curved mirror 20. The first object distance corresponding to the first image source 11 differs from the second object distance corresponding to the second image source 12; the first object distance is the length of the propagation path of the first imaging light from the first image source 11 to the curved mirror 20, and the second object distance is the length of the propagation path of the second imaging light from the second image source 12 to the curved mirror 20. The curved mirror 20 reflects the first and second imaging light to the reflection device 50, which in turn reflects them into the eye box range 62.
Referring to fig. 2 and 3, the projection image source 30 is configured to emit multiple paths of projection imaging light incident on the light control device 40; the main optical axis control element 41 is configured to reflect the multiple paths of projection imaging light to the reflection device 50, which reflects them into the same observation range 61, where the observation range 61 is a position or an area within the eye box range 62. The dispersion element 42 is disposed on the side of the main optical axis control element 41 facing the projection image source 30, between the main optical axis control element 41 and the projection image source 30, and is configured to disperse the projection imaging light reflected by the main optical axis control element 41 to form a light spot.
When the prompt content needs to be displayed, the driving assistance controller selects a target image source from the projection image source 30 and the plurality of image sources in the image source group, and outputs the prompt content to the target image source, so that the target image source displays the prompt content.
In the embodiment of the present invention, the first image source 11 and the second image source 12 in the image source group form images separately, and the first object distance of the first image source 11 differs from the second object distance of the second image source 12. According to the imaging principle of the curved mirror 20, the closer the length of the propagation path of the imaging light (e.g. the first, second or third imaging light) from the image source (e.g. the first image source 11, the second image source 12 or the third image source 13) to the curved mirror 20 is to the focal length of the curved mirror 20, the farther away the curved mirror 20 forms the image, that is, the larger the image distance of the image formed by the curved mirror 20. Accordingly, the image formed by the curved mirror 20 produces a corresponding virtual image on the other side of the reflection device 50 under the action of the reflection device 50, and the larger the image distance of the image formed by the curved mirror 20, the farther the virtual image formed by the reflection device 50 is from the reflection device 50. As shown in fig. 4, the image source group is taken to include three image sources (the first image source 11, the second image source 12 and the third image source 13) as an example; the three image sources are imaged at the first magnified imaging position 111, the second magnified imaging position 121 and the third magnified imaging position 131 respectively, so that the head-up display system can form images at different distances from the reflection device 50, i.e. a multi-level image. Optionally, the first object distance and the second object distance are both smaller than the focal length of the curved mirror 20, so that the curved mirror 20 forms magnified virtual images of the first image source 11 and the second image source 12.
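The relationship just described follows from the Gaussian mirror equation, 1/do + 1/di = 1/f. The patent itself gives no formulas, so the following is only a minimal numerical sketch under assumed values (a 0.30 m focal length and a few object distances just inside it); the function name and the numbers are illustrative, not taken from the embodiment.

    def virtual_image_distance(object_distance_m, focal_length_m):
        # Gaussian mirror equation: 1/do + 1/di = 1/f.
        # For a concave mirror with do < f, di comes out negative, i.e. the image
        # is a magnified virtual image behind the mirror; return its distance.
        di = 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)
        return abs(di)

    # Assumed focal length 0.30 m and object distances approaching it:
    for do in (0.20, 0.25, 0.28, 0.295):
        print(do, round(virtual_image_distance(do, 0.30), 2))
    # Prints image distances of roughly 0.6, 1.5, 4.2 and 17.7 m: the closer the
    # object distance is to the focal length, the farther the virtual image.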
In this embodiment, the imaging light emitted from the image source in the image source group can be incident to the reflection device 50 after being reflected by the curved mirror 20, and is reflected by the reflection device 50 into the eye box range, so that the driver can observe the image formed by the image source in the eye box range. The eye box range in this embodiment refers to a range in which the driver can view an image on the reflection device 50, approximately corresponding to the position of the head of the driver; the size of the eye box range may be specifically determined based on the actual situation.
In addition, as shown in fig. 2, the projection imaging light emitted from the projection image source 30 may also be incident on the reflection device 50 after being processed by the light control device, and is reflected by the reflection device 50 into the eye box range, so that the driver can view a virtual image formed outside the reflection device 50. In particular, referring to fig. 4, the reflection device 50 may form an image of the projection image source 30 at the projection imaging position 301.
Specifically, referring to fig. 3, the projection imaging light A emitted from the projection image source 30 passes through the dispersion element 42 and then reaches the main optical axis control element 41; the dispersion element 42 disperses the projection imaging light A a first time, a process not illustrated in fig. 3 for ease of description. The main optical axis control element 41 then reflects the incident projection imaging light A; as shown in fig. 3, without the dispersion element 42, the projection imaging light A would travel along optical path a toward the observation range 61. With the dispersion element 42 arranged on the outer side of the main optical axis control element 41, the dispersion element 42 disperses the projection imaging light A a second time, splitting it into a plurality of light rays (including light ray a1, light ray a2 and the like) spread over a range to form a light spot, which can serve as the eye box range 62, so that the driver can view the image of the projection image source 30 within the eye box range 62.
Optionally, the predetermined shape of the light spot includes, but is not limited to, a circle, an ellipse, a square, a rectangle or a bat-wing shape. In this embodiment, the size of the light spot is determined by the two dispersions, and its shape is determined by the shape of the dispersion element 42; fig. 3 takes a rectangular light spot as an example. The observation range 61 may be a point or an area, that is, the main optical axis control element 41 converges the projection imaging light emitted from the projection image source 30 into the observation range 61. The dispersion angle of the dispersed light spot in the side-view direction may be 10 degrees, preferably 5 degrees; the dispersion angle in the front-view direction may be 50 degrees, preferably 30 degrees. The dispersion element 42 includes, but is not limited to, a diffractive optical element (DOE), such as a beam shaper; light passing through the diffractive optical element spreads out to form a spot of a specific geometric shape, whose size and shape are determined by the microstructure of the diffractive optical element. The dispersion element 42 controls the degree to which the light spreads, dispersing the light coming from the main optical axis control element 41 at a certain angle so that the required eye box range can be covered. The final imaging brightness and viewing angle are determined by the spread angle and spot size of the dispersed light: the smaller the dispersion angle, the higher the imaging brightness and the smaller the viewing angle, and vice versa.
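As a rough geometric sketch of the brightness and viewing-angle trade-off stated above (the patent gives no formula; the eye-box distance and the fixed-flux normalization below are assumptions made only for illustration), the spot width at the eye box grows with the dispersion angle, and for a fixed luminous flux the brightness falls roughly in proportion to the spot area:

    import math

    def spot_width_m(distance_m, full_angle_deg):
        # Width of the dispersed light spot at a given distance for a full dispersion angle.
        return 2.0 * distance_m * math.tan(math.radians(full_angle_deg) / 2.0)

    # Assumed 0.9 m from the light control device to the eye box; the angles are the
    # example values from the paragraph above (front view 50/30 deg, side view 10/5 deg).
    w50, w30 = spot_width_m(0.9, 50), spot_width_m(0.9, 30)
    h10, h5 = spot_width_m(0.9, 10), spot_width_m(0.9, 5)
    brightness_gain = (w50 * h10) / (w30 * h5)
    print(round(w50, 2), round(h10, 2), round(w30, 2), round(h5, 2), round(brightness_gain, 1))
    # The 30 x 5 degree spot is smaller than the 50 x 10 degree one, so for the same
    # flux it is about 3.5 times brighter, at the cost of a smaller viewing range.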
In this embodiment, the main optical axis control element 41 converges the projection imaging light arriving at different incident angles into the same observation range 61, which raises the brightness of the projection imaging light; at the same time, the light is spread by the dispersion element 42, so that the driver can easily see the image formed by the projection image source 30 anywhere within the light spot, expanding the imaging range while improving brightness. Because the light control device 40 converges the projection imaging light, the driver can observe the virtual image formed by the reflection device 50 without the projection image source 30 needing a particularly high brightness. The light control device 40 may also have a large area, so that it can reflect the projection imaging light onto a large portion of the surface of the reflection device 50; in particular, the light control device 40 may be laid over the surface of the instrument panel of the vehicle.
Specifically, as shown in fig. 5 and 6, the projection imaging light is incident on a region of the surface of the reflection device 50, namely the projection imaging area 52, and through the projection imaging area 52 the driver can view the virtual image formed at the projection imaging position 301, that virtual image being the image formed by the reflection device 50 that corresponds to the projection image source 30. Accordingly, the region of the surface of the reflection device 50 on which the imaging light emitted by the image source group is incident is the magnified imaging area 51; that is, through the magnified imaging area 51 the driver can view the virtual image at the corresponding magnified imaging position (for example, the first magnified imaging position 111, the second magnified imaging position 121 or the third magnified imaging position 131), that virtual image being the image formed by the reflection device 50 that corresponds to an image source in the image source group. Different image sources in the image source group may correspond to different magnified imaging areas; fig. 5 and 6 show three magnified imaging areas, with the first image source 11, the second image source 12 and the third image source 13 each corresponding to a different one. For example, the first imaging light emitted from the first image source 11 may be incident on one magnified imaging area on the surface of the reflection device 50, and the reflection device 50 forms the virtual image corresponding to the first image source 11 at the first magnified imaging position 111, so that the driver can view that virtual image through this magnified imaging area.
Optionally, since the projection image source 30 can form images over a larger range, the area of the projection imaging area 52 on the reflection device 50 is larger than that of the magnified imaging area 51. Specifically, the magnified imaging area 51 may be located within the projection imaging area 52, as shown in fig. 5; the magnified imaging area 51 and the projection imaging area 52 may be two separate areas, as shown in fig. 6; or the two areas may partially overlap.
It should be noted that, owing to scattering and other effects, light emitted by the image source group and by the projection image source 30 may reach the whole of the reflection device 50; but since only light that is reflected by the reflection device 50 into the eye box range 62 is seen by the driver, the term "imaging light" in this embodiment refers to light emitted by an image source in the image source group that can form an image within the eye box range 62, and correspondingly "projection imaging light" refers to light emitted by the projection image source that can form an image within the eye box range 62. That is, on the surface of the reflection device 50, only the area on which such light is incident is regarded as the magnified imaging area or the projection imaging area.
In the embodiment of the present invention, the head-up display system further includes the driving assistance controller, which determines the prompt content to be displayed and decides which image source is to display it. The driving assistance controller selects a target image source from among the projection image source 30 and the plurality of image sources in the image source group and outputs the prompt content to the target image source, so that the target image source displays the prompt content and the reflection device 50 forms a virtual image at the corresponding imaging position; the prompt content can thus be shown in the imaging area for the driver to view. For example, if the vehicle speed currently needs to be shown in the projection imaging area, the vehicle speed serves as the prompt content; since the projection image source 30 is what lets the observer view an image in the projection imaging area, the projection image source 30 is taken as the target image source.
It should be noted that "displaying the prompt content in the imaging area" in this embodiment means that the driver can view the prompt content through the imaging area, so that from the driver's perspective the prompt content appears to be displayed in the imaging area, whereas the virtual image corresponding to the prompt content is actually located outside the reflection device 50, for example at the imaging positions in fig. 4 (including the first magnified imaging position 111, the second magnified imaging position 121, the third magnified imaging position 131 and the projection imaging position 301). Descriptions identical or similar to "displaying the prompt content in the imaging area" in this embodiment (for example, "displaying the prompt content at the prompt position" below) are used only for convenience and do not mean that the imaging area itself displays the prompt content.
In the head-up display system provided by the embodiment of the invention, the plurality of image sources in the image source group, which have different object distances, form images at a plurality of imaging positions located at different distances from the reflection device, so that the images formed by the image sources can be fitted to objects at different distances; this improves the fitting effect and avoids the large parallax that arises when the image formed by an image source is far from the object. Meanwhile, the light control device can converge projection imaging light with different incident angles into the same observation range and then disperse it over the eye box range, which raises the brightness of the projection imaging light while preserving the imaging range; the light control device can also be laid out over a large area, so that a large projection imaging area is formed on the surface of the reflection device, realizing wide-range imaging. The driving assistance controller selects a suitable image source as the target image source and controls it to display the prompt content on the surface of the reflection device, so that the reflection device can present a wide-range or multi-level image and its display effect is improved.
On the basis of the above embodiment, the driving assistance controller, besides determining the prompt content that needs to be displayed, may also determine the prompt position, i.e. the position on the reflection device 50 where the prompt content is to be displayed. The driving assistance controller then determines an imaging area that contains the prompt position, takes the image source corresponding to that imaging area as the target image source, and instructs the target image source to display the prompt content at the prompt position; an imaging area is an area of the surface of the reflection device 50 on which imaging light can be incident.
In the embodiment of the present invention, each piece of prompt content has a corresponding prompt position, which may be preset or determined from the current scene. For example, if the prompt content is the vehicle speed and the vehicle speed is preset to be displayed at the lower left of the reflection device 50, the corresponding position at the lower left of the reflection device 50 is used directly as the prompt position. Alternatively, when there is a pedestrian outside the vehicle, a graphic corresponding to the pedestrian's position needs to be formed to alert the driver; that graphic is the prompt content, and the position on the reflection device 50 where it needs to be displayed is the prompt position. Specifically, the position at which the pedestrian is projected onto the reflection device 50 may be used as the prompt position. In this embodiment the prompt position may be a single point or a range of positions, determined according to the actual situation.
In this embodiment, different image sources correspond to different imaging areas on the reflection device 50. If the prompt position lies within a certain imaging area, the image source corresponding to that imaging area may be used as the target image source, and the corresponding prompt content is displayed at the prompt position by that target image source. For example, if the prompt position lies within the magnified imaging area corresponding to the first image source 11, the first image source 11 may be the target image source. If different imaging areas overlap and the prompt position lies in several of them, one imaging area may be chosen, either at random or according to a preset selection rule.
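The selection logic described in the preceding paragraphs can be sketched as follows. This is only an illustrative reading of the text, not code from the patent; representing each imaging area as an axis-aligned rectangle on the reflection device, and the identifiers used, are assumptions.

    from dataclasses import dataclass
    from typing import List, Optional, Sequence, Tuple

    @dataclass
    class ImagingArea:
        source_id: str   # e.g. "first image source 11" or "projection image source 30"
        x0: float        # corners of the area on the reflection device,
        y0: float        # modelled here as an axis-aligned rectangle (an assumption)
        x1: float
        y1: float

        def contains(self, x: float, y: float) -> bool:
            return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    def select_target_source(prompt_xy: Tuple[float, float],
                             areas: Sequence[ImagingArea],
                             preferred_order: Optional[List[str]] = None) -> Optional[str]:
        # Return the image source whose imaging area contains the prompt position.
        # If the position falls inside several overlapping areas, a preset preference
        # order is used when given, otherwise simply the first match, mirroring the
        # "preset selection rule or random choice" mentioned above.
        x, y = prompt_xy
        matches = [a for a in areas if a.contains(x, y)]
        if not matches:
            return None
        if preferred_order:
            matches.sort(key=lambda a: preferred_order.index(a.source_id)
                         if a.source_id in preferred_order else len(preferred_order))
        return matches[0].source_id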
Optionally, the head-up display system can display content fitted to the scene based on the AR (Augmented Reality) principle. Specifically, when the projection position at which an external object is projected onto the reflection device 50 lies within the projection imaging area, the external object is taken as a target object; the projection imaging area is the area of the surface of the reflection device 50 on which the projection imaging light emitted from the projection image source 30 can be incident. Meanwhile, the projection position, or the edge of the projection position lying within the projection imaging area, is taken as the prompt position.
In the embodiment of the present invention, an external object is an object located outside the reflection device 50; it includes stationary objects such as the road surface and road signs, and may also include movable objects such as motor vehicles, pedestrians, animals and non-motor vehicles. The external object can be projected onto the reflection device 50: specifically, the external object maps, along the direction toward the eye box range, onto a certain position of the reflection device 50, and that position is the projection position. In other words, the external object, the projection position and the eye box range are collinear, so that from the eye box range the driver sees the external object through the projection position. If the projection position lies within the projection imaging area, the projection image source 30 can be used as the target image source, and the external object can be used as the target object for AR display. Specifically, the projection position, or the edge of the projection position lying within the projection imaging area, is used as the prompt position, and the target image source (i.e. the projection image source 30) is controlled to display the prompt content at that prompt position. Because the prompt position coincides with the projection position of the external object, the external object, the prompt content displayed on the reflection device 50 and the eye box range are collinear; a driver at the eye box range therefore sees the prompt content fitted to the external object (for example, framing the external object), which serves as an effective reminder.
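The three-point collinearity just described (external object, projection position, eye box) amounts to intersecting the sight line from the eye box to the object with the reflection device. The sketch below models the reflection device as a flat plane, which is a simplification of a curved windshield; the function name and the coordinate frame are assumptions for illustration, not part of the patent.

    def projection_position(eye_pos, object_pos, plane_point, plane_normal):
        # Point where the line from the eye box position to the external object
        # crosses the reflection device, modelled here as a flat plane (an
        # assumption; a real windshield is curved). This point is the projection
        # position used as the prompt position above.
        direction = tuple(o - e for o, e in zip(object_pos, eye_pos))
        denom = sum(d * n for d, n in zip(direction, plane_normal))
        if abs(denom) < 1e-9:
            return None  # sight line parallel to the plane
        t = sum((p - e) * n for p, e, n in zip(plane_point, eye_pos, plane_normal)) / denom
        return tuple(e + t * d for e, d in zip(eye_pos, direction))

    # Assumed coordinates in metres: windshield plane x = 1.0 with normal (1, 0, 0),
    # eye box at the origin at height 1.2 m, pedestrian 20 m ahead and 2 m to the left.
    print(projection_position((0.0, 0.0, 1.2), (20.0, -2.0, 1.0),
                              (1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))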
Optionally, when the projection position at which the external object is projected onto the reflection device 50 lies within the magnified imaging area, the external object is taken as the target object and the target distance to the target object is determined; the magnified imaging area is the area of the surface of the reflection device 50 on which imaging light from the image source group can be incident. Meanwhile, the projection position, or the edge of the projection position lying within the magnified imaging area, is used as the prompt position. The image distance of each image source in the image source group is determined, the image distance that matches the target distance is taken as the target image distance, and the image source corresponding to the target image distance is taken as the target image source; the image distance of an image source is the distance between the virtual image of that image source formed by the curved mirror 20 and the curved mirror 20.
In the embodiment of the invention, similarly to the case in which the projection position of the external object lies within the projection imaging area, if the projection position of the external object lies within the magnified imaging area, the external object can likewise be used as the target object for AR display, and the corresponding image source in the image source group can be used as the target image source. Since the image source group contains a plurality of image sources, this embodiment determines which of them is the target image source from the distance between the external object and the head-up display system (the target distance), which may be taken as the distance between the external object and the reflection device 50.
Specifically, because different image sources in the image source group have different object distances, by the imaging rule they also have different image distances, i.e. the distances between the virtual images they form and the curved mirror 20 differ; these image distances correspond to different magnified imaging positions outside the reflection device 50, for example the first magnified imaging position 111, the second magnified imaging position 121 and the third magnified imaging position 131 in fig. 4, and the larger the image distance, the farther the corresponding magnified imaging position. After the target distance of the target object is determined, the magnified imaging position closest to the target object can be found, and the image source corresponding to that position is taken as the target image source. For example, if the external object is near the second magnified imaging position 121, the second image source 12 may be the target image source. A distance range can also be assigned to each image source according to its image distance; the image distance matching the target distance is then determined by which range the target distance falls into, which in turn determines which image source is used as the target image source.
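A minimal sketch of this distance-matching step follows; the source identifiers and the image-distance values are assumptions chosen only to illustrate the rule (pick the image source whose virtual-image position is closest to the target object), not figures from the patent.

    def select_source_by_distance(target_distance_m, image_distances_m):
        # image_distances_m maps each candidate image source to the distance of its
        # virtual image from the reflection device; return the closest match.
        return min(image_distances_m,
                   key=lambda source: abs(image_distances_m[source] - target_distance_m))

    # Assumed image distances for the three magnified imaging positions of FIG. 4:
    candidates = {"first image source 11": 3.0,
                  "second image source 12": 10.0,
                  "third image source 13": 30.0}
    print(select_source_by_distance(12.5, candidates))  # -> "second image source 12"
    # The same rule extends to the projection image source 30 when its image
    # distance is added to the candidate dictionary.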
Furthermore, when the projection position at which the external object is projected onto the reflection device 50 lies within both the magnified imaging area and the projection imaging area, the external object may likewise be taken as the target object, and the target distance to it is determined; the projection position, or the edge of the projection position, is used as the prompt position. The image distance of the projection image source 30 and the image distance of each image source in the image source group are then determined, the image distance matching the target distance is taken as the target image distance, and the projection image source 30 or the image source in the image source group corresponding to the target image distance is taken as the target image source. The image distance of the projection image source 30 is the distance between the projection imaging position 301 and the reflection device 50.
In the embodiment of the present invention, the most suitable image source is determined as the target image source based on the target distance of the target object, so that the distance difference between the virtual image of the target image source outside the reflection device 50 and the target object is minimized, the virtual image and the target object can be better attached, the parallax can be effectively reduced, and the effect of augmented reality display can be improved.
On the basis of the above embodiment, as shown in fig. 7, the head-up display system further includes a first transflective element 21; the first transflective element 21 is capable of transmitting light having a first characteristic and reflecting light having a second characteristic. The first image source 11 is arranged on one side of the first transflective element 21, and the second image source 12 and the curved mirror 20 are arranged on the other side of the first transflective element 21; the first imaging light has a first characteristic and the second imaging light has a second characteristic.
In the embodiment of the present invention, to prevent the first image source 11 and the second image source 12 from interfering with each other when imaging, the first transflective element 21 is used to adjust the position of one of the image sources in the image source group so that the imaging light it emits is not blocked by another image source; fig. 7 takes changing the position of the second image source 12 as an example. The first transflective element 21 transmits light with the first characteristic, so the first imaging light can pass through normally and be incident on the curved mirror 20, allowing the first image source 11 to image normally; at the same time, the first transflective element 21 reflects light with the second characteristic, so the second imaging light is reflected by the first transflective element 21 and then incident on the curved mirror 20 for imaging. The first characteristic and the second characteristic may be two different characteristics; a "characteristic" in this embodiment is a property of the light, such as a polarization characteristic or a wavelength characteristic. For example, the first transflective element may transmit polarized light with a first polarization direction and reflect polarized light with a second polarization direction, the two directions being perpendicular to each other; the first image source 11 then emits first imaging light with the first polarization direction and the second image source 12 emits second imaging light with the second polarization direction, so that the two image sources image without affecting each other. The first transflective element in this embodiment may specifically be a reflective polarizer mirror (RPM) film or a dual brightness enhancement film (DBEF).
Alternatively, the first characteristic and the second characteristic may be the same characteristic, with the first transflective element being a semi-transmissive, semi-reflective medium, that is, one whose transmittance and reflectance are both 50%. In that case, half of the first imaging light emitted from the first image source 11 is transmitted and half is reflected when it reaches the first transflective element 21, so half of the first imaging light reaches the curved mirror 20; correspondingly, when the second imaging light emitted from the second image source 12 reaches the first transflective element 21, half of it is reflected toward the curved mirror 20, so both the first image source 11 and the second image source 12 can image.
Furthermore, those skilled in the art will understand that "the first imaging light has the first characteristic" may mean that the first imaging light has only the first characteristic, or that part of the first imaging light has the first characteristic while it also has other characteristics, even the second characteristic. For instance, if the first image source 11 emits first imaging light that is natural light, that light can be decomposed into polarized light with the first polarization characteristic and polarized light with the second polarization characteristic, i.e. the first imaging light has both the first and the second characteristic; the part of the first imaging light with the first characteristic can still pass through the first transflective element 21, i.e. a portion of the first imaging light is still incident on the curved mirror 20, and the imaging of the first image source 11 is not affected. Likewise, because light can be decomposed, saying that a transflective element can transmit light with a certain characteristic means either that it transmits only light with that characteristic or that it transmits the component of the light having that characteristic; a transflective element that reflects light of a certain characteristic is understood in the same way. For example, if the first transflective element 21 transmits horizontally polarized light and reflects vertically polarized light, and the first imaging light is polarized at 45 degrees to the horizontal, the first imaging light can be decomposed into a horizontally polarized component and a vertically polarized component, and the horizontally polarized component passes through the first transflective element 21; it can then still be said that "the first transflective element 21 can transmit light with the first characteristic". In addition, the first characteristic and the second characteristic in this embodiment may be characteristics of the same kind, for example both polarization characteristics, or of different kinds, for example the first characteristic a polarization characteristic and the second characteristic a wavelength characteristic, depending on the transflective element chosen.
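For linearly polarized light, the component decomposition described above follows Malus's law at an ideal polarization-splitting transflective element. The sketch below is an idealized model (no absorption, perfect extinction) that the patent does not spell out; it only illustrates how the 45-degree example splits evenly.

    import math

    def split_at_polarizing_transflector(polarization_deg, transmit_axis_deg=0.0):
        # Idealized reflective polarizer: transmits the component along its
        # transmission axis and reflects the orthogonal component.
        # Returns (transmitted_fraction, reflected_fraction) of the incident power.
        theta = math.radians(polarization_deg - transmit_axis_deg)
        transmitted = math.cos(theta) ** 2
        return transmitted, 1.0 - transmitted

    print(split_at_polarizing_transflector(0.0))   # fully transmitted
    print(split_at_polarizing_transflector(90.0))  # (almost exactly) fully reflected
    print(split_at_polarizing_transflector(45.0))  # (0.5, 0.5): the 45-degree case above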
Optionally, the head-up display system further comprises a second transflective element 22, and the image source group further comprises a third image source 13; the third image source 13 is used for emitting third imaging light with a third characteristic incident on the curved mirror 20. The third object distance corresponding to the third image source 13 differs from both the first object distance and the second object distance, the third object distance being the length of the propagation path of the third imaging light from the third image source 13 to the curved mirror 20.
In this embodiment, the second transflective element 22 is capable of transmitting light having the first characteristic and reflecting light having the third characteristic; the first transflective element 21 is also capable of transmitting light having a third characteristic. The second transflective element 22 is arranged between the first image source 11 and the first transflective element 21, and the third image source 13 is arranged on the same side of the second transflective element 22 as the first transflective element 21; see in particular fig. 8.
Alternatively, the second transflective element 22 is capable of transmitting light having the second characteristic and reflecting light having the third characteristic; the first transflective element 21 is also capable of reflecting light rays having a third characteristic; the second transflective element 22 is arranged between the second image source 12 and the first transflective element 21, and the third image source 13 is arranged on the same side of the second transflective element 22 as the first transflective element 21; see in particular fig. 9.
In the embodiment of the present invention, the object distance of the third image source 13 (i.e. the third object distance) differs from the object distances of the first image source 11 and the second image source 12, so the three image sources can image at different positions outside the reflection device 50, for example at the three magnified imaging positions 111, 121 and 131 shown in fig. 4, thereby realizing multi-level imaging. Meanwhile, the third characteristic of the third imaging light may be any characteristic different from both the first characteristic and the second characteristic.
As shown in fig. 8, it is assumed that the second transflective element 22 can transmit light with the first polarization direction and reflect light with the third polarization direction, and at the same time, the first transflective element 21 can transmit light with the fourth polarization direction and reflect light with the second polarization direction; the first polarization direction and the third polarization direction are not perpendicular to the fourth polarization direction. The first image light emitted from the first image source 11 has a first polarization direction, and the first image light can transmit through the second transflective element 22 and is incident on the first transflective element 21; since the first polarization direction and the fourth polarization direction are not perpendicular, the first imaging light can decompose a part of light in the fourth polarization direction, so that the part of light can penetrate through the first transflective element 21, that is, a part of the first imaging light can penetrate through the first transflective element 21, that is, the first transflective element 21 capable of penetrating through the light in the fourth polarization direction can also be regarded as light capable of penetrating through the first polarization direction (that is, light capable of penetrating through the first characteristic), and only a part of the light can be penetrated through by the first transflective element 21; similarly, the third image light emitted from the third image source 13 has the third polarization direction, and the third image light can also pass through a part of the first transflective element 21, i.e. can pass through the component of the fourth polarization direction, so that the first transflective element 21 can also pass through the light with the third characteristic. Meanwhile, the second imaging light emitted by the second image source 12 has a second polarization direction, which can be reflected by the first transflective element 21, thereby realizing respective imaging of the three image sources.
Alternatively, the first characteristic, the second characteristic and the third characteristic may be three different wavelength bands. For example, in fig. 8, the second transflective element 22 may transmit light of the first wavelength band and reflect light of the third wavelength band, while the first transflective element 21 may reflect light of the second wavelength band and transmit light of the other wavelength bands (including the first wavelength band and the third wavelength band); based on these two transflective elements, the imaging light emitted from the three image sources can all be incident on the curved mirror 20, so that each images separately. The imaging principle of the embodiment shown in fig. 9 is substantially similar to that of fig. 8, except that in fig. 9 transflective elements with different properties are selected, that is, the first transflective element 21 can transmit light with the first characteristic and reflect light with the second and third characteristics, while the second transflective element 22 can transmit light with the second characteristic and reflect light with the third characteristic. The scheme shown in fig. 9 is therefore not described in detail here.
In addition, it should be noted that the three image sources in the present embodiment have different object distances, that is, the lengths of the propagation paths of the imaging light to the curved mirror 20 are different, where the "length of the propagation path" is the length of the path of the light from the starting point to the end point, and if the light is directly incident from the starting point to the end point, the length of the propagation path is the distance between the starting point and the end point; if the light is incident to the end point after being reflected for multiple times, the length of the propagation path is the sum of the lengths of the light reaching each reflection point in sequence. As shown in fig. 9, the first imaging light emitted from the first image source 11 can be directly incident on the curved mirror 20, so the first object distance is the distance between the first image source 11 and the curved mirror 20; the second image light emitted from the second image source 12 first reaches the first transflective element 21, and is reflected by the first transflective element 21 before being incident on the curved mirror, so the second object distance of the second image source 12 can be the distance between the second image source 12 and the first transflective element 21, plus the distance between the first transflective element 21 and the curved mirror 20. Accordingly, the third object distance of the third image source 13 may be the sum of the distance between the third image source and the second transflective element 22, the distance between the second transflective element 22 and the first transflective element 21, and the distance between the first transflective element 21 and the curved mirror.
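To make the "length of the propagation path" concrete, the folded optical path can be summed segment by segment between the source, each reflection point and the curved mirror 20. A minimal sketch, using made-up 2-D coordinates rather than any geometry from this embodiment:

```python
from math import dist

def path_length(*points):
    """Propagation path length: sum of the straight segments between consecutive
    points (image source, any reflection points, then the curved mirror)."""
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

# Made-up 2-D layout in arbitrary units.
first_image_source = (0.0, 0.0)
second_image_source = (0.0, 5.0)
first_transflective_element = (3.0, 0.0)
curved_mirror = (3.0, -4.0)

d1 = path_length(first_image_source, curved_mirror)                                 # direct path
d2 = path_length(second_image_source, first_transflective_element, curved_mirror)   # folded once
print(d1, d2)   # different object distances -> images formed at different depths
```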
Optionally, in order to reduce the volume of the head-up display system, in the present embodiment the distance between an image source in the image source group and the curved mirror 20 is changed by means of a plane mirror 23 in a mirror group. As shown in fig. 10, the head-up display system further includes a mirror group including one or more plane mirrors 23; the plane mirror 23 is used to reflect the imaging light emitted from the image source group to the curved mirror 20.
In the embodiment of the present invention, the plane mirror 23 is disposed on the propagation path of the imaging light to change that path, so that the imaging light is delivered to the curved mirror 20 by reflection.
Specifically, the mirror group includes a plane mirror 23, and the plane mirror 23 is used to reflect the imaging light emitted from each image source in the image source group to the curved mirror 20. In this embodiment, a plurality of image sources may share one plane mirror 23, and as shown in fig. 10, the first image source 11 and the second image source 12 share one plane mirror 23.
Alternatively, the mirror group includes a plurality of plane mirrors 23 that correspond one-to-one to the image sources in the image source group; each plane mirror 23 is used to reflect the imaging light emitted from its corresponding image source to the curved mirror 20. In this embodiment, different image sources may use different plane mirrors 23; as shown in fig. 11 and 12, the first image source 11 and the second image source 12 each use their own corresponding plane mirror 23. In this case, one total image source may be provided, and the image distances corresponding to different areas of the total image source are changed by placing the plane mirrors 23 at different positions, so that the total image source is in effect divided into a plurality of image sources. As shown in fig. 11, the total image source is divided into the first image source 11 and the second image source 12, and because the plane mirrors 23 corresponding to the first image source 11 and the second image source 12 are located at different positions, the first object distance of the first image source 11 is different from the second object distance of the second image source 12.
When the transflective element is included, the optical path may be changed by the plane mirror 23. Specifically, when the plane mirror 23 is added to the embodiment shown in fig. 7 to 9, the structure thereof can be seen in fig. 13 to 15.
On the basis of the above embodiment, the surface of the main optical axis control element 41 of the light control device is provided with a plurality of reflection structures, and the multiple paths of projection imaging light are converged to the same observation range 61 based on the reflection structures. Specifically, the main optical axis control element 41 may include a plurality of discrete first reflective structures 411, where the first reflective structures 411 are configured to reflect a path of the projected imaging light to the observation range 61; each of the first reflective structures 411 is similar to a micro mirror, and can reflect a path of projection image light emitted from the projection image source 30 to the observation area 61. In this embodiment, the one-path light refers to light with the same incident angle or the incident angle within a predetermined range.
In this embodiment, a point (x, y, z) on the first reflective structure 411 satisfies the following system of equations:

$$\vec{P}_{\perp} = \frac{\overrightarrow{M_0 P_1}}{\left|\overrightarrow{M_0 P_1}\right|} + \frac{\overrightarrow{M_0 P_2}}{\left|\overrightarrow{M_0 P_2}\right|}, \qquad P_{\perp,x}(x - x_0) + P_{\perp,y}(y - y_0) + P_{\perp,z}(z - z_0) = 0$$

where P1 is the coordinate of the location of the projection image source 30, P2 is the coordinate of the observation range 61, M0(x0, y0, z0) is the coordinate of a known point on the first reflective structure 411, and P⊥ denotes the normal vector of the first reflective structure 411.
In the embodiment of the present invention, the plane of each first reflective structure 411 is determined by the position of the projection image source 30, the observation range 61 to which the projection imaging light is reflected, and the position of the first reflective structure 411 itself. Specifically, fig. 16 illustrates one first reflective structure 411 in the main optical axis control element 41. In fig. 16, the projection image source 30 is located at point P1 and the observation range 61 is located at point P2. For the first reflective structure 411, the normal (i.e. the dashed line in fig. 16) is perpendicular to the plane of the first reflective structure 411, i.e. the normal is the perpendicular vector of that plane; in the spatial coordinate system this perpendicular vector is written as P⊥ = (P⊥,x, P⊥,y, P⊥,z). Meanwhile, the incident angle and the exit angle of the light incident on the first reflective structure 411 (i.e. the projection imaging light) are equal. Let M0(x0, y0, z0) in fig. 16 be a known point on the first reflective structure 411; the perpendicular vector then lies on the bisector of the angle between the vectors from M0 to P1 and from M0 to P2, so that:

$$\vec{P}_{\perp} = \frac{\overrightarrow{M_0 P_1}}{\left|\overrightarrow{M_0 P_1}\right|} + \frac{\overrightarrow{M_0 P_2}}{\left|\overrightarrow{M_0 P_2}\right|}$$
Meanwhile, since M0 is a point on the first reflective structure 411 with known coordinates, for any point M(x, y, z) on the first reflective structure 411 the vector from M0 to M is perpendicular to the normal vector P⊥, so that

$$\vec{P}_{\perp} \cdot \overrightarrow{M_0 M} = 0$$

namely:

$$P_{\perp,x}(x - x_0) + P_{\perp,y}(y - y_0) + P_{\perp,z}(z - z_0) = 0$$
Therefore, for the discrete first reflective structures 411, the reflective surface of the first reflective structure 411 (i.e. the plane of the first reflective structure 411) can be determined by its normal vector P⊥ and a known point M0 on the reflective surface. Meanwhile, the first reflective structure 411 is a micro-structure, that is, the point (x, y, z) on the first reflective structure 411 only needs to be determined within a very small value range; in other words, within the corresponding value range the point (x, y, z) on the first reflective structure 411 satisfies the following system of equations:

$$\vec{P}_{\perp} = \frac{\overrightarrow{M_0 P_1}}{\left|\overrightarrow{M_0 P_1}\right|} + \frac{\overrightarrow{M_0 P_2}}{\left|\overrightarrow{M_0 P_2}\right|}, \qquad P_{\perp,x}(x - x_0) + P_{\perp,y}(y - y_0) + P_{\perp,z}(z - z_0) = 0$$

where P1 is the coordinate of the location of the projection image source 30, P2 is the coordinate of the observation range 61, M0(x0, y0, z0) is the coordinate of a known point on the first reflective structure 411, P⊥ denotes the normal vector of the first reflective structure 411, and P⊥,x, P⊥,y, P⊥,z denote the components of P⊥ along the x, y and z axes.
For each first reflective structure 411 of the main optical axis control element 41, a known point on that structure may be determined; in combination with the position P1 of the projection image source 30 and the position P2 of the observation range 61, the normal vector of each first reflective structure 411 can then be determined, and thereby its reflective surface. The known point M0 may be the center point of the first reflective structure 411, a point on the intersection line of the first reflective structure 411 with the plane of the main optical axis control element 41, or another preset point on the first reflective structure 411, which is not limited in this embodiment.
Meanwhile, the value range of the point (x, y, z) may specifically be:

$$x_1 \le x \le x_2, \qquad y_1 \le y \le y_2, \qquad z_1 \le z \le z_2$$

where x1, x2, y1, y2, z1, z2 are predetermined values determined by the placing position of the first reflective structure 411, and the values corresponding to different first reflective structures 411 are not completely the same. For example, along the x-axis, if the x component of the position of one first reflective structure 411 lies between 1 and 1.5, then for that structure x1 = 1 and x2 = 1.5; if the x component of the position of another first reflective structure 411 lies between 1.5 and 1.9, then for that structure x1 = 1.5 and x2 = 1.9. Here, "not completely the same" means that, of the six values x1, x2, y1, y2, z1, z2, the six values corresponding to two different first reflective structures 411 are not identical, that is, at least one and possibly all of the six values differ.
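As an illustration of how one such discrete facet could be computed numerically, the sketch below builds the bisector normal from M0, P1 and P2 and checks the plane equation; the coordinates are made-up values in arbitrary units, not parameters taken from this embodiment:

```python
import numpy as np

def facet_normal(m0, p1, p2):
    """Normal vector of a first reflective structure 411: the bisector of the unit
    vectors from M0 toward the projection image source P1 and the observation range P2."""
    u1 = (p1 - m0) / np.linalg.norm(p1 - m0)
    u2 = (p2 - m0) / np.linalg.norm(p2 - m0)
    return u1 + u2

def on_facet_plane(m, m0, normal, tol=1e-9):
    """Plane equation P_x*(x-x0) + P_y*(y-y0) + P_z*(z-z0) = 0 for a point M."""
    return abs(np.dot(normal, m - m0)) < tol

# Made-up positions in arbitrary units.
P1 = np.array([0.0, 0.0, 0.0])   # projection image source 30
P2 = np.array([4.0, 0.0, 3.0])   # observation range 61
M0 = np.array([2.0, 0.0, 1.0])   # known point on the facet

normal = facet_normal(M0, P1, P2)
print(normal)
print(on_facet_plane(M0, M0, normal))   # True: M0 itself lies on the facet plane
```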
Alternatively, the main optical axis control element 41 includes a plurality of consecutive second reflecting structures 412, and the second reflecting structures 412 are configured to reflect the plurality of paths of projection imaging light rays to the observation range 61;
the included angle θ between the second reflective structure 412 and the plane of the main optical axis control element 41 satisfies:

$$\theta = \arccos\frac{\vec{n} \cdot \vec{P}_{\perp 0}}{\left|\vec{n}\right|\left|\vec{P}_{\perp 0}\right|}, \qquad \vec{P}_{\perp 0} = \frac{\overrightarrow{M_0 P_1}}{\left|\overrightarrow{M_0 P_1}\right|} + \frac{\overrightarrow{M_0 P_2}}{\left|\overrightarrow{M_0 P_2}\right|}$$

where n denotes the normal vector of the plane of the main optical axis control element 41; P1 is the coordinate of the location of the projection image source 30, P2 is the coordinate of the observation range 61, M0(x0, y0, z0) is the coordinate of a known point on the intersection line of the second reflective structure 412 with the plane of the main optical axis control element 41, and P⊥0 denotes the normal vector of the second reflective structure 412 at the point M0.

A point M(x, y, z) on the intersection line of the second reflective structure 412 with the plane of the main optical axis control element 41 satisfies the following system of equations:

$$A(x - x_0) + B(y - y_0) + C(z - z_0) = 0, \qquad \cos\theta = \frac{\vec{n} \cdot \vec{P}_{\perp M}}{\left|\vec{n}\right|\left|\vec{P}_{\perp M}\right|}, \qquad \vec{P}_{\perp M} = \frac{\overrightarrow{M P_1}}{\left|\overrightarrow{M P_1}\right|} + \frac{\overrightarrow{M P_2}}{\left|\overrightarrow{M P_2}\right|}$$

where n = (A, B, C) and P⊥M denotes the normal vector of the second reflective structure 412 at the point M.
In this embodiment, the second reflective structure 412 is a continuous structure, that is, the main optical axis control element 41 includes a plurality of continuous second reflective structures 412, and each of the second reflective structures 412 is configured to reflect the multiple paths of projection imaging light rays emitted from the projection image source 30 to the observation range 61.
Specifically, the second reflective structure 412 is a continuous free-form surface, and the included angle between the free-form surface and the plane of the main optical axis control element 41 is a fixed value θ. Referring to fig. 17, the upper half of fig. 17 shows a schematic front view of the light control device, and the lower half shows a schematic top view of the light control device. The second reflective structure 412 intersects the main optical axis control element 41, and the intersection line is a free curve, i.e. the curve on which the points M and M0 lie in the lower half of fig. 17.
In this embodiment, a known point M0 on the intersection line of the second reflective structure 412 and the main optical axis control element 41 is preset first, and M0 has the coordinates (x0, y0, z0). Similar to the embodiment of fig. 16, the projection image source 30 is located at point P1 and the observation range 61 is located at point P2. For the second reflective structure 412, since it is a free-form surface it has no unique normal; however, at the known point M0 the normal vector P⊥0 of the second reflective structure 412 is:

$$\vec{P}_{\perp 0} = \frac{\overrightarrow{M_0 P_1}}{\left|\overrightarrow{M_0 P_1}\right|} + \frac{\overrightarrow{M_0 P_2}}{\left|\overrightarrow{M_0 P_2}\right|}$$
Meanwhile, for the plane of the main optical axis control element 41, let its normal vector n be (A, B, C), i.e. n = (A, B, C), where A, B and C denote the components of n along the x, y and z axes. According to the geometric relationship, the included angle between the normal vector P⊥0 and the normal vector n equals the included angle θ between the second reflective structure 412 and the plane of the main optical axis control element 41. The included angle θ can therefore be determined from the normal vector n of the plane of the main optical axis control element 41 and the normal vector P⊥0 of the second reflective structure 412 at the point M0. That is, according to the dot product formula,

$$\vec{n} \cdot \vec{P}_{\perp 0} = \left|\vec{n}\right|\left|\vec{P}_{\perp 0}\right|\cos\theta$$

Therefore, the included angle θ between the second reflective structure 412 and the plane of the main optical axis control element 41 satisfies:

$$\theta = \arccos\frac{\vec{n} \cdot \vec{P}_{\perp 0}}{\left|\vec{n}\right|\left|\vec{P}_{\perp 0}\right|}$$

where n denotes the normal vector of the plane of the main optical axis control element 41; P1 is the coordinate of the location of the projection image source 30, P2 is the coordinate of the observation range 61, M0(x0, y0, z0) is the coordinate of a known point on the intersection line of the second reflective structure 412 with the plane of the main optical axis control element 41, and P⊥0 denotes the normal vector of the second reflective structure 412 at the point M0.
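The same construction gives the fixed included angle θ numerically: take the bisector normal at M0 and the plane normal n = (A, B, C) and apply the arccos formula above. A minimal sketch with made-up geometry:

```python
import numpy as np

def bisector_normal(m, p1, p2):
    """Normal of the reflective surface at point m: bisector of the unit vectors toward P1 and P2."""
    u1 = (p1 - m) / np.linalg.norm(p1 - m)
    u2 = (p2 - m) / np.linalg.norm(p2 - m)
    return u1 + u2

def included_angle(n_plane, m0, p1, p2):
    """theta = arccos( n . P_perp0 / (|n| |P_perp0|) )."""
    p_perp0 = bisector_normal(m0, p1, p2)
    c = np.dot(n_plane, p_perp0) / (np.linalg.norm(n_plane) * np.linalg.norm(p_perp0))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Made-up geometry in arbitrary units.
n_plane = np.array([0.0, 0.0, 1.0])   # normal (A, B, C) of the control-element plane
P1 = np.array([0.0, -1.0, 2.0])       # projection image source 30
P2 = np.array([3.0, 1.0, 4.0])        # observation range 61
M0 = np.array([1.0, 0.0, 0.0])        # known point on the intersection line

theta = included_angle(n_plane, M0, P1, P2)
print(np.degrees(theta))
```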
After the included angle θ and the intersection line of the second reflective structure 412 with the plane of the main optical axis control element 41 (i.e. the curve on which the points M and M0 lie in the lower half of fig. 17) are determined, the free-form surface of the second reflective structure 412 can be determined.
Specifically, referring to fig. 17, for any point M (x, y, z) on the intersection line of the second reflecting structure 412 and the plane of the main optical axis control element 41, the point M is located in the plane of the main optical axis control element 41, so that:
A(x-x0)+B(y-y0)+C(z-z0)=0
Meanwhile, the normal vector P⊥M of the second reflective structure 412 at the point M is

$$\vec{P}_{\perp M} = \frac{\overrightarrow{M P_1}}{\left|\overrightarrow{M P_1}\right|} + \frac{\overrightarrow{M P_2}}{\left|\overrightarrow{M P_2}\right|}$$

and the included angle between this normal vector P⊥M and the normal vector n of the plane of the main optical axis control element 41 is also θ, so that

$$\cos\theta = \frac{\vec{n} \cdot \vec{P}_{\perp M}}{\left|\vec{n}\right|\left|\vec{P}_{\perp M}\right|}$$
In addition, a preset value range exists on the plane where the main optical axis control element 41 is located. Therefore, within this preset value range, a point M(x, y, z) on the intersection line of the second reflective structure 412 with the plane of the main optical axis control element 41 satisfies the following system of equations:

$$A(x - x_0) + B(y - y_0) + C(z - z_0) = 0, \qquad \cos\theta = \frac{\vec{n} \cdot \vec{P}_{\perp M}}{\left|\vec{n}\right|\left|\vec{P}_{\perp M}\right|}$$

where P⊥M denotes the normal vector of the second reflective structure 412 at the point M. The preset value range of the point M(x, y, z) may specifically be:

$$x_v \le x \le x_u, \qquad y_v \le y \le y_u, \qquad z_v \le z \le z_u$$

where xv, xu, yv, yu, zv, zu are boundary values determined by the size of the main optical axis control element 41.
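A point M on the intersection line can then be tested numerically against the two conditions above (in-plane, and the same included angle θ). A minimal membership test under made-up geometry assumptions; in practice the curve would be traced by scanning candidate points within the value range:

```python
import numpy as np

def bisector_normal(m, p1, p2):
    u1 = (p1 - m) / np.linalg.norm(p1 - m)
    u2 = (p2 - m) / np.linalg.norm(p2 - m)
    return u1 + u2

def angle_with_plane_normal(m, n_plane, p1, p2):
    p_perp = bisector_normal(m, p1, p2)
    c = np.dot(n_plane, p_perp) / (np.linalg.norm(n_plane) * np.linalg.norm(p_perp))
    return np.arccos(np.clip(c, -1.0, 1.0))

def on_intersection_curve(m, m0, n_plane, theta, p1, p2, tol=1e-6):
    """M lies on the intersection line iff it is in the control-element plane
    through M0 and its bisector normal makes the angle theta with n."""
    in_plane = abs(np.dot(n_plane, m - m0)) < tol
    same_angle = abs(angle_with_plane_normal(m, n_plane, p1, p2) - theta) < tol
    return in_plane and same_angle

# Same kind of made-up geometry as in the previous sketch.
n_plane = np.array([0.0, 0.0, 1.0])
P1, P2 = np.array([0.0, -1.0, 2.0]), np.array([3.0, 1.0, 4.0])
M0 = np.array([1.0, 0.0, 0.0])
theta = angle_with_plane_normal(M0, n_plane, P1, P2)
print(on_intersection_curve(M0, M0, n_plane, theta, P1, P2))  # True by construction
```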
In this embodiment, the second reflective structure 412 is a continuous free-form surface, and the free-form surface of the second reflective structure 412 can be accurately determined from the fixed included angle θ between the second reflective structure 412 and the main optical axis control element 41 together with the intersection line between the two. Meanwhile, another known point M0 may be determined for another second reflective structure 412, and the corresponding included angle θ and intersection line determined accordingly. Different second reflective structures 412 have different included angles θ, and the intersection lines between the second reflective structures 412 and the main optical axis control element 41 also differ. For the main optical axis control element 41, intersection lines of different forms are therefore distributed over its plane. Referring to fig. 18, two second reflective structures 412 correspond to different included angles θ1 and θ2, and the two included angles correspond to the tracks L1 and L2 of different intersection lines.
Meanwhile, after the included angle and the intersection line of the continuous second reflection structure 412 are determined, when the second reflection structure 412 on the main optical axis control element 41 is manufactured and processed, the included angle can be fixed by a processing machine, and then the processing can be carried out along the track of the intersection line, so that the processing technology is simple; meanwhile, if the processing depth of the second reflective structures 412 (or the height of the second reflective structures 412) is the same, since the included angle θ of the second reflective structures 412 is fixed, the distance between two adjacent intersecting lines is also fixed, and the distribution of the second reflective structures 412 is more uniform.
On the basis of the above-described embodiment, referring to fig. 19, the head-up display system further includes an information collection device 200, the information collection device 200 being communicatively connected to the driving assistance controller 100; the information collection device 200 is configured to collect current driving information and environmental information, and send the collected driving information and environmental information to the driving assistance controller 100. The driving assistance controller 100 is specifically configured to: acquire the driving information and the environmental information, and generate prompt content according to the driving information and the environmental information.
In the embodiment of the present invention, the information collecting device 200 may collect driving information related to the current driving state of the vehicle or related to the driver, and may also collect environmental information around the outside of the vehicle, so that the driving assistance controller 100 may generate corresponding prompt content based on the driving information and the environmental information. The information acquisition device may specifically include one or more of an image acquisition device, a Vehicle-mounted radar, an infrared sensor, a laser sensor, an ultrasonic sensor, a rotational speed sensor, an angular velocity sensor, a GPS (Global Positioning System), a V2X (Vehicle to X, which represents information exchange between a Vehicle and the outside), and an ADAS (Advanced Driving assistance System). Different information acquisition devices can be installed at different positions based on the requirements of the information acquisition devices, and the detailed description is omitted here.
On the basis of the above embodiment, the head-up display system provided by the embodiment of the invention can be arranged on a vehicle, and the prompt content to be displayed is determined based on the speed of the vehicle. Specifically, the driving information acquired by the information acquisition device includes local vehicle speed information, which represents the speed of the vehicle; meanwhile, the information acquisition device can also monitor objects outside the vehicle, i.e. external objects, and determine the distance to them. Specifically, the information acquisition device may include a speed sensor or a rotational speed sensor disposed on a wheel, from which the local vehicle speed information can be determined; alternatively, the vehicle speed information can be read through a data transmission system of the vehicle, such as the On-Board Diagnostics (OBD) interface, so that the local vehicle speed information can be determined; or the vehicle speed is measured by the speed-measuring function of an auxiliary device arranged inside the vehicle, such as a driving recorder, a radar detector ("electronic dog"), a smartphone and the like, so as to determine the local vehicle speed information of the vehicle. Meanwhile, the information acquisition device can further include an image acquisition device, a vehicle-mounted radar, or a distance sensor (such as an infrared distance sensor, a laser distance sensor, an ultrasonic distance sensor and the like), so that the current distance between an external object and the vehicle can be determined; this current distance serves as the target distance, i.e. a target image source can be selected from the plurality of image sources based on the current distance, so as to realize the fit display.
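One way to read the final selection step is as a nearest match between the measured target distance and the imaging distances of the multi-level image sources. A minimal sketch under that assumption; the image-source names and distances are illustrative, not values from this embodiment:

```python
def select_target_image_source(target_distance_m: float,
                               imaging_distances_m: dict) -> str:
    """Return the image source whose virtual-image distance is closest to the
    measured distance of the external object, so the prompt 'fits' onto it."""
    return min(imaging_distances_m,
               key=lambda src: abs(imaging_distances_m[src] - target_distance_m))

# Hypothetical imaging distances for a three-level HUD (metres).
sources = {"first_image_source": 2.5,
           "second_image_source": 10.0,
           "third_image_source": 50.0}
print(select_target_image_source(45.0, sources))  # -> "third_image_source"
```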
After the driving assistance controller 100 acquires the local vehicle speed information and the current distance to the external object, as shown in fig. 20, generating the prompt content according to the driving information and the environmental information includes:
step S101: and determining the current safe distance according to the local vehicle speed information, and judging whether the current distance is greater than the safe distance.
In the embodiment of the invention, the safe distance is a critical value for safe driving of the vehicle. A correspondence between vehicle speed and safe distance can be preset, and the current local vehicle speed information can be mapped to the corresponding safe distance based on this correspondence. For example: when the vehicle speed v is not less than 60 km/h, the safe distance S in metres is numerically equal to the vehicle speed v, so a speed of 110 km/h gives a safe distance of 110 m; when 40 km/h ≤ v < 60 km/h, the safe distance S is 50 m; when 20 km/h ≤ v < 40 km/h, the safe distance S is 30 m; and when v < 20 km/h, the safe distance S is 15 m. Other correspondences may also be adopted, which is not limited in this embodiment. Meanwhile, the external object in this embodiment may include other vehicles, pedestrians, animals, non-motor vehicles and the like outside the vehicle, and may also include stationary objects such as roads and indicators. For different external objects, different correspondences may be used to determine the safe distance.
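A minimal sketch of this speed-to-safe-distance mapping, using the example correspondence above (any other correspondence could be substituted):

```python
def safe_distance_m(speed_kmh: float) -> float:
    """Map local vehicle speed to a safe following distance (example rule)."""
    if speed_kmh >= 60.0:
        return speed_kmh          # numerically equal, e.g. 110 km/h -> 110 m
    if speed_kmh >= 40.0:
        return 50.0
    if speed_kmh >= 20.0:
        return 30.0
    return 15.0

print(safe_distance_m(110.0))  # 110.0
print(safe_distance_m(35.0))   # 30.0
```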
Step S102: when the current distance is not greater than the safe distance, determine that the system is currently in the alarm state, and use the corresponding alarm information as the prompt content, where the alarm information includes one or more of alarm text, an alarm image and an alarm video.
In the embodiment of the invention, if the current distance between the external object and the vehicle is not greater than the safe distance, the external object is close to the vehicle and the risk of a traffic accident is high, so the situation can be treated as the alarm state, and the alarm information to be shown in the alarm state can be displayed as the prompt content. Specifically, the alarm information may include alarm text, such as "too close to the vehicle ahead, please decelerate"; the alarm information may also include an alarm image, for example a graphic displaying a red exclamation mark, or a graphic matching the external object highlighted at the position corresponding to the external object (i.e. the prompt position); the alarm information may also include an alarm video, such as an animation showing a collision between two vehicles.
Optionally, the safety distance in this embodiment may include one or more of a front safety distance, a rear safety distance, and a side safety distance. If the external object is positioned in front, when the current distance is not greater than the front safety distance, the current state of alarm can be determined; if the external object is located behind, when the current distance is not greater than the rear safety distance, the current state of warning can be determined; and if the external object is positioned on the side, determining that the external object is in the alarm state currently when the current distance is not greater than the side safety distance. In this case, appropriate warning information may be used as the content of the prompt in response to the situation, for example, if the external object on the right side is close to the vehicle, the content of the prompt may be "please keep the distance to the vehicle on the right side".
Meanwhile, while determining the prompt content, the driving assistance controller 100 may also determine a corresponding prompt position, determine an image source that needs to display the prompt content, that is, a target image source, and then display the prompt content at the prompt position on the reflection device 50 through the target image source. As described in the other embodiments, the prompting position may be preset, or may be determined based on the projection position of the external object on the reflection device 50, so as to implement the fit display.
Step S103: when the current distance is greater than the safe distance, determine that the system is currently in the normal state, and use the corresponding prompt information as the prompt content, where the prompt information includes one or more of an empty set, prompt text, a prompt image and a prompt video.
In this embodiment, if the current distance is greater than the safe distance, it indicates that the external object is far from the vehicle, and at this time, the vehicle is safe, that is, the vehicle can be regarded as being in a normal state, and at this time, the prompt information mainly playing a prompt role can be used as the prompt content. The prompt message can be an empty set, namely the prompt content is empty, and the head-up display system can not display any message; alternatively, the prompt message may be a prompt text, such as "safe distance, please keep on", etc.; the prompt information can also be a prompt image, such as a light-colored image; the prompt information may also be a prompt video, such as a clapping animation.
Optionally, when the system is in different states, i.e. the alarm state or the normal state, the prompt content may be displayed in different display manners. Specifically, when currently in the alarm state, the driving assistance controller 100 may instruct the target image source to display the prompt content in a normal manner or in a first highlighting manner, where the first highlighting manner includes one or more of scrolling, jumping, flashing, highlighting and display in a first color. When currently in the normal state, the driving assistance controller 100 may instruct the target image source to display the prompt content in the normal manner or in a second highlighting manner that includes display in a second color.
In the embodiment of the present invention, when the alarm state or the normal state is adopted, the prompt content may be displayed in the same manner (i.e., in a normal manner), but the prompt content displayed in different states is different, and the normal manner includes one or more of still display, scrolling display, jumping display, flashing display, highlighting display, and the like.
Alternatively, in different states, not only the contents of the prompts displayed are different, but also the display modes are different. For example, in a normal state, the heads-up display system may display "currently safe, please continue to hold" in a second color (e.g., green); in the warning state, "too close to the preceding vehicle, please decelerate" may be displayed in a first color (e.g., red). Alternatively, the same prompt content may be displayed in different display modes in different states. For example, the external object is a pedestrian, and the head-up display system currently needs to identify the pedestrian in an AR manner, for example, a rectangular frame is used to mark the position of the pedestrian; if the current state is normal, the rectangular frame can be displayed in a second color (for example, green), that is, the rectangular frame in green is displayed; if the alarm state is currently present, the rectangular frame may be displayed in a first color (e.g., red), i.e., a red rectangular frame is displayed.
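A minimal sketch of how the current state could drive both the prompt content and its display manner, following the color and content examples above; the data structure and state names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    content: str     # text, or a description of the AR graphic to draw
    color: str       # first color for the alarm state, second color for the normal state
    manner: str      # "normal", "flashing", "highlight", ...

def build_prompt(state: str, pedestrian_marked: bool = False) -> Prompt:
    """Pick prompt content and display manner according to the current state."""
    if state == "alarm":
        content = ("rectangular frame around the pedestrian" if pedestrian_marked
                   else "too close to the vehicle ahead, please decelerate")
        return Prompt(content, color="red", manner="flashing")    # first highlighting manner
    content = ("rectangular frame around the pedestrian" if pedestrian_marked
               else "currently safe, please continue")
    return Prompt(content, color="green", manner="normal")        # second highlighting manner

print(build_prompt("alarm"))
print(build_prompt("normal", pedestrian_marked=True))
```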
In addition, the state of the vehicle can be determined in real time in this embodiment, so that the prompt content can be displayed in different display manners in real time. For example, if the vehicle is currently in the alarm state, the prompt content "please slow down" is displayed in red; after the driver has restored the distance to the external object, for example by decelerating, so that the external object lies outside the safe distance, the normal state applies, and prompt content such as "currently driving safely" can again be displayed in green. Fig. 21 schematically shows the display when a preceding vehicle, taken as the external object, is too close to the local vehicle. As shown in fig. 21, when the head-up display system detects that the front vehicle 71 is at a current distance of 50 m from the local vehicle while the current safe distance is 60 m, i.e. the alarm state applies, the prompt content that the head-up display system displays on the reflection device 50 (i.e. the windshield of the local vehicle) includes a warning text 501, namely "please slow down!". The prompt content further includes a rectangular frame 502 framing the front vehicle 71, and the rectangular frame 502 may be displayed in red or highlighted to enhance the reminding effect. Further, the distance to the front vehicle 71 may be displayed at the same time; in fig. 21 the distance "50.0 m" is displayed below the rectangular frame 502.
In this embodiment, when the current state is in the alarm state, other reminding modes can be adopted for auxiliary reminding. Specifically, the driving assistance controller 100 may be further configured to: sending an alarm voice to the sound generating device and indicating the sound generating device to play the alarm voice; or sending a vibration instruction to the vibration device to indicate the vibration device to vibrate; the vibration means is a device that can be brought into contact with a user. In this embodiment, a speaker may be added to the head-up display system or a speaker on the vehicle may be used for voice reminding, and the warning voice may be a warning ring without specific meaning or a specific voice, such as "caution! Maintain the distance between vehicles! "and the like. In addition, a mechanical vibration device may be provided at a position where a driver may directly contact, such as a steering wheel or a seat of a vehicle, so that the driver can be alerted in a vibrating manner in an alarm state.
In a possible implementation manner, no matter what state the system is currently in, the related information of the external object can be displayed in real time. Specifically, the information acquisition device may include an image acquisition device, a vehicle-mounted radar, or a distance sensor (e.g. an infrared distance sensor, a laser distance sensor, an ultrasonic distance sensor, etc.), and determines the current position of the external object while determining the current distance to it; that is, the environmental information may include the current distance to the external object and the current position of the external object. At this time, the driving assistance controller 100 may use the position of the external object and the current distance to it as the prompt content, so that this information can be displayed on the reflection device 50 in real time and the driver can be reminded of the position, distance and so on of the external object in real time.
Alternatively, the position of the external object may be visually recognized in an AR display manner. Specifically, the driving assistance controller 100 may also determine a projection position where the external object is projected onto the reflection device, use the projection position or an edge of the projection position as a prompt position, and instruct the target image source to display preset prompt content at the prompt position. In this embodiment, by setting the projection position of the external object as the prompt position, the prompt content consistent with the external object can be displayed at the corresponding position of the reflection device 50, so that the external object can be visually identified to the driver. For example, if the external object is a vehicle, a box may be displayed at a corresponding position on the windshield, and the box may frame the vehicle.
Optionally, the head-up display system may display, in real time at a preset position of the reflection device 50, information that can be shown continuously as prompt content. For example, the positions and distances of all external objects around the vehicle can be monitored in real time and a bird's-eye view of the vehicle generated, in which the positions of the external objects in all directions around the vehicle are schematically represented, so that the driver can quickly survey the surrounding environment; meanwhile, the surrounding external objects can be displayed in different colors to represent different danger levels. Referring to fig. 22, other vehicles around the local vehicle 73 may be displayed on the reflection device 50 in a bird's-eye view 503; for example, a rear vehicle 72 approaching from behind-left of the local vehicle 73, i.e. about to overtake, may be shown in the bird's-eye view 503, and a warning text 501, namely "rear overtake", may be displayed.
On the basis of the above embodiment, if the external object is a pedestrian, a non-motor vehicle, etc., it generally has a higher safety priority, that is, the vehicle needs to give priority to the positions of pedestrians and the like during driving so as to avoid collision; therefore, a warning is given preferentially when the external object is a pedestrian, a non-motor vehicle or the like. In this embodiment, when the external object is a special object, it is determined that the system is currently in the alarm state, and the corresponding alarm information is used as the prompt content, where the alarm information may include one or more of alarm text, an alarm image and an alarm video.
Specifically, when the external object is a pedestrian, an animal, or a non-motor vehicle, and the external object is located in the current travel route, the external object is taken as a special object.
Or, when the external object is a pedestrian, an animal, or a non-motor vehicle, and the external object moves toward the current travel route, the external object is taken as a special object.
Or, when the external object is a pedestrian, an animal or a non-motor vehicle and the current distance between the external object and the vehicle is less than a preset distance value, the external object is taken as a special object.
Or when the external object is a pedestrian, an animal or a non-motor vehicle and is currently located in the object dense area, taking the external object as a special object; the subject-dense area includes one or more of a school, a hospital, a parking lot, an urban area.
Or, the driving information includes sight line direction information of the driver; when the external object is a pedestrian, an animal or a non-motor vehicle and the sight line direction information does not match the position of the external object, the external object is taken as a special object.
In the embodiment of the invention, when the external object is a pedestrian, an animal, a non-motor vehicle or another object requiring special attention, it is judged whether the external object should be treated as a special object. Specifically, when the external object is located in the current driving route of the vehicle, or is moving toward the current driving route, the vehicle has a high possibility of colliding with it, and the alarm state applies. Or, if the current distance between the vehicle and the external object is smaller than the preset distance value, the external object is close to the vehicle and the alarm state applies; the preset distance value may be a fixed preset value, or, for example, the "safe distance" determined from the vehicle speed in the above embodiment. Or, when it can be determined, based on GPS or the like, that the external object is located in a person-dense area such as a school or a hospital, a large number of pedestrians generally exist there, so the alarm state can be set to remind the driver. Alternatively, the information acquisition device may further include an image acquisition device, an infrared sensor and the like, based on which the sight line direction information of the driver, such as the positions of both eyes and the direction of the driver's gaze, is determined; if the sight line direction information does not match the position of the external object, the driver most likely has not noticed the external object, and the alarm state can be set to remind the driver. The information acquisition device may determine the sight line direction information based on an eye tracking technique, or may adopt other techniques, which is not limited here.
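A minimal sketch of this special-object screening, encoding each listed condition as a boolean input; the field names and the gaze-matching test are illustrative assumptions rather than the patent's data model:

```python
VULNERABLE = {"pedestrian", "animal", "non_motor_vehicle"}

def is_special_object(obj_type: str, on_route: bool, moving_toward_route: bool,
                      distance_m: float, preset_distance_m: float,
                      in_dense_area: bool, gaze_matches_object: bool) -> bool:
    """Return True if the external object should trigger the alarm state."""
    if obj_type not in VULNERABLE:
        return False
    return (on_route
            or moving_toward_route
            or distance_m < preset_distance_m
            or in_dense_area
            or not gaze_matches_object)

# A pedestrian 12 m away that the driver is not looking at -> special object.
print(is_special_object("pedestrian", on_route=False, moving_toward_route=False,
                        distance_m=12.0, preset_distance_m=30.0,
                        in_dense_area=False, gaze_matches_object=False))  # True
```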
In this embodiment, when it is determined that the current state is the warning state, warning information for reminding the driver, such as "pedestrian exists in front, attention to avoidance", "school in front, attention to pedestrian", and the like, may be generated, and the warning information is used as a prompt content. As shown in fig. 23, when a pedestrian 74 is detected in front, the head-up display system may display a warning text 501 on the reflection device 50, i.e., "notice pedestrian", and may also highlight the pedestrian 74 through a rectangular frame 502 and remind the driver that the pedestrian is currently moving toward the current driving lane 75 through an arrow 504 capable of indicating the movement trend of the pedestrian 74. Meanwhile, the prompt content may be displayed in a normal manner or a first highlighted manner, or auxiliary prompting may be performed in a manner of voice prompting, which is basically similar to that of the above embodiment and is not described herein again.
Optionally, the "safe distance" in the above embodiment may further include a front safe distance, which refers to the safe distance between the vehicle and an external object located in front. If the external object is located in front and the current distance is not greater than the front safe distance, then, when the difference between the safe distance and the current distance is greater than a preset distance difference and/or the time spent in the alarm state exceeds a preset duration, a deceleration instruction or a braking instruction is generated and sent to an external driving system.
In the embodiment of the invention, if the current distance of the external object is not more than the front safety distance, the current state can be an alarm state; meanwhile, if the difference between the safe distance and the current distance is greater than the preset distance difference value, or the time length in the alarm state exceeds the preset time length, it indicates that the external object is too close to the vehicle, or the distance between the external object and the vehicle is within a dangerous range for a long time, at this time, the assistant driving controller 100 may generate a deceleration instruction or a braking instruction, and send the deceleration instruction or the braking instruction to an external driving system, so that the vehicle may be decelerated or braked, and the safe distance between the vehicle and the external object may be maintained.
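A minimal sketch of this escalation rule; the threshold values are placeholders, and whether deceleration or braking is chosen in each branch is an illustrative design choice rather than something the embodiment prescribes:

```python
def escalation_command(current_m: float, safe_m: float, alarm_seconds: float,
                       gap_threshold_m: float = 10.0,
                       max_alarm_seconds: float = 3.0):
    """Decide whether to send a deceleration/braking instruction to the
    external driving system while the front distance stays unsafe."""
    if current_m > safe_m:
        return None                      # normal state: nothing to send
    gap = safe_m - current_m
    if gap > gap_threshold_m:
        return "brake"                   # far inside the safe distance
    if alarm_seconds > max_alarm_seconds:
        return "decelerate"              # unsafe for too long
    return None                          # alarm state, but HUD prompt only

print(escalation_command(current_m=45.0, safe_m=60.0, alarm_seconds=1.0))  # brake
print(escalation_command(current_m=55.0, safe_m=60.0, alarm_seconds=4.0))  # decelerate
```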
On the basis of the above embodiment, when the system is mounted on a vehicle, the head-up display system can also monitor whether lane deviation exists, determine that a lane deviation problem exists when the vehicle deviates from its lane, and then give an alarm. Specifically, the information acquisition device may include an image acquisition device, a vehicle-mounted radar, a GPS and the like, based on which the position of the vehicle, i.e. the vehicle position information, can be determined; meanwhile, the lane condition in front of the vehicle, i.e. the lane position information, can be determined based on the image acquisition device, and the lane position information may specifically include the lane in which the vehicle is currently located, the lanes adjacent to it, and the like. If the driving assistance controller 100 obtains the vehicle position information and the lane position information, then, referring to fig. 24, generating the prompt content according to the driving information and the environmental information may include:
step S201: determining an offset parameter of the vehicle deviating from the current driving lane according to the vehicle position information and the lane position information, and judging whether the offset parameter is greater than a corresponding offset threshold value; the offset parameter includes an offset distance and/or an offset angle.
In the embodiment of the invention, whether the vehicle is positioned in a proper lane can be determined based on the position of the vehicle and the position of the lane, i.e. whether the vehicle has deviated can be judged. If the vehicle is located within its lane, the offset parameter may be zero, that is, the offset distance and the offset angle are both zero; if the vehicle has an offset, for example it is crossing the lane line, the corresponding offset distance is determined; if the driving direction of the vehicle does not coincide with the lane direction, the corresponding offset angle, i.e. the angle at which the vehicle deviates from the lane, needs to be determined. Whether an offset currently exists can then be determined by comparing the offset parameter with the preset offset threshold.
Step S202: when the offset parameter is greater than the corresponding offset threshold, determine that the vehicle is currently in the alarm state, and use the corresponding alarm information as the prompt content, where the alarm information includes one or more of alarm text, an alarm image, an alarm video and the preferential driving lane.
In the embodiment of the invention, if the current offset parameter is greater than the offset threshold value, the offset distance is too large and/or the offset angle is too large, the vehicle is indicated to have the offset risk at this time, namely the vehicle can be regarded as being in an alarm state, and corresponding alarm information is used as prompt content to remind a driver. The warning information comprises warning words, warning images or warning videos related to lane deviation, and the current priority driving lane can be marked, namely the priority driving lane is used as prompt content. Specifically, the priority driving lane may be used as an external object, and the projection position where the priority driving lane is mapped onto the reflection device 50 may be determined to determine the corresponding prompt position, for example, the edge of the projection position or the projection position is used as the prompt position, and the priority driving lane is displayed at the prompt position on the reflection device 50. Specifically, the reflector 50 may display a graphic such as an arrow corresponding to the priority lane, a trapezoid (corresponding to the straight priority lane), a fan ring with a gradually decreasing width (corresponding to the priority lane requiring turning), or the like. The graphic shape displayed on the reflector 50 may be based on the actual shape of the priority driving lane mapped to the reflector 50.
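A minimal sketch of the offset-threshold check in steps S201 and S202, with the normal-state branch described later in step S203 included for completeness; the offset estimation itself (e.g. from camera-based lane detection) is abstracted away, and the thresholds and prompt strings are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Offset:
    distance_m: float   # lateral distance from the lane reference
    angle_deg: float    # heading angle relative to the lane direction

def lane_departure_prompt(offset: Offset,
                          max_distance_m: float = 0.5,
                          max_angle_deg: float = 5.0):
    """Compare the offset parameters with their thresholds and pick alarm or
    prompt content accordingly."""
    exceeded = (offset.distance_m > max_distance_m
                or offset.angle_deg > max_angle_deg)
    if exceeded:
        return ("alarm", "lane departure, please return to the priority driving lane")
    return ("normal", "lane keeping in progress")

print(lane_departure_prompt(Offset(distance_m=0.8, angle_deg=2.0)))   # alarm
print(lane_departure_prompt(Offset(distance_m=0.1, angle_deg=1.0)))   # normal
```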
Optionally, when the offset parameter is greater than the corresponding offset threshold, it may be directly determined that the vehicle is in the alarm state; or, further, when the offset parameter is greater than the corresponding offset threshold, whether the current lane deviation should be regarded as the alarm state may be judged comprehensively based on other driving information. Specifically, the information acquisition device includes a speed sensor, an acceleration sensor, an angular velocity sensor and the like, which can be used respectively to acquire the vehicle speed, the vehicle acceleration, the vehicle steering angle and the like; the state of the turn signal, i.e. whether it is switched on, can be determined based on the vehicle's own system. The present embodiment generates vehicle state information from the vehicle speed, vehicle acceleration, turn signal state and similar information, transmits the vehicle state information to the driving assistance controller 100 as a kind of driving information, and the driving assistance controller 100 determines whether the alarm state applies based on the current offset parameter and the vehicle state information.
Specifically, if the offset parameter is greater than the corresponding offset threshold, and when the vehicle speed is greater than a first preset speed value or the vehicle acceleration is not greater than zero, it is determined that the vehicle is currently in the warning state.
And if the offset parameter is larger than the corresponding offset threshold value, and the turn signal on the same side as the direction corresponding to the offset angle of the vehicle is not switched on, determining that the vehicle is currently in the warning state.
And if the offset parameter is larger than the corresponding offset threshold value and a lane change is currently not possible, determining that the vehicle is currently in the warning state.
And if the offset parameter is greater than the corresponding offset threshold value and the time length of the lane departure is greater than a preset first departure time length threshold value, determining that the vehicle is in the warning state currently.
In the embodiment of the invention, if the offset parameter is greater than the corresponding offset threshold, a deviation risk exists; whether this deviation is normal is then judged based on the vehicle state information, and if it is not normal, the warning state applies. Specifically, if the vehicle speed is greater than the first preset speed value, or the vehicle acceleration is not greater than zero, the vehicle speed is too high or the vehicle is still not decelerating despite the deviation, and the situation can be considered dangerous, i.e. the warning state applies. Or, if the direction corresponding to the vehicle's offset angle is not opposite to the direction corresponding to the steering angle, the vehicle is steering in the same direction as the deviation, or is still travelling straight, and the warning state can apply; for example, if the vehicle is currently deviating to the left, i.e. the direction corresponding to the offset angle is leftward, and the vehicle is also steering to the left, the offset angle will increase, which brings great risk. Or, if the turn signal on the same side as the direction corresponding to the offset angle is not switched on, for example the vehicle is deviating to the left but the left turn signal is off, it can be inferred that the driver is not making a regulation-compliant turn to the left, which is risky, so the warning state applies. Or, if a lane change is currently not possible, for example another vehicle occupies the lane in the deviation direction, a lane change is not allowed; if the driver continues to move in the deviation direction a traffic accident is likely, so the warning state applies. Or, if the time for which the vehicle has deviated from the lane is greater than the preset first departure duration threshold, the vehicle has been off its lane for a long time and the driver should be reminded.
Correspondingly, even when the offset parameter is greater than the corresponding offset threshold, some situations are normal deviations; in those cases the driver need not be specially reminded, i.e. the situation is treated as normal, or at least not as a lane-deviation case. Specifically, the driving information collected by the information collecting device further includes vehicle state information, which includes one or more of the vehicle speed, the vehicle acceleration, the turn signal state, the hazard (double-flash) light state and the yaw rate. The driving assistance controller 100 can make the following determinations based on the vehicle state information:
and if the offset parameter is larger than the corresponding offset threshold value, and when the vehicle speed is smaller than a second preset speed value or the vehicle acceleration is smaller than zero, determining that the vehicle is in a normal state at present.
And if the offset parameter is larger than the corresponding offset threshold value, and the turn signal on the same side as the direction corresponding to the offset angle of the vehicle is switched on, determining that the vehicle is currently in the normal state.
And if the offset parameter is larger than the corresponding offset threshold value and the hazard (double-flash) lights are switched on, determining that the vehicle is currently in the normal state.
And if the offset parameter is greater than the corresponding offset threshold value and the yaw velocity is greater than the preset angular velocity threshold value, determining that the current state is normal.
And if the offset parameter is greater than the corresponding offset threshold value and the time length of the lane departure is less than a preset second departure time length threshold value, determining that the lane departure is in a normal state currently.
And if the offset parameter is larger than the corresponding offset threshold value and the driving information comprises the sight line direction information of the driver, determining that the driver is in a normal state currently when the sight line direction information is the same as the direction corresponding to the offset angle.
In the embodiment of the present invention, if the offset parameter is greater than the corresponding offset threshold, a deviation risk exists; however, if it is determined based on the vehicle state information that the current deviation is a normal one (e.g. a normal lane change), it can be treated as the normal state and no alarm is given. Specifically, if the vehicle speed is less than the second preset speed value or the vehicle acceleration is less than zero, the vehicle is not travelling fast or is decelerating, the risk is low, and the normal state applies. If the turn signal on the same side as the direction corresponding to the vehicle's offset angle is switched on, the vehicle is indeed deviating from the lane, but the driver is signalling a turn in that direction, i.e. is changing lanes or turning normally, and the normal state applies. If the hazard lights are on, or the yaw rate is greater than the preset angular velocity threshold, the vehicle presumably has to deviate or change lanes because of a fault, or is steering or swerving urgently because of an emergency; this is not a situation in which a lane-deviation warning is needed, so with respect to lane deviation it can also be treated as the normal state. In addition, if the direction corresponding to the driver's sight line direction information is the same as the direction corresponding to the offset angle, then although the vehicle is currently deviating from the lane, the driver has noticed the deviation, and the normal state applies without an extra alarm to remind the driver.
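A minimal sketch of this arbitration for a detected deviation, encoding the normal-state overrides above as boolean inputs and defaulting to the alarm state when none applies; the field names, thresholds and the precedence given to the normal-state overrides are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kmh: float
    decelerating: bool                  # vehicle acceleration < 0
    turn_signal_on_offset_side: bool
    hazard_lights_on: bool
    yaw_rate_dps: float
    departure_seconds: float
    gaze_toward_offset: bool

def lane_departure_state(offset_exceeded: bool, s: VehicleState,
                         v2: float = 30.0, yaw_thr: float = 15.0,
                         t2: float = 0.5) -> str:
    """Classify a detected lane deviation as 'alarm' or 'normal'."""
    if not offset_exceeded:
        return "normal"
    # Normal-deviation overrides: intentional lane change, emergency manoeuvre,
    # low speed / braking, a very short deviation, or the driver looking that way.
    if (s.speed_kmh < v2 or s.decelerating
            or s.turn_signal_on_offset_side or s.hazard_lights_on
            or s.yaw_rate_dps > yaw_thr
            or s.departure_seconds < t2 or s.gaze_toward_offset):
        return "normal"
    return "alarm"   # no normal explanation found -> warn the driver

state = VehicleState(speed_kmh=80.0, decelerating=False,
                     turn_signal_on_offset_side=False, hazard_lights_on=False,
                     yaw_rate_dps=2.0, departure_seconds=3.0,
                     gaze_toward_offset=False)
print(lane_departure_state(True, state))   # alarm
```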
Step S203: when the offset parameter is not greater than the corresponding offset threshold, determine that the vehicle is currently in the normal state, and use the corresponding prompt information as the prompt content, where the prompt information includes one or more of an empty set, prompt text, a prompt image, a prompt video and the preferential driving lane.
In the embodiment of the invention, if the current offset parameter is not greater than the offset threshold, the offset distance and/or the offset angle is small, which indicates that the vehicle is travelling normally and can be regarded as being in a normal state; the corresponding prompt information can then be used as the prompt content.
Optionally, similar to the safe-distance embodiment described above, the prompt content can be displayed in different display manners depending on whether the vehicle is in the warning state or the normal state. For example, when the vehicle is currently in the warning state, the driving assistance controller 100 may instruct the target image source to display the prompt content in the normal manner or in a first highlighting manner, where the first highlighting manner includes one or more of scrolling display, jumping display, flashing display, highlighted display and display in a first color. When the vehicle is currently in the normal state, the driving assistance controller 100 may instruct the target image source to display the prompt content in the normal manner or in a second highlighting manner that includes display in a second color. The display manners in this embodiment are substantially the same as in the above embodiment and are not described again here.
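A minimal sketch of how the state-dependent display manner might be selected; the effect names, colours and dictionary layout are illustrative assumptions.

```python
# Hypothetical mapping from prompt state to display style; the concrete
# effects ("scroll", "blink") and colours are assumptions for illustration.
FIRST_HIGHLIGHT = {"effects": ["scroll", "jump", "blink", "highlight"], "color": "red"}
SECOND_HIGHLIGHT = {"effects": [], "color": "green"}

def display_style(state: str, highlighted: bool = True) -> dict:
    """Pick a display style for a prompt content based on its state."""
    if not highlighted:
        return {"effects": [], "color": "default"}   # the "normal manner"
    return FIRST_HIGHLIGHT if state == "warning" else SECOND_HIGHLIGHT
```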
In this embodiment, there are two situations in which the vehicle is in a normal state. If the offset parameter is not greater than the corresponding offset threshold, the vehicle is travelling normally with no departure, and a simple prompt content can be determined, for example the text "lane keeping in progress". If the offset parameter is greater than the corresponding offset threshold but the vehicle is nevertheless in a normal state, the vehicle is currently leaving its lane but is turning deliberately, and the corresponding prompt content can be displayed in a prompting manner. For example, the head-up display system can display AR images matched to the current lane and the turning lane, such as a blue arrow pointing towards the turning lane, a virtual blue road fitted to the current road and a green lane fitted to the turning lane; alternatively, a simplified map of the road including the current lane and the turning lane can be projected, with the two lanes distinguished by a particular color or shape. For example, when the driver is about to leave the highway via a ramp on the right, images of the main lane and the ramp are projected on the reflection device 50 together with an arrow pointing to the ramp; when the driver changes lanes to overtake, the reflection device 50 shows images of the current lane and the overtaking lane, which can flash to remind the driver of the lane-change trajectory. As shown in fig. 25, the head-up display system can determine from the lane position information corresponding to the current driving lane 75 that it is a right-turn lane; if the vehicle continues straight ahead, its offset angle will increase, so a warning text 501, i.e. "please turn right", and an arrow 504 fitted to the current driving lane 75 can be displayed on the reflection device 50 to remind the driver intuitively to turn right. Alternatively, as shown in fig. 26, if the driver is currently changing lanes to the left, the direction corresponding to the offset angle of the vehicle is the left; if the driver has not turned on the left turn signal lamp, the lane change is currently being made improperly, and a warning text 501, "please turn on the left turn light", can be displayed on the reflection device 50 to remind the driver to switch it on; at the same time, the current direction of travel of the vehicle can be indicated by arrow 504, alerting the driver that the vehicle is currently drifting to the left.
Optionally, during driving, the priority driving lane can be displayed in real time regardless of whether the vehicle is in the warning state or the normal state. Specifically, the driving assistance controller 100 determines the priority driving lane of the vehicle from the vehicle position information and the lane position information and takes the priority driving lane as the target object; it then determines the projection position at which the target object is projected onto the reflection device 50, takes the projection position or its edge as the prompt position, and instructs the target image source to display preset prompt content at the prompt position.
In the embodiment of the invention, the priority driving lane can be determined in real time, and the prompt position is obtained by determining where the priority driving lane projects onto the reflection device 50. Because the distance from the vehicle increases along the lane, the priority driving lane cannot be treated as a single point; instead, several points along it can be selected as sampling points so that the head-up display system can determine more accurately at which positions on the reflection device 50 the content fitted to the priority driving lane should be displayed. Furthermore, since different points on the priority driving lane are at different distances from the vehicle, parts of the lane may be displayed by different image sources, for example the first image source 11 and the second image source 12; alternatively, one point on the priority driving lane (for example its midpoint) can be taken as a reference point, and the distance between the reference point and the vehicle used as the target distance to determine a single target image source.
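The sampling-and-assignment idea can be sketched as below; the sampling stride, the table of virtual-image distances and the helper names are assumptions for illustration, not values from the patent.

```python
import math

# Hypothetical virtual-image distances (metres) of each image source;
# the values are illustrative only.
IMAGE_DISTANCES = {"first_image_source": 2.5, "second_image_source": 10.0}

def sample_lane(lane_polyline, stride=5):
    """Pick sampling points along the priority driving lane polyline
    (naive every-nth-point subsampling for the sketch)."""
    return lane_polyline[:: max(1, int(stride))]

def assign_image_source(point, vehicle_pos):
    """Choose the image source whose virtual-image distance best matches
    the distance from the vehicle to this lane point."""
    target = math.dist(point, vehicle_pos)
    return min(IMAGE_DISTANCES, key=lambda src: abs(IMAGE_DISTANCES[src] - target))

def plan_lane_overlay(lane_polyline, vehicle_pos):
    """Map each sampled lane point to the image source that should draw it."""
    return {tuple(p): assign_image_source(p, vehicle_pos)
            for p in sample_lane(lane_polyline)}
```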
Further, the priority driving lane can be displayed in different display manners in different states. For example, in the normal state the priority driving lane can be displayed in green, and when the vehicle is deviating it can be displayed in red. Specifically, a graphic, an arrow or the like visually fitted to the priority driving lane can be displayed to guide the driver.
On the basis of the above embodiment, if the offset parameter is greater than the corresponding offset threshold, and the difference between the offset parameter and the offset threshold is greater than a preset offset difference value and/or the time spent in the offset state exceeds a preset safe offset duration, a deceleration command or a brake command may be generated and sent to an external driving system.
In the embodiment of the invention, if the offset parameter of the vehicle is greater than the corresponding offset threshold, there may currently be a departure risk. If, in addition, the difference between the offset parameter and the offset threshold is greater than the preset offset difference value, or the time spent in the offset state exceeds the preset safe offset duration, the vehicle is deviating too far or has been travelling in the deviated condition for too long and the risk factor is high; the driving assistance controller 100 may then generate a deceleration command or a brake command and send it to an external driving system, so that the vehicle decelerates or brakes and a traffic accident caused by the serious deviation is avoided.
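A minimal sketch of the escalation rule just described; the margin, the duration values and the choice between deceleration and braking are assumptions for illustration.

```python
def escalate_to_driving_system(offset, offset_threshold, time_in_offset,
                               *, offset_margin=0.5, safe_offset_duration=3.0):
    """Return a command for the external driving system, or None.

    A warning-level offset escalates to a deceleration/brake command when
    the overshoot or the dwell time is too large; all numeric values and
    the brake-vs-decelerate choice are illustrative assumptions.
    """
    if offset <= offset_threshold:
        return None
    if (offset - offset_threshold) > offset_margin or time_in_offset > safe_offset_duration:
        return "brake" if time_in_offset > 2 * safe_offset_duration else "decelerate"
    return None                      # warn only; no intervention yet
```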
On the basis of the above embodiment, the head-up display system can also prompt the driver about abnormal roads. Specifically, the information acquisition device may include an image acquisition device, a vehicle-mounted radar and the like for acquiring road abnormality information; alternatively, the road abnormality information may be obtained from other external systems (for example, a real-time traffic system). The road abnormality information may include one or more of an obstacle position, a maintenance road section position, a dangerous road section position, an uneven road section position, an accident road section position and a temporary inspection road section position. When the driving assistance controller 100 acquires the road abnormality information, it may use that information as the prompt content. Alternatively, the driving assistance controller 100 determines, from the road abnormality information, the projection position at which the abnormal position is projected onto the reflection device 50, takes the projection position or its edge as the prompt position, and instructs the target image source to display the prompt content corresponding to the road abnormality information at the prompt position.
In the embodiment of the present invention, if there is a road abnormality near the transportation means, the information acquisition device or another system can acquire the corresponding road abnormality information, and the driving assistance controller 100 can display it directly on the reflection device 50 as the prompt content, for example "traffic accident one hundred meters ahead". Alternatively, the position of the abnormal road can be marked on the reflection device 50 in the AR display manner. For example, if the information acquisition device detects an obstacle on the road surface (for example, a stone, an icy patch or a pothole), the position of the obstacle, i.e. the abnormal position, can be determined; the projection position of the abnormal position on the reflection device 50 is used as the prompt position, and corresponding prompt content (for example, a graphic matching the shape of the obstacle) is displayed there, so that the position of the obstacle is shown to the driver intuitively and the reminder is more effective.
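To make the projection step concrete, here is a deliberately simplified sketch that maps a detected abnormal position to a prompt position on the reflection plane by similar triangles toward the viewer. The coordinate convention, plane placement and function name are assumptions; the patent's actual optical mapping through the curved mirror is not reproduced here.

```python
def project_to_windshield(obj_xyz, eye_xyz, plane_x):
    """Project a world point onto a vertical reflection plane at x = plane_x,
    along the line of sight from the eye position to the object.

    Coordinates: x forward, y left, z up, in metres.  A deliberately
    simplified stand-in for the real HUD optical mapping.
    """
    ox, oy, oz = obj_xyz
    ex, ey, ez = eye_xyz
    if ox == ex:
        raise ValueError("object lies in the viewer's plane; cannot project")
    t = (plane_x - ex) / (ox - ex)          # fraction of the way to the object
    return (plane_x, ey + t * (oy - ey), ez + t * (oz - ez))

# e.g. an obstacle 40 m ahead and slightly left, eye at the driver position
prompt_pos = project_to_windshield((40.0, 1.2, 0.0), (0.0, 0.4, 1.2), plane_x=1.0)
```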
Optionally, the head-up display system can also remind the driver in low-visibility environments such as rain, fog or night driving. Specifically, the environmental information further includes visibility information; when the visibility information is less than a preset visibility threshold, the current visibility is low and the driving environment is poor. In that case, the information acquisition device 200 includes components such as a vehicle-mounted radar and a distance sensor that can still detect external objects normally in low visibility, and the position information of an external object can be acquired through it. The driving assistance controller 100 can then use the position information of the external object as the prompt content; alternatively, the driving assistance controller 100 determines the projection position at which the external object is projected onto the reflection device 50, takes the projection position or its edge as the prompt position, and instructs the target image source to display preset prompt content at the prompt position.
In the embodiment of the invention, the position information of an external object includes the position of the external object and its current distance from the vehicle, and the head-up display system can display this position information whenever it detects an external object. Alternatively, the position of the external object can be marked more intuitively in the AR manner, so that the driver knows where it is and a collision can be avoided. The road itself can also be treated as an external object: its position can be determined from the real-time road conditions and networked road information, and the driving route can then be displayed on the reflection device 50 as an aid, for example by marking auxiliary lines and turn signs on the correct road.
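A small sketch of the low-visibility branch, assuming a simple sensor interface; the threshold value, detection tuple format and dictionary fields are illustrative assumptions.

```python
VISIBILITY_THRESHOLD_M = 100.0   # assumed preset visibility threshold, metres

def low_visibility_prompts(visibility_m, radar_detections):
    """Build prompt contents from radar/distance-sensor detections when
    visibility is poor.  `radar_detections` is assumed to be a list of
    (label, position_xyz, distance_m) tuples."""
    if visibility_m >= VISIBILITY_THRESHOLD_M:
        return []                                   # normal visibility: nothing extra
    prompts = []
    for label, position, distance in radar_detections:
        prompts.append({
            "text": f"{label}: {distance:.0f} m",   # textual prompt content
            "world_position": position,             # used to derive the AR prompt position
        })
    return prompts
```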
Optionally, external objects on abnormal roads and external objects detected in low-visibility environments can all be treated as objects to be marked prominently, i.e. all regarded as being in the alarm state; alternatively, they can be subdivided into a normal state and an alarm state, for example according to their distance from the vehicle: an abnormal road or external object that is far from the vehicle can be in the normal state, while one that is close can be in the alarm state. Prompt contents in different states can then be displayed in the corresponding display manners.
As those skilled in the art will appreciate, the prompt content in the above embodiments refers to content that can currently be displayed on the reflection device 50; at the same point in time, multiple prompt contents can be displayed on the reflection device 50. In addition, if external objects are currently present, each external object can correspond to at least one prompt content. Likewise, the alarm state and the normal state determined in the above embodiments can also be states of an individual prompt content, i.e. at the same moment different prompt contents can be in different states. For example, if the transportation means is a vehicle and there are two pedestrians A and B in front of it, with pedestrian A closer to the vehicle and pedestrian B farther away, the alarm state can be determined for pedestrian A, who may be marked with a red frame on the reflection device 50 (e.g. the windshield of the vehicle), while the normal state is determined for pedestrian B, who may be marked with a green frame; that is, pedestrian A and pedestrian B can be marked on the reflection device 50 with a red frame and a green frame respectively, and the two frames can be independent of each other.
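The per-object behaviour described above can be sketched as follows; the distance cut-off, the colours and the data format are assumptions chosen only to mirror the pedestrian A / pedestrian B example.

```python
WARNING_DISTANCE_M = 20.0        # assumed cut-off separating the two states

def per_object_prompts(objects):
    """Assign each detected object its own prompt state and frame colour.

    `objects` is assumed to be a list of (name, distance_m) pairs; each
    object gets an independent prompt, so states can differ at the same
    moment (e.g. pedestrian A red, pedestrian B green)."""
    prompts = []
    for name, distance in objects:
        state = "warning" if distance < WARNING_DISTANCE_M else "normal"
        prompts.append({"object": name,
                        "state": state,
                        "frame_color": "red" if state == "warning" else "green"})
    return prompts

# e.g. per_object_prompts([("pedestrian A", 8.0), ("pedestrian B", 45.0)])
```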
Optionally, the driving assistance controller 100 is further configured to generate sharing information from the prompt content and send the sharing information to a server or to other vehicles within a preset distance. In the embodiment of the invention, a vehicle equipped with the head-up display system can share the information it has acquired with other vehicles, either by sending it directly to nearby vehicles or by uploading it to a server that forwards it to the vehicles that need it. Specifically, when the head-up display system uses the position information of an external object, the position information of a special object, road abnormality information or the like as the prompt content, that prompt content can be shared with other vehicles; likewise, when the local vehicle departs from its lane, nearby vehicles can be notified so that they can take avoiding action.
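As a rough sketch of the sharing step under assumed interfaces (the message fields, the `upload` method on the server object and the `send` method on nearby vehicles are all hypothetical; a real system would use V2X or a telematics backend):

```python
import json
import time

def build_share_message(prompt_content, vehicle_id, position):
    """Wrap a prompt content into a shareable message (format is assumed)."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),
        "position": position,
        "prompt": prompt_content,
    })

def share(prompt_content, vehicle_id, position, nearby_vehicles, server=None,
          max_range_m=500.0):
    """Send the message to a server and/or to vehicles within a preset distance.

    `nearby_vehicles` is assumed to be a list of objects with a `distance_m`
    attribute and a `send(payload)` method; `server`, if given, is assumed
    to expose an `upload(payload)` method."""
    payload = build_share_message(prompt_content, vehicle_id, position)
    if server is not None:
        server.upload(payload)
    for other in nearby_vehicles:
        if other.distance_m <= max_range_m:
            other.send(payload)
```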
The above description is only one embodiment of the present invention, and the protection scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (28)

1. A head-up display system, comprising: the device comprises an image source group, a curved mirror, a projection image source, a light ray control device and an auxiliary driving controller; the auxiliary driving controller is respectively connected with the projection image source and the plurality of image sources in the image source group;
the image source group comprises a first image source and a second image source, and the light ray control device comprises a main optical axis control element and a diffusion element; the curved mirror and the light ray control device are arranged on the same side of the reflecting device;
the first image source is used for emitting first imaging light rays incident to the curved mirror, and the second image source is used for emitting second imaging light rays incident to the curved mirror; a first object distance corresponding to the first image source is different from a second object distance corresponding to the second image source, wherein the first object distance is the length of a propagation path of the first imaging light from the first image source to the curved mirror, and the second object distance is the length of a propagation path of the second imaging light from the second image source to the curved mirror;
the curved mirror is used for reflecting the first imaging light rays and the second imaging light rays to the reflecting device and reflecting the first imaging light rays and the second imaging light rays to the range of the eye box through the reflecting device;
the projection image source is used for emitting projection imaging light rays incident on the light ray control device; the main optical axis control element is used for reflecting a plurality of paths of the projection imaging light rays to the reflecting device and, through the reflecting device, reflecting the plurality of paths of projection imaging light rays to the same observation range, the observation range being a position or an area within the eye box range;
the diffusion element is arranged on the side of the main optical axis control element close to the projection image source, between the main optical axis control element and the projection image source, and is used for diffusing the projection imaging light reflected by the main optical axis control element and forming light spots;
when prompt content needs to be displayed, the driving assistance controller selects a target image source from the projection image source and the plurality of image sources in the image source group, and outputs the prompt content to the target image source so that the target image source displays the prompt content.
2. The heads up display system of claim 1, further comprising a first transflective element; the first transflective element is capable of transmitting light having a first characteristic and reflecting light having a second characteristic;
the first image source is arranged on one side of the first transflective element, and the second image source and the curved mirror are arranged on the other side of the first transflective element; the first imaging light ray has the first characteristic and the second imaging light ray has the second characteristic.
3. The heads up display system of claim 2 further comprising a second transflective element, and the image source set further comprises a third image source; the third image source is used for emitting third imaging light rays with third characteristics, which are incident to the curved mirror; a third object distance corresponding to the third image source is different from both the first object distance and the second object distance, and the third object distance is the length of a propagation path of the third imaging light from the third image source to the curved mirror;
the second transflective element is capable of transmitting light having the first characteristic and reflecting light having a third characteristic; the first transflective element is also capable of transmitting light having a third characteristic;
the second transflective element is disposed between the first image source and the first transflective element, and the third image source and the first transflective element are disposed on a same side of the second transflective element;
alternatively,
the second transflective element is capable of transmitting light having a second characteristic and reflecting light having a third characteristic; the first transflective element is further capable of reflecting light rays having a third characteristic;
the second transflective element is disposed between the second image source and the first transflective element, and the third image source and the first transflective element are disposed on a same side of the second transflective element.
4. The heads up display system of any one of claims 1 to 3, further comprising a mirror group comprising one or more planar mirrors;
the plane reflector is used for reflecting the imaging light rays emitted by the image source group to the curved mirror.
5. Head-up display system according to claim 4,
the reflecting mirror group comprises a plane reflecting mirror, and the plane reflecting mirror is used for reflecting imaging light rays emitted by each image source in the image source group to the curved mirror;
or the reflector group comprises a plurality of plane reflectors, and the plane reflectors correspond to the image sources in the image source group one by one; the plane mirror is used for reflecting imaging light rays emitted by the corresponding image source to the curved mirror.
6. A heads-up display system as claimed in claim 1, wherein the area of the projection imaging area on the reflecting device is greater than the area of the enlarged imaging area;
the projection imaging area is an area where the projection imaging light can be incident on the surface of the reflecting device, and the enlarged imaging area is an area where the imaging light emitted by the image source group can be incident on the surface of the reflecting device; the imaging light comprises one or more of the first imaging light, the second imaging light and the third imaging light.
7. Head-up display system according to claim 1,
the main optical axis control element comprises a plurality of discrete first reflecting structures, and the first reflecting structures are used for reflecting one path of the projection imaging light to the observation range;
a point (x, y, z) on the first reflective structure satisfies the following equation:
Figure FDA0002362695390000031
wherein, P1Is the coordinate, P, of the location of the projection source2Is the observation rangeCoordinate of (D), M0(x0,y0,z0) Being the coordinates of a known point on the first reflecting structure,
Figure FDA0002362695390000032
a normal vector representing the first reflective structure;
alternatively, the first and second electrodes may be,
the main optical axis control element comprises a plurality of continuous second reflecting structures, and the second reflecting structures are used for reflecting the plurality of paths of projection imaging light rays to the observation range;
the included angle between the second reflection structure and the plane where the main optical axis control element is located is theta:
Figure FDA0002362695390000033
wherein the content of the first and second substances,
Figure FDA0002362695390000034
a normal vector representing a plane in which the main optical axis control element is located; p1Is the coordinate, P, of the location of the projection source2As coordinates of said observation range, M0(x0,y0,z0) Is the coordinate of a known point on the intersection line of the second reflecting structure and the plane of the main optical axis control element,
Figure FDA0002362695390000041
indicating that the second reflecting structure is at point M0A normal vector of (d);
the point M (x, y, z) on the intersection line of the second reflecting structure and the plane of the main optical axis control element satisfies the following equation:
Figure FDA0002362695390000042
wherein the content of the first and second substances,
Figure FDA0002362695390000043
representing a normal vector of the second reflecting structure at point M.
8. A heads-up display system as claimed in claim 1, wherein the driver assistance controller is configured to determine a prompt location, the prompt location being a location on the reflective device at which the prompt is displayed;
determining an imaging area containing the prompt position, taking an image source corresponding to the imaging area as a target image source, and indicating the target image source to display the prompt content at the prompt position; the imaging area is an area where imaging light can be incident on the surface of the reflecting device.
9. Head-up display system according to claim 8,
when the projection position at which an external object is projected onto the reflecting device is located within the enlarged imaging area, taking the external object as a target object, and determining a target distance to the target object; the enlarged imaging area is an area where imaging light rays emitted by the image source group can be incident on the surface of the reflecting device;
taking the projection position or the edge of the projection position in the enlarged imaging area as a prompt position;
respectively determining the image distance of each image source in the image source group, taking the image distance matched with the target distance as a target image distance, and taking the image source corresponding to the target image distance as a target image source; the image distance is the distance between the virtual image of the image source and the curved mirror formed by the curved mirror.
10. The head-up display system according to claim 8, wherein the external object is taken as a target object when a projection position at which an external object is projected onto the reflecting device is located within a projection imaging area; the projection imaging area is an area where projection imaging light rays emitted by the projection image source can be incident on the surface of the reflecting device;
and taking the projection position or the edge of the projection position in the projection imaging area as a prompt position.
11. A heads up display system as claimed in any one of claims 1, 8, 9 and 10, further comprising an information acquisition device communicatively coupled to the auxiliary driving controller;
the information acquisition device is used for acquiring current driving information and environmental information and sending the acquired driving information and environmental information to the auxiliary driving controller;
the driving assistance controller is further configured to: and acquiring the driving information and the environment information, and generating the prompt content according to the driving information and the environment information.
12. The head-up display system as claimed in claim 11, wherein if the driving information includes local vehicle speed information and the environment information includes a current distance to an external object, the generating the prompt content according to the driving information and the environment information by the assistant driving controller includes:
determining a current safe distance according to the local vehicle speed information, and judging whether the current distance is greater than the safe distance;
when the current distance is not greater than the safe distance, determining that it is currently in an alarm state, and taking corresponding alarm information as the prompt content, wherein the alarm information comprises one or more of alarm text, an alarm image and an alarm video;
and when the current distance is greater than the safe distance, determining that it is currently in a normal state, and taking corresponding prompt information as the prompt content, wherein the prompt information comprises one or more of an empty set, prompt text, a prompt image and a prompt video.
13. The heads up display system of claim 12, wherein the safe distance includes one or more of a front safe distance, a rear safe distance, a side safe distance;
if the external object is positioned in front, determining that the external object is in an alarm state currently when the current distance is not greater than the front safety distance;
if the external object is located behind, determining that the external object is in an alarm state currently when the current distance is not greater than the rear safety distance;
and if the external object is positioned on the side, determining that the external object is in an alarm state currently when the current distance is not greater than the side safety distance.
14. The head-up display system according to claim 11, wherein the environment information includes a location of an external object and a current distance to the external object;
taking the position of the external object and the current distance to the external object as the prompt content;
or determining a projection position of the external object projected onto the reflecting device, taking the projection position or the edge of the projection position as a prompt position, and indicating the target image source to display preset prompt content at the prompt position.
15. The head-up display system according to claim 14, wherein when the external object is a special object, it is determined that it is currently in an alarm state, and corresponding alarm information including one or more of an alarm text, an alarm image, an alarm video is used as the prompt content;
when the external object is a pedestrian, an animal or a non-motor vehicle and is located in the current driving route, taking the external object as a special object;
or when the external object is a pedestrian, an animal or a non-motor vehicle and moves towards the current driving route, taking the external object as a special object;
or when the external object is a pedestrian, an animal or a non-motor vehicle and the current distance to the external object is less than a preset distance value, taking the external object as a special object;
or when the external object is a pedestrian, an animal or a non-motor vehicle and is currently located in an object-dense area, taking the external object as a special object; the object-dense area comprises one or more of a school, a hospital, a parking lot and an urban area;
or the driving information comprises sight line direction information of the driver; and when the external object is a pedestrian, an animal or a non-motor vehicle and the sight line direction information is not matched with the position of the external object, taking the external object as a special object.
16. The heads up display system of claim 12, wherein the safe distance comprises a front safe distance;
if the external object is located in front, when the current distance is not greater than the front safety distance, the difference between the safety distance and the current distance is greater than a preset distance difference value and/or the time length in the alarm state exceeds a preset time length, generating a deceleration instruction or a braking instruction, and sending the deceleration instruction or the braking instruction to an external driving system.
17. The heads-up display system of claim 11, wherein the driving information includes vehicle location information, the environmental information includes lane location information; the generating of the prompt content by the driving assistance controller according to the driving information and the environmental information includes:
determining an offset parameter of the vehicle deviating from the current driving lane according to the vehicle position information and the lane position information, and judging whether the offset parameter is greater than a corresponding offset threshold value; the offset parameter comprises an offset distance and/or an offset angle;
and when the offset parameter is larger than the corresponding offset threshold value, determining that the vehicle is in an alarm state at present, and taking corresponding alarm information as the prompt content, wherein the alarm information comprises one or more of alarm characters, alarm images, alarm videos and preferential driving lanes.
18. The heads-up display system of claim 17, wherein when the offset parameter is not greater than the respective offset threshold, determining that the vehicle is currently in a normal state and using a respective prompt message as the prompt, the prompt message including one or more of an empty set, a prompt text, a prompt image, a prompt video, and a priority driving lane.
19. The heads up display system of claim 17, wherein the driving information further includes vehicle status information including one or more of vehicle speed, vehicle acceleration, turn signal status;
if the offset parameter is greater than the corresponding offset threshold, and the vehicle speed is greater than a first preset speed value or the vehicle acceleration is not less than zero, determining that the vehicle is currently in an alarm state;
if the offset parameter is greater than the corresponding offset threshold, and the turn signal lamp on the same side as the direction corresponding to the offset angle of the vehicle is not in an on state, determining that the vehicle is currently in an alarm state;
if the offset parameter is greater than the corresponding offset threshold and the vehicle is currently in a state in which lane changing is not permitted, determining that the vehicle is currently in an alarm state;
and if the offset parameter is greater than the corresponding offset threshold and the lane departure duration is greater than a preset first departure duration threshold, determining that the vehicle is currently in an alarm state.
20. The heads-up display system of claim 17, wherein the driving information further includes vehicle status information including one or more of vehicle speed, vehicle acceleration, turn light status, dual blinker status, yaw rate;
if the offset parameter is larger than the corresponding offset threshold value, and when the vehicle speed is smaller than a second preset speed value or the vehicle acceleration is smaller than zero, determining that the vehicle is in a normal state currently;
if the offset parameter is greater than the corresponding offset threshold, and the turn signal lamp on the same side as the direction corresponding to the offset angle of the vehicle is in an on state, determining that the vehicle is currently in a normal state;
if the offset parameter is greater than the corresponding offset threshold and the double-flashing signal lamp is in an on state, determining that the vehicle is currently in a normal state;
if the offset parameter is greater than the corresponding offset threshold and the yaw rate is greater than a preset angular velocity threshold, determining that the vehicle is currently in a normal state;
if the offset parameter is greater than the corresponding offset threshold and the lane departure duration is less than a preset second departure duration threshold, determining that the vehicle is currently in a normal state;
and if the offset parameter is greater than the corresponding offset threshold and the driving information comprises sight line direction information of the driver, determining that the vehicle is currently in a normal state when the sight line direction information is the same as the direction corresponding to the offset angle.
21. The heads-up display system of claim 17, wherein the driver assistance controller is further configured to:
determining a priority driving lane of the vehicle according to the vehicle position information and the lane position information, and taking the priority driving lane as a target object;
determining a projection position of the target object projected onto the reflecting device, taking the projection position or the edge of the projection position as a prompt position, and indicating the target image source to display preset prompt content at the prompt position.
22. Head-up display system according to claim 17, characterized in that a deceleration or braking command is generated and sent to an external driving system when the offset parameter is greater than the respective offset threshold value, and the difference between the offset parameter and the offset threshold value is greater than a preset offset difference value and/or the time duration in the offset state exceeds a preset safety offset time duration.
23. Head-up display system according to any of claims 12-22,
when currently in the warning state, the driving assistance controller instructs the target image source to display the prompt content in a normal manner or in a first highlighting manner, wherein the first highlighting manner comprises one or more of scrolling display, jumping display, flashing display, highlighted display and display in a first color;
and when currently in the normal state, the driving assistance controller instructs the target image source to display the prompt content in a normal manner or in a second highlighting manner, wherein the second highlighting manner comprises display in a second color.
24. The heads-up display system of any of claims 12-22, wherein the driver assistance controller, when currently in the warning state, is further configured to:
sending an alarm voice to a sound generating device and indicating the sound generating device to play the alarm voice;
or sending a vibration instruction to a vibration device to indicate the vibration device to vibrate; the vibration device is a device that can be contacted by a user.
25. The head-up display system according to claim 11, wherein the environmental information includes road abnormality information, the road abnormality information including one or more of an obstacle position, a maintenance road section position, a dangerous road section position, an uneven road section position, an accident road section position and a temporary inspection road section position;
taking the road abnormal information as prompt content;
or determining a projection position of an abnormal position projected onto the reflecting device according to the road abnormal information, taking the projection position or the edge of the projection position as a prompt position, and indicating the target image source to display prompt contents corresponding to the road abnormal information at the prompt position.
26. The heads-up display system of claim 11, wherein the environmental information includes visibility information; when the visibility information is smaller than a preset visibility threshold value, acquiring position information of an external object acquired by an information acquisition device;
taking the position information of the external object as prompt content;
or determining a projection position of the external object projected onto the reflecting device, taking the projection position or the edge of the projection position as a prompt position, and indicating the target image source to display preset prompt content at the prompt position.
27. The heads-up display system of claim 11, wherein the driver assistance controller is further configured to:
and generating sharing information according to the prompt content, and sending the sharing information to a server or other vehicles within a preset distance.
28. Head-up display system according to claim 11, characterised in that the information acquisition means comprise one or more of an image acquisition device, an on-board radar, an infrared sensor, a laser sensor, an ultrasonic sensor, a speed sensor, a rotational speed sensor, an angular velocity sensor, GPS, a V2X system, ADAS.
CN202010026607.2A 2020-01-10 2020-01-10 Head-up display system Active CN113126293B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010026607.2A CN113126293B (en) 2020-01-10 2020-01-10 Head-up display system
PCT/CN2021/070945 WO2021139792A1 (en) 2020-01-10 2021-01-08 Head-up display system and control method therefor, and means of transport

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010026607.2A CN113126293B (en) 2020-01-10 2020-01-10 Head-up display system

Publications (2)

Publication Number Publication Date
CN113126293A true CN113126293A (en) 2021-07-16
CN113126293B CN113126293B (en) 2023-07-21

Family

ID=76771501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010026607.2A Active CN113126293B (en) 2020-01-10 2020-01-10 Head-up display system

Country Status (2)

Country Link
CN (1) CN113126293B (en)
WO (1) WO2021139792A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107076991A (en) * 2014-07-22 2017-08-18 诺迪公司 Compact HUD system
CN207557584U (en) * 2017-11-22 2018-06-29 苏州车萝卜汽车电子科技有限公司 Augmented reality head-up display device
CN110554497A (en) * 2018-05-31 2019-12-10 东莞创奕电子科技有限公司 Display device and vehicle head-up display system thereof

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8395529B2 (en) * 2009-04-02 2013-03-12 GM Global Technology Operations LLC Traffic infrastructure indicator on head-up display
US20130033650A1 (en) * 2011-08-02 2013-02-07 3M Innovative Properties Company Display system and method for projection onto multiple surfaces
JP2015034919A (en) * 2013-08-09 2015-02-19 株式会社デンソー Information display device
TWI617841B (en) * 2014-10-22 2018-03-11 英特爾股份有限公司 Anti-moire pattern diffuser for optical systems
JP2017125886A (en) * 2016-01-12 2017-07-20 富士フイルム株式会社 Head-up display device
KR20180093583A (en) * 2017-02-14 2018-08-22 현대모비스 주식회사 Head up display apparatus having multi display field capable of individual control and display control method for head up dispaly apparatus
CN209381917U (en) * 2018-11-30 2019-09-13 深圳点石创新科技有限公司 A kind of head up display and automobile

Also Published As

Publication number Publication date
CN113126293B (en) 2023-07-21
WO2021139792A1 (en) 2021-07-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant