CN115620545A - Augmented reality method and device for driving assistance - Google Patents

Augmented reality method and device for driving assistance

Info

Publication number
CN115620545A (application number CN202211358721.0A)
Authority
CN
China
Prior art keywords
information
virtual
dimensional display
road
display information
Prior art date
Legal status
Pending
Application number
CN202211358721.0A
Other languages
Chinese (zh)
Inventor
王淳
李炜明
刘志花
董杰
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Priority to CN202211358721.0A
Publication of CN115620545A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • G08G 1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G 1/096708 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information might be used to generate an automatic action on the vehicle control
    • G08G 1/096716 Systems involving transmission of highway information, e.g. weather, speed limits, where the received information does not generate an automatic action on the vehicle control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides an augmented reality method and device for driving assistance, applied in the technical field of augmented reality. The method comprises the following steps: determining driving assistance information based on information acquired during driving, and then displaying virtual three-dimensional display information corresponding to the driving assistance information.

Description

Augmented reality method and device for driving assistance
This application is a divisional application of the invention patent application with application number 201710737404.2, filed on August 24, 2017 and entitled "Augmented reality method and device for driving assistance".
Technical Field
The present invention relates to the technical field of augmented reality, and in particular to an augmented reality method and device for driving assistance.
Background
AR (Augmented Reality) technology superimposes virtual objects and/or virtual information onto a real scene, so that a user obtains a sensory experience beyond reality; that is, the user perceives a scene in which real objects coexist with virtual objects and/or virtual information.
When driving a vehicle, complex road conditions and the driver's own limitations mean that the driver cannot fully grasp all driving information, and accidents may occur as a result. Applying AR technology to vehicle driving can help the driver better grasp driving information, drive the motor vehicle more safely, and reduce accidents. How to apply AR technology while the driver is driving has therefore become a key issue.
Disclosure of Invention
In order to overcome the above technical problems, or at least partially solve them, the following technical solutions are proposed:
According to one aspect, an embodiment of the present invention provides an augmented reality method for driving assistance, comprising:
determining driving assistance information based on information acquired during driving;
and displaying virtual three-dimensional display information corresponding to the driving assistance information.
According to another aspect, an embodiment of the present invention further provides an augmented reality apparatus for driving assistance, comprising:
a determining module, configured to determine driving assistance information based on information acquired during driving;
and a display module, configured to display the virtual three-dimensional display information corresponding to the driving assistance information determined by the determining module.
Compared with the prior art, the augmented reality method and device for driving assistance provided by the invention determine driving assistance information based on information acquired during driving, and then display the corresponding virtual three-dimensional display information. That is, the method and device determine the driving assistance information from information acquired while the vehicle is driving, and present the corresponding virtual three-dimensional display information to the driver visually and/or audibly, to inform or warn the driver. By applying augmented reality technology during driving, the driver can be helped to better grasp the driving information, which further improves the user experience.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a method for augmented reality for driving assistance according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of determining road information when a road surface is not completely covered according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of determining road information when the road surface is completely covered and the center lane isolation barrier is visible in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a road covered completely and without distinguishing the road surface from the road edges according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of determining road information when the road surface is completely covered and the center lane isolation barrier is not visible in an embodiment of the present invention;
FIG. 6 is a schematic view showing ruts in the embodiment of the present invention;
FIG. 7 is a schematic diagram of enhanced rutting when rutting is unclear in an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a relationship between AR information and a driver's gaze in an embodiment of the present invention;
FIG. 9 is a schematic view of a complete traffic sign/indicator shown when the traffic sign/indicator is partially or fully covered in accordance with an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating an embodiment of determining a traffic sign and/or an indicator corresponding to a current location according to a history;
FIG. 11 is a schematic illustration of the determination of the extended area of the side rearview mirror in an embodiment of the present invention;
FIG. 12 is a schematic view of the viewing area of the physical side rearview mirror and the extended area of the side rearview mirror in an embodiment of the present invention;
FIG. 13 is a schematic view of the extended area of the interior rearview mirror in an embodiment of the present invention;
FIG. 14 is a schematic diagram illustrating a virtual traffic light in accordance with an embodiment of the present invention;
FIG. 15 is a schematic diagram of displaying corresponding AR information according to a traffic police gesture in an embodiment of the present invention;
FIG. 16 is a schematic diagram illustrating an AR information display manner of keys required for operating a keyboard according to an embodiment of the present invention;
fig. 17 is a schematic diagram of displaying a region suitable for parking and a region not suitable for parking and displaying corresponding augmented reality assistant driving display information according to an embodiment of the present invention;
FIG. 18 is a diagram illustrating an embodiment of a device pre-preparing and rendering a larger range of AR information to reduce latency;
FIG. 19 is a schematic diagram of different driving area display modes due to different vehicle speeds in the embodiment of the present invention;
FIG. 20 is a schematic diagram illustrating a display manner of multiple AR messages when multiple AR messages need to be displayed simultaneously according to an embodiment of the present invention;
FIG. 21 is a schematic diagram of the device displaying AR information on the right side when the statistics of the driver's gaze show that the driver's attention is biased to the left side in the embodiment of the present invention;
fig. 22 is a schematic structural diagram of an augmented reality device for driving assistance in an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As used herein, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, a "terminal" as used herein includes both devices having only a wireless signal receiver without transmit capability, and devices having receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device with or without a multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. As used herein, a "terminal device" may also be a communication terminal, a web terminal, or a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a smart TV, a set-top box, etc.
ADAS (Advanced Driver Assistance System) aims to help drivers drive motor vehicles safely and reduce accidents, and can inform or warn drivers by means of visual, auditory or tactile feedback based on road conditions, and may include but is not limited to lane departure warning, lane keeping systems, etc.
Existing ADAS mainly provides driver assistance under good road conditions, but lacks effective solutions for challenging environments such as snow and mud.
Existing advanced driver assistance systems that adopt augmented reality technology usually use a single vehicle-mounted screen; the screen is small, the type of displayed information is fixed, the presentation feels unnatural to the driver, and the delay is large, so the driver cannot be effectively helped in challenging driving scenes. Therefore, how to adaptively select the objects/information to be displayed for the driver, how to present them in a natural manner, how to display information with low delay, and how to display multiple objects/pieces of information at the same time are problems to be solved.
For challenging environments such as snow-covered and muddy road surfaces, embodiments of the present invention may estimate at least one of the lane range, lane line positions, road edge line positions, road traffic signs, and non-road-surface traffic signs by perceiving the road environment and/or road map information, and generate the corresponding AR information (i.e., virtual three-dimensional display information).
To perceive the road environment, the device may use at least one of the following: sensing with at least one sensor carried by the device; sensing with at least one sensor carried by the host vehicle; acquiring information from similar devices, dissimilar devices, and other vehicles via communication means; and obtaining information using GPS (Global Positioning System).
Further, the perceivable area of the device may be the union of the sensing ranges of the various ways described above.
For embodiments of the present invention, the AR information (i.e., the virtual three-dimensional display information) may include, but is not limited to, at least one of: AR objects, AR text, AR pictures, and AR animations. The embodiments of the present invention are not limited thereto.
To adaptively select the AR information to be displayed, embodiments of the present invention adaptively determine whether one or more kinds of AR information need to be displayed, by at least one of perceiving the road environment, the state of the vehicle, and the intention of the driver, and generate the corresponding content.
To present the AR information in a natural manner, the AR information is displayed at a physically correct position (e.g., with correct position, posture, scale, and occlusion relative to the corresponding real object), and/or at a position the driver is accustomed to (i.e., a position where the driver does not need to change driving habits).
Embodiments of the invention may be used in conjunction with head-mounted display devices (e.g., 3D augmented reality/mixed reality glasses), and/or vehicle-mounted display devices (e.g., 3D heads-up displays) placed on a vehicle. In particular, by using a head-mounted display device, the apparatus can expand the AR information display space into the entire three-dimensional space.
To reduce delay, the apparatus and method of the embodiments of the present invention adaptively reduce two kinds of delay: attention delay and display delay. Attention delay is defined as the time from when the device displays the AR information to when the driver notices it; display delay is defined as the time the device spends generating, rendering, and displaying the AR information.
Fig. 1 is a flowchart illustrating a method for augmented reality for driving assistance according to an embodiment of the present invention.
Step 101: determining driving assistance information based on information acquired during driving; and step 102: displaying virtual three-dimensional display information corresponding to the driving assistance information.
Further, step 101 includes step 1011, and step 102 includes step 1021.
Step 1011: determining occluded driving assistance information based on information of the perceptible area acquired during driving; and step 1021: displaying virtual three-dimensional display information corresponding to the occluded driving assistance information.
The occluded driving assistance information comprises at least one of: road surface road information, non-road-surface traffic sign information, and blind area information.
The road surface road information includes at least one of: lanes, lane lines, road edge lines, road traffic signs, and road traffic markings.
The non-road-surface traffic sign information comprises at least one of: roadside traffic signs and traffic signs above the road.
The blind area information includes information in the blind zone of a rearview mirror.
The traffic signs comprise at least one of: warning signs, prohibition signs, indication signs, road indication signs, travel zone signs, operation zone signs, auxiliary signs, and notification signs.
The traffic markings comprise at least one of: indication markings, prohibition markings, and warning markings.
Further, when the occluded driving assistance information includes road surface road information and/or non-road-surface traffic sign information, displaying the virtual three-dimensional display information corresponding to the occluded driving assistance information comprises: displaying the virtual three-dimensional display information corresponding to the occluded driving assistance information at the position of the occluded driving assistance information.
Further, determining the occluded driving assistance information based on the information of the perceptible area acquired during driving can be implemented in at least one of the following manners: if the driving assistance information is only partially occluded, determining it from its perceptible part; determining it based on the position of the current vehicle and reference object information in the perceptible area during the current drive; determining it based on multimedia information of the occluded driving assistance information acquired from angles other than the driver's viewing angle; enhancing and/or restoring multimedia information of the occluded driving assistance information in the perceptible area acquired during driving, and determining the occluded driving assistance information from the result; when the occluded driving assistance information includes road surface road information, aligning the current road with a map of the current road and determining the occluded driving assistance information from the map; and determining the currently occluded driving assistance information from other driving assistance information.
Further, after determining the occluded driving assistance information based on the information of the perceptible area acquired during driving, the method further comprises: correcting the determined occluded driving assistance information.
Displaying the virtual three-dimensional display information corresponding to the occluded driving assistance information then comprises: displaying, at the corrected position, the virtual three-dimensional display information corresponding to the corrected driving assistance information.
Further, correcting the determined occluded driving assistance information may be implemented in at least one of the following ways: when the occluded driving assistance information includes lane-related information, correcting its position based on the driving tracks of other vehicles within a preset range of the current vehicle and/or road rut information; and when the occluded driving assistance information includes road surface road information, aligning the current road with a map of the current road and correcting the position of the occluded driving assistance information according to the map.
Further, when the occluded driving assistance information includes lane-related information, the displayed lane width is smaller than the actual lane width.
The lane-related information comprises at least one of: a lane, a lane line, a road edge line, a road traffic sign, and a road traffic marking.
Further, when the occluded driving assistance information includes blind area information, displaying the virtual three-dimensional display information corresponding to the occluded driving assistance information comprises: displaying the virtual three-dimensional display information corresponding to the blind area information in an extended area of the rearview mirror.
When the rearview mirror is a side rearview mirror, the virtual three-dimensional display information displayed in the extended area is generated from the real object it corresponds to, according to the mirror surface attributes of the side rearview mirror and the driver's point of sight.
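Conceptually, generating content for the mirror's extended area is mirror-reflection geometry. The following minimal Python sketch (not from the patent text; the planar-mirror model and all names are illustrative assumptions) reflects a real object's position across the side-mirror plane and intersects the eye-to-image ray with that plane to find where the virtual content should be drawn:

```python
import numpy as np

def reflect_across_mirror(p_world, mirror_point, mirror_normal):
    """Reflect a 3D point across the side-mirror plane.

    p_world: object position; mirror_point: any point on the mirror plane;
    mirror_normal: normal of the mirror surface. All in one common frame.
    """
    n = mirror_normal / np.linalg.norm(mirror_normal)
    d = np.dot(p_world - mirror_point, n)
    return p_world - 2.0 * d * n  # mirror image of the object

def virtual_image_for_driver(p_world, eye, mirror_point, mirror_normal):
    # The driver perceives the object at its mirror image; the ray from the
    # eye to that image gives the point (on the mirror plane or its virtual
    # extension beyond the physical glass) where the AR content is drawn.
    image = reflect_across_mirror(p_world, mirror_point, mirror_normal)
    ray = image - eye
    n = mirror_normal / np.linalg.norm(mirror_normal)
    t = np.dot(mirror_point - eye, n) / np.dot(ray, n)
    return eye + t * ray  # draw point, possibly outside the physical mirror
```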
Further, step 101 comprises: acquiring the traffic rules and/or traffic police action information of the current road section, and converting the presentation mode of the determined traffic rules and/or traffic police action information; step 102 comprises: displaying the converted virtual three-dimensional display information corresponding to the traffic rules and/or traffic police action information of the current road section.
Further, displaying the virtual three-dimensional display information corresponding to the driving assistance information may be implemented in at least one of the following manners: when abnormal rut information is perceived, displaying, in the area concerned, virtual three-dimensional display information corresponding to the determined abnormal rut area and/or virtual three-dimensional display information of warning information for that area; when the traffic signs of a road area the current vehicle has already driven through need to be displayed, displaying the acquired virtual three-dimensional display information corresponding to those traffic signs; when traffic signs and/or traffic lights are perceived at the intersection where the current vehicle is located and they meet a preset display condition, displaying virtual enhanced display information corresponding to the traffic signs and/or traffic lights at the intersection; when key information on the operation console needs to be displayed, displaying virtual three-dimensional display information corresponding to at least one of: the positions of the keys, the function names of the keys, the operation instructions of the keys, and the keys themselves; and when parking area information needs to be displayed, displaying virtual three-dimensional display information corresponding to at least one of: areas where parking is allowed and suitable, areas where parking is allowed but unsuitable, and areas where parking is not allowed.
The perceiving may include at least one of identification and detection by the device, which is not described in detail here.
Abnormal rut information refers to the tracks left by a vehicle in a driving state that meet the abnormal-rut judgment condition, and an abnormal rut area is an area containing abnormal rut information.
The abnormal rut information comprises at least one of the following: rut information in which the direction of a rut edge line is inconsistent with the direction of the lane line and/or road edge line; rut information in which the direction of a rut edge line is inconsistent with the direction of the overall rut edge lines; and rut information containing brake marks.
Whether a rut edge line is abnormal can be judged as follows: judging whether the vector field constructed from the rut edge line at a certain moment differs significantly in direction from the vector field constructed from the other/overall rut edge lines, and if the difference is significant, determining that the rut edge line is abnormal; and/or judging whether the rut edge line contains obvious brake marks, and if so, determining that the rut edge line is abnormal.
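As an illustration of the vector-field comparison just described, here is a minimal Python sketch (not part of the patent text; the polyline representation of rut edges and the 25-degree threshold are assumptions) that flags a rut edge line whose local directions deviate significantly from a reference direction such as the lane line or the overall rut direction:

```python
import numpy as np

def tangent_directions(polyline):
    """Unit tangent vectors along a rut edge polyline of shape (N, 2)."""
    d = np.diff(polyline, axis=0)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def is_abnormal_rut(rut_polyline, reference_dir, angle_thresh_deg=25.0):
    """Flag a rut edge whose direction field deviates from a reference
    direction (lane line / overall rut direction) by more than a threshold.

    reference_dir: unit 2D vector; angle_thresh_deg is an assumed tuning value.
    """
    t = tangent_directions(rut_polyline)
    # Use |cos| so the sign of travel direction along the rut does not matter.
    cos = np.abs(t @ reference_dir)
    mean_angle = np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
    return mean_angle > angle_thresh_deg
```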
The preset display condition includes at least one of: the traffic sign and/or traffic light being damaged, being occluded, not being fully within the driver's current visual range, and a driver instruction.
Further, determining the driving assistance information based on the information acquired during driving can be implemented in at least one of the following manners: determining whether the road rut information contains abnormal rut information, and if so, determining that an abnormal rut area exists; when the traffic signs of a road area the current vehicle has driven through need to be displayed, determining those traffic signs from the acquired multimedia information and/or a traffic sign database; and when parking area information needs to be displayed, determining at least one of the areas where parking is allowed and suitable, allowed but unsuitable, and not allowed, according to at least one of whether a no-parking sign exists in the area around the current vehicle, the size of the current vehicle, and the current road surface conditions.
Further, step 102 comprises: enhancing the virtual three-dimensional display information corresponding to the rut information.
Further, displaying the acquired virtual three-dimensional display information corresponding to the traffic signs of the road area the current vehicle has driven through includes: adjusting that virtual three-dimensional display information according to the current position of the vehicle, and displaying the adjusted virtual three-dimensional display information.
Further, step 102 comprises: determining a display mode corresponding to the virtual three-dimensional display information; and displaying virtual three-dimensional display information corresponding to the driving auxiliary information based on the determined display mode.
The display mode comprises at least one of the following: the display position of the virtual three-dimensional display information; its display posture; its display size; its display start time; its display end time; its display duration; the detail level of its displayed content; its presentation mode; and the display interrelationship among multiple pieces of virtual three-dimensional display information.
The presentation mode comprises at least one of: a text mode, an icon mode, an animation mode, a sound effect mode, a lighting mode, and a vibration mode.
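For illustration only, the display-mode attributes enumerated above could be bundled into a single structure. The following Python sketch is a hypothetical container; every field name is an assumption rather than something the patent specifies:

```python
from dataclasses import dataclass, field
from enum import Flag, auto
from typing import Optional, Tuple

class Presentation(Flag):
    TEXT = auto(); ICON = auto(); ANIMATION = auto()
    SOUND = auto(); LIGHT = auto(); VIBRATION = auto()

@dataclass
class DisplayMode:
    position: Tuple[float, float, float]       # where the AR item is drawn
    pose: Tuple[float, float, float]           # display posture (e.g. Euler angles)
    size: float                                # display size / scale
    start_time: Optional[float] = None         # display start time (s)
    end_time: Optional[float] = None           # display end time (s)
    duration: Optional[float] = None           # display duration (s)
    detail_level: int = 1                      # detail of displayed content
    presentation: Presentation = Presentation.ICON
    related_items: list = field(default_factory=list)  # interrelated AR items
```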
Further, the method also comprises at least one of the following: when multiple pieces of virtual three-dimensional display information need to be displayed at the same time, combining them and displaying the processed virtual three-dimensional display information; and when multiple pieces of virtual three-dimensional display information need to be displayed at the same time, integrating them based on semantics and displaying the processed virtual three-dimensional display information.
Further, the method also comprises at least one of the following: displaying virtual three-dimensional display information corresponding to driving assistance information with a priority higher than a first preset priority at a prominent position in the driver's current visual field, and adjusting that position in real time according to the driver's gaze; and, while displaying virtual three-dimensional display information corresponding to driving assistance information with a priority higher than the first preset priority, pausing and/or stopping the display of virtual three-dimensional display information corresponding to driving assistance information with a priority lower than a second preset priority.
The prominent position may be at least one of: the central region of the driver's current visual field, the region where the driver's gaze focuses, a region where the driver's gaze dwells for a long time, and the region directly ahead of the driver.
The first preset priority and/or the second preset priority can be defined according to the driver's instructions, or adaptively defined according to at least one of the perceived road conditions, the vehicle conditions, the driver's intention, and semantic analysis of the driving assistance information.
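A minimal sketch of the two-threshold priority policy described above, assuming each AR item carries a numeric .priority attribute (the attribute name, the strict comparisons, and the three-way split are assumptions):

```python
def schedule_ar_items(items, first_priority, second_priority):
    """Split AR items by priority: items above the first preset priority go
    to a prominent, gaze-tracked position; items below the second preset
    priority are paused; the rest are shown normally.

    items: iterable of objects with a .priority field (higher = more urgent).
    Returns (prominent, normal, paused) lists.
    """
    prominent, normal, paused = [], [], []
    for it in items:
        if it.priority > first_priority:
            prominent.append(it)   # shown at a salient spot, tracked to gaze
        elif it.priority < second_priority:
            paused.append(it)      # display paused/stopped
        else:
            normal.append(it)      # shown at its default position
    return prominent, normal, paused
```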
Further, step 102 comprises: determining at least one of the display start time, display end time, and display duration of the virtual three-dimensional display information according to at least one of the current state of the vehicle, the current road condition information, and the system delay of the device; and displaying the virtual three-dimensional display information corresponding to the driving assistance information according to the determined timing.
Further, when multiple pieces of virtual three-dimensional display information need to be displayed at the same time and an occlusion relationship exists among them, at least one of the following may also be used: according to the positional relationship among the occluding pieces, displaying only the parts that are not occluded; displaying the occluding pieces at different display times; and adjusting at least one of the display position, content detail level, and presentation mode of at least one of the occluding pieces, and displaying each piece according to the adjustment.
Further, step 102 comprises: displaying the virtual three-dimensional display information to be displayed at a preset display position.
The preset display position comprises at least one of the following:
a display position aligned with the real driving assistance information; a position that does not interfere with the driver; a prominent position in the driver's current field of view; a relatively open region of the driver's view; and a position where the driver's attention is insufficient.
Further, the method further comprises: pre-rendering the virtual three-dimensional display information to be displayed; when a preset display trigger condition is met, fetching the virtual three-dimensional display information to be displayed from the pre-rendered information, adjusting its presentation mode according to the current environment, and displaying it in the adjusted presentation mode; and adjusting the display mode of the virtual three-dimensional display information in real time according to the current environment, and displaying it in the adjusted display mode.
The preset display trigger condition can be defined according to the driver's instructions, or adaptively defined according to at least one of the perceived road conditions, the vehicle conditions, the driver's intention, and semantic analysis of the driving assistance information.
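A minimal sketch of the pre-render-then-adapt pipeline described above; the renderer and display interfaces (render, adapt, show) are assumed placeholders, not an actual API:

```python
class ARPrerenderCache:
    """Content for a larger area than currently visible is rendered ahead of
    time; on a trigger, the cached item is fetched and only its presentation
    (e.g. brightness, pose) is adapted before display, cutting display delay.
    """

    def __init__(self, renderer):
        self.renderer = renderer
        self.cache = {}

    def prerender(self, item_id, content):
        # Render ahead of time so later display is just a cache lookup.
        self.cache[item_id] = self.renderer.render(content)

    def display(self, item_id, environment, display_device):
        rendered = self.cache.get(item_id)
        if rendered is None:          # cache miss: nothing prepared
            return None
        adapted = self.renderer.adapt(rendered, environment)
        display_device.show(adapted)
        return adapted
```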
Compared with the prior art, the embodiment of the invention determines driving assistance information based on information acquired during driving and then displays the corresponding virtual three-dimensional display information. That is, the driving assistance information is determined from information acquired while the vehicle is driving, and the corresponding virtual three-dimensional display information is presented to the driver visually and/or audibly to inform or warn the driver. By applying augmented reality technology during driving, the driver can be helped to better grasp the driving information, and the user experience can be improved.
For the embodiment of the present invention, fig. 1 shows an overall flow diagram of a display mode of a driving assistance device (hereinafter, referred to as a device) described in the present invention, and the method may be applied to an augmented/mixed reality head-mounted display device (near-eye display device) worn by a driver during driving, such as 3D augmented reality glasses, and/or an on-vehicle display device placed on a vehicle, such as a 3D head-up display. Note that the device may include multiple display devices of the same or different types, and the implementation may vary when the display devices are different, as described in more detail below.
In the overall flow diagram of the device according to the embodiment of the present invention, the content executed by each step is summarized as follows: step S110 (not shown): the method comprises the steps that equipment determines one or more pieces of target driving auxiliary information needing to be displayed; step S120 (not shown): acquiring information, processing the information and generating target driving auxiliary information content; step S130 (not shown): determining a display mode of one or more target driving auxiliary information; step S140 (not shown): and displaying the virtual three-dimensional AR information corresponding to the one or more target driving auxiliary information.
Apart from AR objects, AR information is information presented in at least one of the following presentation modes: text, icon, animation, sound effect, light, vibration, and the like, such as an arrow icon accompanied by text. An AR object may include, but is not limited to, information presented in the form of a real object, such as a virtual traffic light.
Hereinafter, although AR information and AR objects are often mentioned together, an AR object in AR information generally refers to a virtual object that needs to be aligned with a real object when displayed (though not all do), while other AR information generally does not need to be aligned with a real object when displayed.
In step S110, the device may select zero pieces of target driving assistance information; that is, no virtual three-dimensional display information corresponding to driving assistance information is displayed in the current scene.
In step S110, the device may adaptively determine the target driving assistance information to be displayed by recognizing the scene, may obtain it through user interaction, or may combine the two.
The target driving assistance information may include, but is not limited to, prompt information associated with driving safety, the driving environment, traffic and road conditions, traffic rules, in-vehicle information, and the like during driving.
Specifically, the target objects in the driving process related to the target driving assistance information may include, but are not limited to: lane lines, lane separation barriers, lanes, surrounding motor vehicles, surrounding non-motor vehicles, surrounding pedestrians, surrounding trees, surrounding buildings, road ruts, traffic and road conditions, traffic police, blind zone objects of side rearview mirrors, objects of rear seats in the vehicle, areas outside the vehicle tail, and a driving console.
In step S130, determining the display mode includes determining at least one of the following: the display position, display posture, display size, display start timing, display end timing, and display duration of one or more pieces of AR information; the level of detail of the displayed content; the presentation mode; and, when multiple pieces of AR information are displayed, their positions and/or postures and the display interrelationships among them.
Example one
This embodiment provides a method for displaying driving assistance information on a road whose surface is partially covered or occluded, comprising at least one of the following: displaying prompt information for lane lines, road edge lines, road traffic signs, road traffic markings, and the like, and displaying the corresponding augmented reality driving assistance information, as shown in fig. 2. The markings on the road may be occluded by, for example, fallen leaves, accumulated snow, standing water, mud, or oil, or a combination thereof.
The method in the embodiment comprises the following steps:
step S1101 (not shown), the apparatus determines whether the road information needs to be displayed.
This step S1101 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the invention, the device determines the road information around the vehicle by means of image detection and recognition; the road information may include, but is not limited to, lane lines, road edge lines, road traffic signs, and road traffic markings. The device can keep detecting and recognizing at all times, can adaptively enable the display function when the road surface is partially covered or occluded, or can start detection, recognition and/or display according to a user instruction. The user instruction may be carried by a gesture, voice, a physical key, a biometric identifier such as a fingerprint, and the like.
Step S1201 (not shown), the device detects and identifies the road information, and generates the target driving assistance information content.
This step S1201 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
For the embodiment of the invention, the device locates and recognizes, from one or more images/videos and by means of image processing and recognition technology, the visible parts (i.e., the parts not completely covered or occluded) of at least one of the lane lines, road edge lines, road traffic signs, and road traffic markings; connects the visible segments of a lane line into a complete lane line (if the lane line is a dashed line, the dashes are filled in); connects the visible segments of a road edge line into a complete road edge line; and identifies the type of a road traffic sign and/or road traffic marking from its visible portion.
Specifically, in a single image, the visible segments of lane lines and road edge lines can be extracted by an image edge extraction algorithm and/or a color clustering algorithm, and the prior knowledge that lane lines and road edge lines are usually regular straight lines or smooth arcs can be used to remove erroneous segments; the outlines of partially occluded road traffic signs and/or road traffic markings can likewise be extracted by an image edge extraction algorithm and/or a color clustering algorithm, and the complete sign and/or marking obtained by matching against a road traffic sign database.
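As one concrete instantiation of the edge-extraction step (the patent names edge extraction and color clustering generically; Canny plus a probabilistic Hough transform is a common choice, used here only as an example, with all thresholds illustrative):

```python
import cv2
import numpy as np

def visible_lane_segments(bgr_image):
    """Extract candidate visible lane-line segments from a road image using
    edge extraction plus a probabilistic Hough transform. A real system
    would also use color clustering and road priors to reject spurious
    segments; here a simple near-horizontal filter stands in for that.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # image edge extraction
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=30, maxLineGap=20)
    if segments is None:
        return []
    keep = []
    for x1, y1, x2, y2 in segments[:, 0]:
        # Prior: lane lines are near-straight and not horizontal in the image.
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 15 < angle < 165:  # drop near-horizontal edges
            keep.append(((x1, y1), (x2, y2)))
    return keep
```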
For the embodiment of the invention, the lane lines, road edge lines, road traffic signs, and road traffic markings can also be directly recognized and located by a detection and recognition algorithm. During recognition, domain knowledge of road traffic and/or a road map can assist the detection and recognition. For example, a white single dashed line and a white single solid line both appear as an irregular white dashed line on a partially covered road surface; based on domain knowledge of road traffic, the device may determine whether a detected white dashed line corresponds to a real white single dashed line or a real white single solid line by checking whether any of its segments exceeds the prescribed dash length. Further, based on domain knowledge of road traffic and the road map, the device may determine which it corresponds to from the detected line's location and traffic meaning.
For example, if the detected white dashed line is located in the middle of the road, then according to domain knowledge of road traffic it should correspond to a real white single solid line.
Further, during recognition, the correct road traffic signs and/or road traffic markings may be generated using domain knowledge of road traffic and/or road maps. For example, a straight-ahead arrow and a right-turn arrow on the road surface both have rectangular bodies, and they cannot be distinguished when the arrows are partially covered; however, if, according to the road map, the lane in which the detected rectangle is located is a right-turn lane, the device can determine that the road traffic marking is a right-turn arrow.
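The dash-length test described above can be stated compactly. A minimal sketch follows, with the 6 m prescribed dash length an assumed, jurisdiction-dependent value:

```python
def classify_white_line(segment_lengths_m, prescribed_dash_length_m=6.0):
    """Distinguish a true dashed line from a partially covered solid line.

    A regulation dashed lane line has dashes no longer than the prescribed
    length; if any visible segment exceeds it, the marking must be a solid
    line whose gaps are just occlusion.
    """
    if any(length > prescribed_dash_length_m for length in segment_lengths_m):
        return "solid"
    return "dashed_or_unknown"  # needs map/traffic-rule context to confirm
```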
Specifically, when multiple images are available at different positions in space and/or at different moments in time, the device may use them to cooperatively recognize at least one of the lane lines, road edge lines, road traffic signs, and road traffic markings, removing errors that arise when a single image is used and keeping the recognition results consistent in space and/or time.
For example, the device may recognize that a lane line corresponds to a real white solid line on a certain section of the road surface, track the same lane line during subsequent driving, and keep recognizing it as a white solid line.
Step S1301 (not shown): determining the display mode of the target driving assistance information content.
The step S1301 may be an implementation manner of determining a display manner of one or more target driving assistance information.
For the embodiment of the invention, the device acquires the position and the posture of the real object relative to the display device through a positioning algorithm, so that the displayed AR information can be aligned with the corresponding real object.
Specifically, to reduce delay, for a given real object the device may use a motion model of the host vehicle to predict, from the current relative position, posture, and/or scale relationship between the real object and the host vehicle, the relative position, posture, and/or scale relationship at a future time, thereby preparing in advance the AR information corresponding to the target driving assistance information.
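A minimal sketch of such a motion-model prediction, assuming a planar constant-velocity, constant-yaw-rate model for the host vehicle and a static real object (the frame conventions, x forward and y left, are assumptions):

```python
import numpy as np

def predict_relative_pose(T_obj_in_vehicle, v_vehicle, yaw_rate, dt):
    """Predict where a static real object will sit relative to the host
    vehicle dt seconds ahead, so AR content can be prepared in advance.

    T_obj_in_vehicle: 4x4 homogeneous pose of the object in the vehicle frame.
    v_vehicle: forward speed (m/s); yaw_rate: rad/s (positive = left turn).
    """
    dtheta = yaw_rate * dt
    # Vehicle displacement over dt in its own frame (circular-arc model).
    if abs(yaw_rate) < 1e-6:
        dx, dy = v_vehicle * dt, 0.0
    else:
        r = v_vehicle / yaw_rate
        dx, dy = r * np.sin(dtheta), r * (1.0 - np.cos(dtheta))
    c, s = np.cos(dtheta), np.sin(dtheta)
    # Transform from the old vehicle frame to the future vehicle frame.
    T_new_from_old = np.array([[ c, s, 0, -(c * dx + s * dy)],
                               [-s, c, 0,  (s * dx - c * dy)],
                               [ 0, 0, 1,  0],
                               [ 0, 0, 0,  1]])
    return T_new_from_old @ T_obj_in_vehicle  # object pose in future frame
```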
Specifically, when a single camera is used to acquire the road information around the vehicle, the local road around the vehicle can be treated as a plane, so suitable feature points can be extracted from the road image and the relative position and posture between the road and the camera obtained by solving a homography matrix. More precisely, the image sequence can undergo feature tracking in the manner of a visual odometer, where the features are taken from the real objects that need to be aligned, such as lane line segments, road edge line segments, and road traffic sign outlines, so as to obtain the relative position and posture of the real objects with respect to the camera. In particular, feature point extraction and tracking can be assisted by image recognition and segmentation to remove false matches and speed up processing.
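A minimal sketch of the homography step (a standard plane pose recovery, not necessarily the patent's exact method); point correspondences, calibrated intrinsics, and a locally planar road are assumed:

```python
import cv2
import numpy as np

def ground_pose_from_homography(pts_road_m, pts_image_px, K):
    """Recover the camera pose relative to the locally planar road.

    pts_road_m: Nx2 road-plane coordinates in meters (the plane is Z=0);
    pts_image_px: Nx2 pixel locations; K: 3x3 camera intrinsics.
    Returns (R, t): pose of the road plane in the camera frame, metric
    because the plane coordinates are metric.
    """
    H, _ = cv2.findHomography(np.asarray(pts_road_m, np.float32),
                              np.asarray(pts_image_px, np.float32),
                              cv2.RANSAC)
    M = np.linalg.inv(K) @ H              # proportional to [r1 r2 t]
    lam = 1.0 / np.linalg.norm(M[:, 0])   # scale factor of the homography
    if lam * M[2, 2] < 0:                 # pick the sign that puts the road
        lam = -lam                        # in front of the camera (t_z > 0)
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)           # re-orthonormalize against noise
    return U @ Vt, t
```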
For the embodiment of the invention, when a single camera is used, the scale information of the real object can be obtained in the following three ways (used independently or in combination): 1) the device can obtain the scale information by calibrating in advance the fixed installation height of the camera on the vehicle; 2) the device can obtain the physical size of the real object from prior knowledge in the road traffic field and then derive the scale information, for example obtaining the prescribed width of the local lane from prior knowledge; 3) when the device uses an image sequence, the scale can be obtained from information such as the actual moving speed and distance of the vehicle.
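A worked example of option 2), recovering metric scale from a prescribed lane width; the 3.5 m figure is an assumed, jurisdiction-dependent example:

```python
def scale_from_known_width(measured_lane_width_units, prescribed_lane_width_m):
    """Metric scale factor from road-traffic prior knowledge: the lane's
    prescribed physical width divided by its width in the up-to-scale
    monocular reconstruction."""
    return prescribed_lane_width_m / measured_lane_width_units

# Illustrative numbers: a lane measuring 1.4 units in the reconstruction
# with a prescribed width of 3.5 m gives a scale of 2.5 m per unit.
scale = scale_from_known_width(1.4, 3.5)  # -> 2.5
```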
For the embodiment of the present invention, when a single camera is used together with at least one of a stereo camera, a depth camera, a laser sensor, a radar sensor, and an ultrasonic sensor to acquire the road information around the vehicle, the relative position and posture between the real object and the camera may be acquired in a manner similar to the single-camera case described above, and the details are not repeated here. In particular, when at least one of a calibrated stereo camera, depth camera, laser sensor, radar sensor, or ultrasonic sensor is used, the scale information of the real object can be obtained directly, and cross-validation can also be performed against the scale estimate obtained in the single-camera case.
For the embodiment of the invention, the scale information can also be obtained by combining other sensors with the single camera: for example, the scale information can be estimated by fusing data from a wheel encoder and the single camera; as another example, it can be estimated by fusing data from an inertial sensor unit (including an accelerometer and a gyroscope) and the single camera; the device may also combine the data of these sensors to obtain the scale information.
For the embodiment of the invention, through the above approaches, the position, posture, and scale relationship between a real object and at least one outward-looking camera (i.e., a camera shooting the environment outside the host vehicle) on the host vehicle or on the device can be obtained.
For embodiments of the present invention, in order to align the AR information with the real object, the device needs to further estimate the relative position, pose, and scale relationship between the eye and the real object. The procedure and manner of estimation are related to the type of display device of the apparatus, and the following describes the case where the display device is a single head-mounted display device or a single in-vehicle display device. When the display device of the device includes a plurality of head-mounted display devices and/or a plurality of vehicle-mounted displays, the following method can be applied by relatively direct combination and adjustment, and is not described again.
1) In the case of a single head-mounted display device, the relative positions and the attitude relationships between the eyes and the display device are relatively fixed, and can be calibrated in advance (occasionally recalibration is needed during the use period, for example, the position and attitude relationship needs to be recalibrated after the user adjusts the position of the head-mounted display device);
1.1) For the case where the position, posture, and scale relationship between the real object and an outward-looking camera on the device (i.e., a camera shooting the environment outside the host vehicle) has already been acquired: since the position, posture, and scale relationship between that outward-looking camera on the device and the display device is relatively fixed, the device can compute the relative position, posture, and scale relationship between the eyes and the real object (eye ← calibration → display device ← calibration → outward camera on the device ← estimation → real object);
1.2) For the case where the position, posture, and scale relationship between the real object and an outward-looking camera on the host vehicle (i.e., a camera shooting the environment outside the host vehicle) has already been acquired, it is still necessary to acquire the relative position and posture relationship between the display device and that outward-looking camera. This differs according to the hardware implementation of the device, giving the following two ways, 1.2.1) and 1.2.2):
1.2.1) The device may use the outward-looking camera on the device to obtain the relative position, posture, and scale relationship between an outward-looking camera on the host vehicle and the outward-looking camera on the device. This can be done by attaching a positioning marker at a position whose relative position, posture, and scale with respect to the vehicle-mounted display device are fixed, and computing the relationship between the outward-looking camera on the device and the outward-looking camera of the host vehicle through the positioning marker; it can also be done by treating the outward-looking camera on the host vehicle as an object in the scene, using feature point extraction and tracking based on images and/or multi-sensor fusion, and/or detection methods, such as SLAM (Simultaneous Localization and Mapping) and target tracking. That is (eye ← calibration → display device ← calibration → outward camera on the device ← estimation → outward camera on the host vehicle ← estimation → real object);
1.2.2) The device may use an inward-looking camera on the host vehicle (i.e., a camera shooting the environment inside the host vehicle, such as the driver's position) to obtain the relative position, posture, and scale relationship between the display device and that inward-looking camera. This can be based on positioning markers on the display device, and/or on feature point extraction and tracking based on images and/or multi-sensor fusion, and/or on detection methods, such as SLAM (Simultaneous Localization and Mapping) and target tracking. The relative position, posture, and scale relationship between the inward-looking camera on the host vehicle and the outward-looking camera on the host vehicle is relatively fixed and can be calibrated in advance (occasional recalibration is needed during use, for example after the vehicle has bumped violently). That is (eye ← calibration → display device ← estimation → inward camera on the host vehicle ← calibration → outward camera on the host vehicle ← estimation → real object);
2) When the display device is a vehicle-mounted display device, the relative position and attitude between the display device and an outward-looking camera on the host vehicle is relatively fixed and can be calibrated in advance (recalibration of the position, attitude, and scale relationship is occasionally needed during use, for example after the vehicle jolts violently). In particular, the relative position-attitude-scale relationship between an outward-looking camera on the device and an outward-looking camera on the host vehicle can also be considered relatively fixed, and the two may even be the same camera, in which case no distinction between on-device and on-vehicle outward-looking cameras is needed. To acquire the relative position, attitude, and scale relationship between the eyes and the real object, the device therefore only needs to acquire the relative position, attitude, and scale relationship between the eyes and the display device. Depending on the hardware implementation of the device, there are the following two ways, 1.3) and 1.4):
1.3) The device may use an outward-looking camera worn by the driver to acquire the relative position-attitude-scale relationship between the vehicle-mounted display device and the head-mounted outward-looking camera. The relative position-attitude-scale relationship between the head-mounted camera and the eyes can be considered relatively fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the user adjusts the position of the head-mounted camera). Acquisition may be done by attaching a positioning marker at a position fixed relative to the vehicle-mounted display device, with the head-mounted camera computing its relative position and attitude to the display device from the marker; it may also be done by treating the vehicle-mounted display device as an object in the scene and applying the above feature-point extraction and tracking based on images and/or fused sensors, and/or detection methods, such as SLAM (Simultaneous Localization and Mapping) and target tracking. That is, (eye ← calibration → head-worn outward camera ← estimation → display device ← calibration → outward camera on host vehicle ← estimation → real object);
1.4) The device may use an inward-looking camera on the host vehicle (i.e., a camera that captures the environment inside the host vehicle, such as the driver's position) to obtain the relative position-attitude-scale relationship between the eyes and that inward-looking camera. Acquisition may be based on positioning markers worn by the driver, where the relative position-attitude-scale relationship between the eyes and the worn markers can be considered relatively fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the user adjusts the position of the head-worn markers); it may also use image-based head/eye/gaze localization and tracking, where the device locates the driver's head in the images or video of the inward-looking camera and derives the eye-to-camera relative position and attitude from the head localization result; or it may use image-based eye localization and tracking, where the device directly locates the relative position and attitude between the eyes and the inward-looking camera from its images or video. The relative position-attitude-scale relationship between the inward-looking camera and the vehicle-mounted display device is relatively fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the vehicle jolts violently). That is, (eye ← estimation → inward camera on host vehicle ← calibration → display device ← calibration → outward camera on host vehicle ← estimation → real object).
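In implementation terms, each of the chains above is a composition of rigid transforms, some calibrated offline and some estimated online. The following minimal Python sketch shows the composition for case 1.2.1); the frame names, the transform convention, and the identity placeholders are illustrative assumptions rather than part of the disclosed design:

```python
import numpy as np

def compose(*transforms):
    """Compose 4x4 homogeneous transforms T_a_b, where T_a_b maps
    points expressed in frame b into frame a, so that
    compose(T_a_b, T_b_c) equals T_a_c."""
    result = np.eye(4)
    for t in transforms:
        result = result @ t
    return result

# Hypothetical links: two calibrated offline, two estimated online.
# Identity matrices are placeholders only.
T_eye_display    = np.eye(4)   # calibrated in advance
T_display_devcam = np.eye(4)   # calibrated in advance
T_devcam_vehcam  = np.eye(4)   # estimated via marker / SLAM / tracking
T_vehcam_object  = np.eye(4)   # estimated per frame

T_eye_object = compose(T_eye_display, T_display_devcam,
                       T_devcam_vehcam, T_vehcam_object)
```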
For the embodiment of the present invention, in addition to determining the display position, attitude, and scale of the AR information, the device also needs to determine the presentation manner of the AR information, which may include but is not limited to color, brightness, transparency, icon form, and the like. For AR information that needs to be aligned with a real object, the device preferentially presents it in a color and form consistent with the real object.
For example, if the real lane line on the road is a white single-dashed line, the device preferentially presents the AR lane line as a white single-dashed line. If presenting the AR information in a color and form consistent with the real object would prevent the driver from clearly recognizing and understanding it, the device adaptively selects a better color and form. The device captures road images or video through one or more outward-looking cameras, projects the AR information into the images or video in its intended presentation form according to the estimated position-attitude-scale, and can measure the contrast between the AR information and the surrounding scene through image and video analysis to judge whether the intended presentation form is appropriate; if not, it substitutes a more recognizable presentation form based on scene brightness, color, and the like. For example, where the road is partially covered by snow, presenting the AR lane line as a white single-dashed line may leave too little difference between the AR lane line and the road surface, and the device may adaptively choose, for example, a blue single-dashed line instead.
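As a rough illustration of the contrast check described above, the sketch below compares candidate AR colors against the scene pixels the overlay would cover; the luma weighting, the threshold, and the function names are assumptions for illustration only:

```python
import numpy as np

def luminance(rgb):
    # Rec. 709 luma approximation on 0-255 RGB values.
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pick_presentation(frame, mask, candidates, min_contrast=40.0):
    """Choose the first candidate colour whose luminance differs
    sufficiently from the scene pixels the AR element would cover.

    frame:      HxWx3 uint8 road image from an outward-looking camera
    mask:       HxW bool array, True where the AR element projects
    candidates: list of RGB tuples, preferred (real-object-like) first
    """
    background = frame[mask].mean(axis=0)      # mean of covered pixels
    bg_luma = luminance(background)
    for colour in candidates:
        if abs(luminance(colour) - bg_luma) >= min_contrast:
            return colour
    return candidates[-1]                      # fall back to last option

# e.g. prefer the real lane line's white, fall back to blue on snow:
# colour = pick_presentation(frame, lane_mask, [(255, 255, 255), (0, 90, 255)])
```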
Step S1401 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1401 may be one implementation of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
For the embodiment of the present invention, the device displays the AR information on the display device in the presentation manner determined in step S1301 according to the relative position, posture, and scale relationship between the eye and the real object, so that the displayed AR information and the corresponding real object can be aligned.
Example two
The embodiment of the invention provides a method for displaying driving assistance information for a road whose surface is completely covered and obscured but where a median lane barrier (or other visible traffic-sign obstacle) can still be perceived. The method displays prompt information for information identifiers such as lane lines, lanes, ruts, road traffic signs, road traffic markings, and road conditions, and displays the corresponding augmented reality driving assistance information, as shown in fig. 3. The road markings may be obscured by, but not limited to, accumulated snow, standing water, mud, oil, or a combination thereof.
The method of the embodiment of the invention comprises the following steps:
step S1102 (not shown), the device determines whether road information needs to be displayed.
This step S1102 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the present invention, the device determines the road information around the host vehicle by means of image recognition, which may include but is not limited to lane lines, road edge lines, and information identifiers such as road traffic signs and road conditions. The device may remain in a continuous recognition state and adaptively start the display function when the road is completely covered but the median lane barrier (or other visible traffic-sign obstacle) remains perceptible; it may also start the recognition and/or display function according to a user instruction. The user instruction may be carried by a gesture, voice, a physical key, or a biometric identifier such as a fingerprint.
Step S1202 (not shown), the device senses the road information and generates the target driving assistance information content.
This step S1202 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
For the embodiment of the invention, the device detects, recognizes, and locates the median lane barrier (or other visible traffic-sign obstacle), estimates the width of the road surface on the host vehicle's side of travel, and estimates the positions of the lane lines and road edges.
Specifically, the device first recognizes and locates the still-visible median lane barrier (or other visible traffic-sign obstacle) in one or more images/videos through image processing and recognition techniques, and uses it as a reference for the middle of the road. Second, when lane barriers (or other visible traffic-sign obstacles) are also present at the road edges, they can be recognized and located by similar techniques. The area between the road-edge barrier (or other visible traffic-sign obstacle) and the median lane barrier is the driving area in the host vehicle's direction of travel.
For the embodiment of the present invention, when there is no lane barrier (or other visible traffic-sign obstacle) at the road edge, the device may obtain the distance between the median lane barrier and objects outside the road (e.g., pedestrians, bicycles, trees, houses) using at least one of a single camera, a stereo camera, a depth camera, a laser sensor, a radar sensor, and an ultrasonic sensor. Taking into account the orientation of the camera and/or other sensors at the time of measurement, the device corrects each measured distance to the direction perpendicular to the median lane line, obtaining a robust statistical shortest distance between the median lane barrier and the out-of-road objects; this should be the statistical shortest of measurements taken many times at multiple positions while the host vehicle travels. The robust statistical shortest distance is an upper bound on the lane width on the host vehicle's side of travel. When a road map is available, this upper bound can be further verified and optimized against the road width given by the map.
The orientation of the camera and/or other sensors may be obtained by first identifying, from the image sequence, the extension direction of the median lane barrier (or other visible traffic-sign obstacle) and computing the camera's orientation relative to this direction. The relative position-attitude-scale relationships between the camera and the other sensors on the host vehicle may be considered fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the vehicle jolts violently), so the orientation of the camera and/or other sensors can be derived.
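A minimal sketch of the perpendicular correction and robust aggregation just described, assuming each range measurement comes with the angle between its ray and the perpendicular to the median line; treating a low quantile as the "robust minimum" is an illustrative assumption:

```python
import numpy as np

def robust_lane_width_upper_bound(distances, ray_angles, quantile=0.05):
    """Estimate an upper bound on the one-side road width from many
    barrier-to-object distance measurements taken while driving.

    distances:  raw ranges (m) from the median barrier to off-road
                objects, from camera/lidar/radar/ultrasound
    ray_angles: angle (rad) between each measurement ray and the
                direction perpendicular to the median lane line
    quantile:   low quantile used instead of the raw minimum, to be
                robust against outlier measurements
    """
    d = np.asarray(distances, dtype=float)
    a = np.asarray(ray_angles, dtype=float)
    perpendicular = d * np.cos(a)    # project onto the perpendicular
    return float(np.quantile(perpendicular, quantile))
```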
For the embodiment of the invention, within this upper bound on the lane width, and based on domain knowledge of road traffic and/or a road map, the device can predict the approximate position of each lane line and of the road edge line from prior knowledge such as lane width and/or the number of lane lines. The host vehicle's position within the map can be determined from the road map combined with, but not limited to, wireless-signal positioning and GPS (Global Positioning System) positioning; the road map may be stored in advance in the storage space of the device or the host vehicle, or acquired through network communication.
For the embodiment of the invention, the device can capture images and/or video of surrounding vehicles and of the completely covered road surface through one or more cameras, and/or stereo cameras, and/or depth cameras, and analyze the driving trajectories of other vehicles and/or the ruts on the road surface within a preset range of the host vehicle using object detection, object recognition, and tracking techniques, thereby more accurately correcting the predicted positions of the lane lines and road edge lines. When a high-precision GPS and a high-precision road map are available, the device can align the current road with the road map through visual positioning information and GPS positioning information, improving the prediction accuracy of the lane line and road edge line positions. The recognition and detection of ruts is detailed in the fourth embodiment.
For the embodiment of the invention, when a high-precision road map is available, the device can align the perceived road environment with the road map through visual positioning information and/or GPS positioning information, so that the device can acquire the road information of the road surface from the map and generate the corresponding target driving assistance information.
Step S1302 (not shown in the figure) determines a display mode of the target driving assistance information content.
This step S1302 may be one implementation of determining a display manner of one or more pieces of target driving assistance information.
Similar to step S1301 of the first embodiment; details are not repeated. In particular, considering that the estimate of the lane range may contain errors, i.e., the estimated lane range may include part of an adjacent real lane or road edge, the device narrows the estimated lane area inward on both sides and prepares the target driving assistance information only for the middle of the estimated lane area, to avoid guiding the vehicle into the boundary area between two lanes or between a lane and a road edge.
Step S1402 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1402 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example three
The embodiment provides a method for displaying driving assistance information for a road whose surface is completely covered and obscured and where no median lane barrier (or other visible traffic-sign obstacle) is present. The method displays prompt information for information identifiers such as lane lines, lanes, ruts, road traffic signs, road traffic markings, and road conditions, and displays the corresponding augmented reality driving assistance information. The road markings may be obscured by, but not limited to, accumulated snow, standing water, mud, oil, and the like.
The method of the embodiment of the invention comprises the following steps:
step S1103 (not shown), the device determines whether road information needs to be displayed.
This step S1103 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the present invention, the device determines the road information around the host vehicle by means of image recognition, which may include but is not limited to lane lines, road edge lines, and information identifiers such as road traffic signs and road conditions. The device may remain in a continuous recognition state and adaptively start the display function when the road is completely covered and no median lane barrier (or other visible traffic-sign obstacle) is perceptible; it may also start the recognition and/or display function according to a user instruction. The user instruction may be carried by a gesture, voice, a physical key, or a biometric identifier such as a fingerprint.
Step S1203 (not labeled in the drawing), the device senses the road information and generates the target driving assistance information content.
This step S1203 may be an implementation manner of acquiring information, processing the information, and generating the target driving assistance information content.
For the embodiment of the invention, the device estimates the width of the road surface, estimates the position of the median line separating the two driving directions (if the road is not one-way), and estimates the positions of the lane lines and road edges.
First, the device estimates the road surface width. When lane barriers (or other visible traffic-sign obstacles) exist at both road edges, the device can locate and recognize the road-edge barriers on both sides in one or more images/videos through image processing and recognition techniques; the area between them is the road surface area.
When a lane barrier (or other visible traffic-sign obstacle) exists on only one side of the road, the device can recognize and locate that road edge through image processing and recognition techniques and use it as a reference. For the side without a barrier, the device may obtain the distance between the reference and objects beyond the opposite road edge (e.g., pedestrians, bicycles, trees, houses) using at least one of a single camera, a stereo camera, a depth camera, a laser sensor, a radar sensor, and an ultrasonic sensor, and correct the measured distance to the direction perpendicular to the reference, taking into account the orientation of the camera and/or other sensors at measurement time. This yields a robust statistical shortest distance between the road edge with the barrier and the objects beyond the opposite edge; it should be a statistic over measurements taken many times at multiple positions while the host vehicle travels, and it is an upper bound on the road surface width. When a road map is available, this upper bound can be further verified and optimized against the road width given by the map. The orientation of the camera and/or other sensors may be obtained by first identifying, from the image sequence, the extension direction of the barrier on one side of the road and computing the camera's orientation relative to this direction; the relative position-attitude-scale relationships between the camera and the other sensors on the host vehicle may be considered fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the vehicle jolts violently), so the orientation of the camera and/or other sensors can be derived.
When there is no lane barrier (or other visible traffic-sign obstacle) on either side of the road, as shown in fig. 4, the device may obtain the sum of the distances between the host vehicle and the objects beyond both road edges (e.g., pedestrians, bicycles, trees, houses) using at least one of a single camera, a stereo camera, a depth camera, a laser sensor, a radar sensor, and an ultrasonic sensor, and, taking into account the orientation of the cameras and the width between the sensors on the two sides of the host vehicle at measurement time, correct the measured sum to the direction perpendicular to the road. This yields a robust statistical shortest distance between the objects beyond the two road edges; it should be a statistic over measurements taken many times at multiple positions during travel, and it is an upper bound on the road surface width. The orientation of the camera and/or other sensors may be obtained as follows.
Specifically, the device first identifies, from the image sequence, the alignment of roadside trees and/or the extension direction of building facades, and computes the camera's orientation relative to this direction; the relative position-attitude-scale relationships between the camera and the other sensors on the host vehicle may be considered fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the vehicle jolts violently), so the orientation of the camera and/or other sensors can be derived.
For the embodiment of the present invention, as shown in fig. 5, within the upper bound of the estimated road surface range, and based on domain knowledge of road traffic and/or a road map, the device may predict the approximate position of the road's median line from the lane width and/or the number of lanes in each direction, and, with the median line as reference, estimate the approximate position of each lane line and of the road edge lines (a sketch of such a layout prediction is given after the next paragraph).
Considering that, if the estimated lane range is correct, vehicles or ruts traveling in both directions will not appear in the same range simultaneously, the device can capture images and/or video of surrounding vehicles and of the completely covered road surface through one or more cameras, and/or stereo cameras, and/or depth cameras, and analyze the driving trajectories of other vehicles and/or the ruts on the road surface within a preset range of the host vehicle using object detection, object recognition, and tracking techniques, to more accurately correct the predicted positions of the lane lines and road edge lines. When a high-precision GPS and a high-precision road map are available, the device can align the current road with the road map through visual positioning information and GPS positioning information, improving the estimation accuracy of the lane line and road edge line positions. The recognition and detection of ruts is detailed in the fourth embodiment.
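Under the assumptions stated in this embodiment (a symmetric bidirectional road, a prior lane width, whole lanes per half), a layout prediction might look as follows; the 3.5 m default and the rounding rule are illustrative, not prescribed by the disclosure:

```python
def predict_lane_lines(road_width, lane_width=3.5):
    """Predict lateral offsets (m), measured from the left road edge,
    of the centre line and interior lane lines of a symmetric
    bidirectional road.

    lane_width is prior knowledge (3.5 m is a common value but is
    locale-dependent); the number of lanes per direction is inferred
    by fitting whole lanes into each half of the road.
    """
    half = road_width / 2.0
    lanes_per_dir = max(1, round(half / lane_width))
    return {
        "left_edge": 0.0,
        "centre": half,
        "right_edge": road_width,
        "lane_lines": sorted(
            half + s * k * lane_width
            for s in (-1, 1)                  # the two driving directions
            for k in range(1, lanes_per_dir)  # interior lane lines only
        ),
    }

# predict_lane_lines(14.0) ->
# {'left_edge': 0.0, 'centre': 7.0, 'right_edge': 14.0,
#  'lane_lines': [3.5, 10.5]}
```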
For the embodiment of the invention, when a high-precision road map is available, the device can align the perceived road environment with the road map through visual positioning information and/or GPS positioning information, so that the device can acquire the road information of the road surface from the map and generate the corresponding target driving assistance information.
Step S1303 (not shown in the figure), determining a display mode of the target driving assistance information content.
This step S1303 may be an implementation manner of determining a display manner of one or more pieces of target driving assistance information.
Similar to step S1301 of the first embodiment; details are not repeated. In particular, considering that the estimate of the lane range may contain errors, i.e., the estimated lane range may include part of an adjacent real lane or road edge, the device narrows the estimated lane area inward on both sides and prepares the target driving assistance information only for the middle of the estimated lane area, to avoid guiding the vehicle into the boundary area between two lanes or between a lane and a road edge.
Step S1403 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1403 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example four
The embodiment of the invention provides a method for displaying driving assistance information, used on a road whose surface is completely covered and obscured to display prompt information for rut information identifiers and to display the corresponding augmented reality driving assistance display information. The road markings may be obscured by, but not limited to, accumulated snow, standing water, mud, oil, and the like.
The method of the embodiment of the invention comprises the following steps:
Step S1104 (not shown), the device determines whether rut-related information needs to be displayed.
This step S1104 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the present invention, the device determines the road information around the host vehicle by means of image recognition, which may include but is not limited to lane lines, road edge lines, road traffic signs, road traffic markings, road conditions, and other information identifiers. The device may remain in a continuous detection and recognition state and adaptively start the display function when the road is completely covered and obscured; it may also start recognition and/or display according to a user instruction. The user instruction may be carried by a gesture, voice, a physical key, or a biometric identifier such as a fingerprint.
Step S1204 (not labeled in the drawing), the device detects a road rut.
This step S1204 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
For embodiments of the present invention, the apparatus first captures road surface images or video using one or more cameras, and detects and locates ruts on the road surface using image processing techniques and recognition techniques.
Specifically, in a single image, the device may use image recognition techniques to locate rut areas, which may be extracted by image edge extraction algorithms and/or color clustering algorithms. The device can connect the detected rut edge segments into continuous rut edge lines according to their direction, and can also construct the rut edge segments into a vector field. When multiple images across space and/or time are available, they can identify the ruts cooperatively: rut edge lines are connected across images through feature tracking and pattern matching, removing errors that arise with a single image and keeping the recognition result consistent in space and/or time. The pattern matching may take into account how long ago a rut was left (e.g., inferred from its depth) in order to remove false matches.
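As a rough sketch of the single-image stage described above, the following uses edge extraction plus gradient orientation to produce candidate rut edges and a coarse direction field; OpenCV is an assumed implementation choice, and the thresholds are illustrative:

```python
import cv2
import numpy as np

def rut_edge_field(gray):
    """Extract candidate rut edges and a coarse orientation field.

    gray: single-channel road image. Returns (edges, angles), where
    angles holds the local edge direction (rad) at edge pixels and
    NaN elsewhere.
    """
    edges = cv2.Canny(gray, 50, 150)
    # Sobel gradients give the local gradient direction; the edge
    # (rut) direction is perpendicular to it.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    angles = np.arctan2(gy, gx) + np.pi / 2.0
    angles[edges == 0] = np.nan   # keep orientation at edge pixels only
    return edges, angles
```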
Step S1304 (not shown in the drawing), determining a display mode of the driving assistance information content related to the rut.
This step S1304 may be one implementation of determining a display manner of one or more pieces of target driving assistance information.
For the embodiment of the invention, the device judges whether an abnormal rut exists. An abnormal rut may be judged by checking whether the vector field constructed from the rut edge lines is smooth and whether its general trend is consistent with the direction of the lane lines and road edge lines; it may also be judged by checking whether the vector field constructed from the rut edge lines generated at a certain moment differs significantly in direction from the vector field constructed from the other/overall rut edge lines. If the difference is significant, the generated rut edge lines are determined to be abnormal. Abnormal ruts often conflict noticeably in direction with the lane lines and road edges, and are likely to contain skid marks. When an abnormal rut exists, the device highlights the road surface area where it lies in a warning manner and generates driving-warning AR information, as shown in fig. 6. When no abnormal rut exists, the device judges whether the ruts are clearly visible according to the color contrast of the ruts in the image or video and/or the sharpness of their edges. If they are clearly visible, the display is not enhanced; for a road surface where no rut is clearly visible, the device selects the most salient ruts along the host vehicle's driving route and displays them enhanced, as shown in fig. 7. In particular, the device inspects the road surface sufficiently far ahead according to the driving state of the vehicle (such as speed), the road surface environment (such as whether the surface is wet or slippery), and other factors, dynamically adjusting the warning lead time to leave the driver enough reaction time.
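A minimal sketch of the direction-consistency test described above, assuming the rut's edge-line directions have been sampled along its length; the deviation thresholds are illustrative assumptions:

```python
import numpy as np

def is_abnormal(rut_angles, lane_angle, max_dev_deg=30.0):
    """Flag a rut whose direction field conflicts with the lane/road
    direction or is not smooth along its length.

    rut_angles: edge-line directions (rad) sampled along one rut
    lane_angle: estimated lane-line / road-edge direction (rad)
    """
    a = np.unwrap(np.asarray(rut_angles, dtype=float))
    # Wrapped angular deviation from the lane direction, in degrees.
    dev = np.degrees(np.abs(np.arctan2(np.sin(a - lane_angle),
                                       np.cos(a - lane_angle))))
    rough = a.size > 1 and float(np.abs(np.diff(a)).max()) > np.radians(20)
    conflicting = float(np.median(dev)) > max_dev_deg
    return bool(rough or conflicting)
```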
The AR objects that highlight the road surface area containing an abnormal rut, or that enhance ruts which are not clearly visible, need to be aligned with the real road surface and the real ruts; the display manner is similar to step S1301 of the first embodiment and is not repeated.
For embodiments of the present invention, the AR information for the abnormal-rut warning need not be aligned with a real object. When the display device is a head-mounted display device, the device can present the AR information conspicuously at a prominent position in the driver's current field of view, with the display manner determined by the driver's focus depth and the AR information facing the driver's line of sight; the focus depth of the driver's eyes can be obtained by gaze tracking. When the display device is a vehicle-mounted display device, the device may present the AR information conspicuously in an area of the display screen, setting its attitude and depth so that it faces the driver's eyes at the driver's focus depth, according to the driver's line of sight, as shown in fig. 8. The device can also attract the driver's attention with animation, sound effects, and similar means to reduce reaction delay.
Step S1404 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1404 may be one implementation of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example five
The embodiment of the invention provides a method for displaying driving assistance information, used on a road where traffic signs are partially or completely covered and obscured to display prompt information for the obscured signs and to display the corresponding augmented reality driving assistance display information, as shown in fig. 9. The traffic sign may be obscured by, but not limited to, snow, mud, oil, pasted printed matter, leaves, and the like. Obscuration may be broadly understood to include damage, detachment, faded paint, rain/fog/haze, or incomplete visibility caused by the sensor being in an undesirable state (such as a stained lens).
The method of the embodiment of the invention comprises the following steps:
step S1105 (not shown), the device determines whether the traffic sign related information needs to be displayed.
This step S1105 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the present invention, the device determines the road information around the host vehicle by means of image recognition, which may include but is not limited to lane lines, road edge lines, and information identifiers such as road traffic signs and road conditions. The device may remain in a continuous recognition state and adaptively start the recognition and/or display function when a traffic sign and/or indicator is partially or completely obscured; it may also start the function according to a user instruction. The user instruction may be carried by a gesture, voice, a physical key, or a biometric identifier such as a fingerprint.
Step S1205 (not shown in the drawing), the device determines whether enhanced display is required.
This step S1205 may be an implementation manner of acquiring information, processing the information, and generating the target driving assistance information content.
For embodiments of the present invention, the device uses one or more cameras to capture road images or video, and uses image processing and recognition techniques to detect traffic signs beside and above the road. The device judges whether the content on a traffic sign is clear and complete through image or video analysis. Specifically, the device may detect the position of the traffic sign in the image and its bounding shape (typically a rectangle, circle, or triangle) through image detection techniques, and judge clarity from the sharpness and/or color distribution of the image inside the bounding shape. For an unclear traffic sign, the device may obtain its complete information and icon form by at least one of the following: it can query a database associated with the local map according to the host vehicle's position and obtain the sign's complete information and icon form by pattern matching the captured image against the database; it can apply image enhancement algorithms to the unclear image (e.g., for a sign obscured by fog, a defogging algorithm can produce a relatively clear image); it can obtain the complete information and icon form from images of the same sign captured from other angles; or it can infer the content from other traffic signs and/or driving information, for example, from a previously encountered sign reading 200 meters to the exit and 100 meters of driving since, the current sign can be inferred to read 100 meters to the exit.
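As an illustration of the clarity judgment described above, a common heuristic is the variance of the Laplacian as a sharpness score; the threshold and the OpenCV-based implementation are assumptions for illustration:

```python
import cv2

def sign_is_clear(frame, box, blur_thresh=100.0):
    """Heuristically decide whether a detected traffic sign is clear.

    frame: BGR image; box: (x, y, w, h) bounding box from the sign
    detector. Uses variance of the Laplacian as a sharpness score,
    a common blur metric; the threshold is scene-dependent.
    """
    x, y, w, h = box
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(roi, cv2.CV_64F).var()
    return sharpness >= blur_thresh
```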
Step S1305 (not shown in the drawing) determines a display mode of the target driving assistance information content.
This step S1305 may be one implementation of determining a display manner of one or more pieces of target driving assistance information.
Similar to step S1301 of the first embodiment; details are not repeated. In particular, the generated AR traffic sign needs to be aligned with the real traffic sign. The device obtains the position and attitude of the real object relative to the display device through an image localization algorithm. Specifically, when a single camera is used to capture the traffic signs around the vehicle, an approximate solution treats the traffic sign as a plane: feature points can be extracted from the sign's contour and the relative position and attitude between the sign and the camera obtained by solving a homography matrix. More precisely, feature tracking can be performed over the image sequence in a visual-odometry manner, with features taken from the contours of the traffic signs and of other real objects that need alignment, yielding the relative position and attitude of the real objects with respect to the camera. In particular, image recognition and segmentation can assist feature point extraction and tracking, removing false matches and accelerating computation.
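A minimal sketch of the planar (homography) case described above, assuming the physical size of the sign is known so that metric plane coordinates are available; cv2.solvePnP would be an equally valid route, and the decomposition below is the standard one for a calibrated camera:

```python
import cv2
import numpy as np

def sign_pose_from_homography(img_pts, sign_pts_m, K):
    """Recover the pose of a planar traffic sign from a single image.

    img_pts:    Nx2 pixel coordinates of contour feature points
    sign_pts_m: Nx2 metric coordinates of the same points on the
                sign plane (known physical sign size, z = 0)
    K:          3x3 camera intrinsic matrix
    """
    H, _ = cv2.findHomography(np.asarray(sign_pts_m, np.float32),
                              np.asarray(img_pts, np.float32),
                              cv2.RANSAC)
    M = np.linalg.inv(K) @ H
    # Columns 1-2 are scaled rotation axes, column 3 is translation.
    scale = 1.0 / np.linalg.norm(M[:, 0])
    r1, r2, t = M[:, 0] * scale, M[:, 1] * scale, M[:, 2] * scale
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t   # sign-plane frame -> camera frame
```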
Step S1405 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1405 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example six
The embodiment of the invention provides a method for displaying driving assistance information, used for displaying prompt information of traffic signs in an area the vehicle has already driven through and displaying the corresponding augmented reality driving assistance display information.
The method of the embodiment of the invention comprises the following steps:
step S1106 (not shown), the device determines whether the traffic sign related information needs to be displayed.
This step S1106 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the present invention, the device starts or ends the display function by a user instruction. The user instruction may be carried by a gesture, voice, a physical key, or a biometric identifier such as a fingerprint. The device may also adaptively end the display based on statistics of how long the user's gaze dwells on the AR traffic sign; a combination of the two approaches is also possible.
Step S1206 (not labeled in the figure), the device generates the content to be displayed.
This step S1206 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
For the embodiment of the present invention, the device retrieves traffic signs that the host vehicle has passed over a past period of time. The device can retrieve all passed traffic signs, or retrieve relevant signs according to keywords in the user instruction. When a high-precision road map is available, the device can look up the traffic signs of the area just passed in the map according to the host vehicle's current position; when no map is available, the device uses historical road images or video collected by one or more cameras, detects and recognizes the traffic signs beside and above the road with image processing and detection/recognition techniques, and obtains the signs matching the retrieval request. The device extracts the complete information and icon form from each retrieved traffic sign. In particular, the device may adjust the specific content of a sign according to the current position. For example, if the original sign read "5 km from the exit of the highway" but the host vehicle has since traveled 300 meters, the device may change the content of the indicator to "4.7 km from the exit of the highway", as shown in fig. 10, to suit the current situation. In particular, the device may retrieve and/or generate one or more traffic signs.
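As an illustration of the content adjustment in the example above, a sketch that rewrites a distance figure by the odometry accumulated since the sign was seen; the regular expression and unit handling are illustrative assumptions, and real signs would need locale-aware parsing:

```python
import re

def adjust_distance_sign(text, metres_travelled):
    """Update a distance figure on a retrieved sign by the distance
    driven since it was seen. Pattern is illustrative only."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*km", text)
    if not m:
        return text
    remaining = max(0.0, float(m.group(1)) - metres_travelled / 1000.0)
    return text[:m.start(1)] + f"{remaining:g}" + text[m.end(1):]

# adjust_distance_sign("5 km from the exit of the highway", 300)
# -> "4.7 km from the exit of the highway"
```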
Step S1306 (not shown), determining a display mode of the target driving assistance information content.
This step S1306 may be one implementation of determining a display manner of one or more pieces of target driving assistance information.
Similar to step S1301 of the first embodiment; details are not repeated. In particular, when more than one traffic sign is retrieved, the device can display several signs simultaneously, or one after another in turn. The device preferentially presents the corresponding AR traffic sign in the form of the real traffic sign. Since this AR sign does not need to be aligned with a real object, the device preferentially displays it at the depth of the driver's gaze point and near the sky region, so as not to obstruct the driver's view of the real road surface.
Step S1406 (not shown) displays the AR information corresponding to the generated target driving assistance information.
The step S1406 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example seven
The embodiment of the invention provides a method for displaying driving assistance information, which is used for displaying extended areas of side rearview mirrors on two sides of a vehicle and displaying corresponding augmented reality assisted driving display information.
The method of the embodiment of the invention comprises the following steps:
Step S1107 (not shown), the device determines whether information relating to the extended area of the side rearview mirror needs to be displayed.
This step S1107 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the present invention, the device starts or ends the display function by a user instruction. The user instruction may be carried by a gesture, voice, a physical key, or a biometric identifier such as a fingerprint. The device can also adaptively decide whether to start or end the display function by detecting whether the driver's gaze point is on the side rearview mirror and/or whether the side rearview mirror is within the driver's field of view; a combination of the two approaches is also possible. When the camera used is an outward-looking camera on the head-mounted display device, detection may consist of judging whether the side rearview mirror appears in the image, hence whether it is within the driver's field of view; the device can then obtain the driver's current gaze area through gaze tracking and judge whether the gaze point lies within the side-mirror region. When the camera used is an inward-looking camera of the host vehicle, the device may determine whether the side rearview mirror is within the driver's field of view, and the driver's gaze focus area, by detecting the orientation of the driver's head, and/or the orientation of the eyes, and/or the gaze direction.
Step S1207 (not shown), the device generates the content to be displayed.
This step S1207 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
First, the apparatus estimates the relative position and attitude of the driver's head/eyes/sight line and side rearview mirrors.
Specifically, when the camera used is an outward-looking camera on the head-mounted display device, the relative position between the camera and the driver's head is fixed, and its position-attitude-scale relationship to the eyes can be calibrated in advance (recalibration is occasionally needed during use, for example after the user adjusts the position of the head-mounted display device); therefore only the relative position relationship between the side rearview mirror and the outward-looking camera needs to be obtained. The device can segment the side rearview mirror through image recognition; the mirror can be regarded as a plane of fixed scale, so its relative position and attitude to the camera can be obtained by solving a homography matrix. The device can also perform feature tracking over the image sequence of the side rearview mirror in a visual-odometry manner, taking features from the mirror's edge contour, to obtain a smoothed relative position and attitude of the mirror with respect to the camera. The device can also attach a positioning marker to the face of the side rearview mirror, with the on-device outward-looking camera computing the relative position-attitude-scale relationship from the marker. That is, (eye ← calibration → display device ← calibration → outward camera on device ← estimation → side rearview mirror).
When the camera used is an inward-looking camera on the host vehicle, the relative position and attitude between the inward-looking camera and the side rearview mirror can be considered fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the driver adjusts the side rearview mirror). Therefore only the relative position and attitude of the inward-looking camera and the display device need to be obtained. Specifically, the device may use the inward-looking camera on the host vehicle (i.e., a camera that captures the environment inside the host vehicle, such as the driver's position) to acquire the relative position-attitude-scale relationship between the display device and that camera. This may be based on positioning markers on the display device, and/or on feature-point extraction and tracking based on images and/or fused sensors, and/or on detection methods, such as SLAM (Simultaneous Localization and Mapping) and target tracking. The relative position and attitude between the eyes and the display device is relatively fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the user adjusts the position of the head-mounted display device). That is, (eye ← calibration → display device ← estimation → inward camera on host vehicle ← calibration → side rearview mirror).
Second, after the device has estimated the relative position relationship between the driver's eyes and the side rearview mirror, it derives, from the mirror-surface properties of the side rearview mirror, the virtual viewpoint of the mirror image. According to this virtual viewpoint and the area of the extended rearview mirror, the device can collect image information around the host vehicle through one or more vehicle-mounted cameras, and/or stereo cameras, and/or depth cameras, where the collection range includes and exceeds the area covered by the side rearview mirror, as shown in fig. 11. The device can generate the content of the mirror's extended area from the images of other vehicle-mounted cameras: because the relative position and attitude between the host vehicle's outward-looking cameras and the side rearview mirror are relatively fixed and can be calibrated in advance, the device can generate the extended-area content from outward-camera images through image-based rendering; when the host vehicle's outward-looking camera is a stereo camera and/or equipped with a depth camera, the device may also generate the extended-area content from the camera images using depth-image-based rendering. The range of the side mirror including the extended area is shown in fig. 12.
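The virtual viewpoint described above is the driver's eye reflected across the mirror plane. A minimal sketch, assuming the mirror plane's point and normal are known in a common reference frame:

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect a 3D point across the mirror plane, giving the
    virtual viewpoint behind the side rearview mirror.

    p:            eye position, shape (3,)
    plane_point:  any point on the mirror plane, shape (3,)
    plane_normal: normal of the mirror plane, shape (3,)
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)   # signed distance to the plane
    return p - 2.0 * d * n

# The reflected eye is the centre of projection from which the
# extended-mirror content would be rendered; image-based rendering
# sources can then be warped toward this virtual viewpoint.
```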
Step S1307 (not shown), the display mode of the extended area of the side rearview mirror is determined.
This step S1307 may be one implementation of determining a display manner of the one or more target driving assistance information.
Similar to step S1301 of the first embodiment; details are not repeated. In particular, to make the extended view look natural to the driver, the device may use at least one of the following ways: the virtual three-dimensional display information shown in the extended area of the side rearview mirror and its corresponding real object are symmetric with respect to the extended area; the virtual three-dimensional display information shown in the extended area is the mirror image of its corresponding real object with respect to the side rearview mirror; the content shown in the extended area is continuous with the content shown in the side rearview mirror; and the extended area and the content shown in the side rearview mirror have a certain overlap and/or transition.
Step S1407 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1407 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example eight
The embodiment of the invention provides a method for displaying driving assistance information, which is used for displaying an expanded area of an inner rear-view mirror inside a vehicle and displaying corresponding augmented reality assisted driving display information.
The method of the embodiment of the invention comprises the following steps:
step S1108 (not shown), the device determines whether the extended area related information of the interior mirror needs to be displayed.
This step S1108 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the present invention, the device starts or ends the display function by a user instruction. The user instruction may be carried by a gesture, voice, a physical key, or a biometric identifier such as a fingerprint. The device can also adaptively decide whether to start or end the display function by detecting whether the driver's gaze point is on the interior rearview mirror; a combination of the two approaches is also possible.
When the camera used is an outward-looking camera on the head-mounted display device, detection may consist of judging whether the interior rearview mirror appears in the image, hence whether it is within the driver's field of view; the device can then obtain the driver's current gaze area through gaze tracking and judge whether the gaze point lies within the interior-mirror region. When the camera used is an inward-looking camera of the host vehicle, the device may determine whether the interior mirror is within the driver's field of view, and the driver's gaze focus area, by detecting the orientation of the driver's head, and/or the orientation of the eyes, and/or the gaze direction. The device may also adaptively end the display based on statistics of how long the user's gaze dwells on the AR information.
Step S1208 (not labeled in the figure), the device generates the content to be displayed.
This step S1208 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
First, the device estimates the relative position and attitude of the driver's head/eyes/sight line and the interior mirror.
In the case where the display device is a head-mounted display device, when the camera used is an outward-looking camera on that device, the relative position between the camera and the driver's head is fixed, and its position-attitude-scale relationship to the eyes can be calibrated in advance (recalibration is occasionally needed during use, for example after the user adjusts the position of the head-mounted display device); therefore only the relative position relationship between the interior rearview mirror and the outward-looking camera needs to be obtained. The device can segment the interior rearview mirror through image recognition; the mirror can be regarded as a plane of fixed scale, so its relative position and attitude to the camera can be obtained by solving a homography matrix. The device can also perform feature tracking over the image sequence of the interior mirror in a visual-odometry manner, taking features from the mirror's edge contour, to obtain a smoothed relative position and attitude of the mirror with respect to the camera. The device can also attach a positioning marker to the face of the interior mirror, with the on-device outward-looking camera computing the relative position-attitude-scale relationship from the marker. That is, (eye ← calibration → display device ← calibration → outward camera on device ← estimation → interior rearview mirror).
In the case where the display device is a head-mounted display device, when the camera used is an inward-looking camera on the host vehicle, the relative position and attitude between the inward-looking camera and the interior mirror can be considered fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the driver adjusts the interior mirror). Therefore only the relative position and attitude of the inward-looking camera and the display device need to be obtained. Specifically, the device may use the inward-looking camera on the host vehicle (i.e., a camera that captures the environment inside the host vehicle, such as the driver's position) to acquire the relative position-attitude-scale relationship between the display device and that camera. This may be based on positioning markers on the display device, and/or on feature-point extraction and tracking based on images and/or fused sensors, and/or on detection methods, such as SLAM (Simultaneous Localization and Mapping) and target tracking. The relative position and attitude between the eyes and the display device is relatively fixed and can be calibrated in advance (recalibration is occasionally needed during use, for example after the user adjusts the position of the head-mounted display device). That is, (eye ← calibration → display device ← estimation → inward camera on host vehicle ← calibration → interior rearview mirror).
In the case where the display device is an in-vehicle display device, the apparatus may use the inward-looking camera on the host vehicle (i.e., the camera that captures the environment inside the host vehicle, such as the driver's position) to obtain the relative position-attitude-scale relationship between the eyes and that camera. The acquisition may be based on a positioning marker worn by the driver, where the relative position-attitude-scale relationship between the eyes and the worn marker can be considered relatively fixed and calibrated in advance (recalibration is occasionally needed during use, for example after the user adjusts the position of the worn positioning marker). The acquisition may also use image-based head/eye/line-of-sight positioning and tracking: the apparatus locates the driver's head from the images or video of the inward-looking camera, and derives the relative position-attitude relationship between the eyes and the camera from the head-positioning result. The acquisition may also use image-based eye positioning and tracking, with which the apparatus directly locates the relative position-attitude relationship between the eyes and the inward-looking camera from the camera's images or video.
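A common realization of the image-based head-positioning variant (an assumption for illustration, not the patent's implementation) is to fit a generic 3D facial-landmark model to detected 2D landmarks with a PnP solver; the eye positions then follow as fixed offsets in the head frame. The landmark coordinates below are rough illustrative values.

```python
import cv2
import numpy as np

# Generic 3D landmark model (nose tip, chin, eye corners, mouth corners), in
# metres; illustrative approximations, not calibrated values.
MODEL_3D = np.array([
    [0.0, 0.0, 0.0], [0.0, -0.063, -0.013],
    [-0.043, 0.032, -0.026], [0.043, 0.032, -0.026],
    [-0.028, -0.028, -0.024], [0.028, -0.028, -0.024]], np.float32)

def head_pose(landmarks_2d, K, dist_coeffs=None):
    """landmarks_2d: 6x2 detected landmarks matching MODEL_3D order;
    K: 3x3 intrinsics of the inward-looking camera."""
    ok, rvec, tvec = cv2.solvePnP(
        MODEL_3D, np.asarray(landmarks_2d, np.float32), K, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # head rotation in camera coordinates
    return R, tvec               # eyes are fixed offsets in the head frame
```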
Secondly, after the device has estimated the relative position-attitude-scale relationship between the driver's eyes and the interior rear-view mirror, it can capture images or video of the rear-row positions inside the vehicle and/or the area behind the vehicle through the interior rear-view camera, and scale the image or video content down/up for use as the extended content, as shown in fig. 13.
Step S1308 (not shown), the display mode of the extended area of the interior mirror is determined.
This step S1308 may be an implementation manner of determining a display manner of one or more pieces of target driving assistance information.
Similar to step S1301 of the first embodiment; details are not repeated. In particular, the device may adjust the AR extension within a range of angles so that it faces the driver's gaze direction. In the case where the display device is a head-mounted display device, the apparatus preferentially presents the AR information in the area of, or below, the interior rear-view mirror, taking the driver's habits into account; in the case where the display device is a vehicle-mounted display device, the apparatus preferentially displays the AR information according to the distance of the driver's fixation point and close to the sky region, so as not to hinder the driver's observation of the real road surface.
Step S1408 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1408 may be an implementation of displaying virtual three-dimensional AR information corresponding to one or more target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example nine
The embodiment of the invention provides a method for displaying driving assistance information, which is used for intersections with no traffic lights, or with damaged, inconspicuous, or partially/completely occluded traffic lights, and displays corresponding augmented reality assisted driving display information. Such situations are hereinafter referred to collectively as intersections without traffic lights.
The method of the embodiment of the invention comprises the following steps:
step S1109 (not shown), the device determines information related to whether the vehicle is approaching an intersection without a traffic signal.
This step S1109 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the invention, the device can adaptively start the display function by determining the position of the vehicle and checking on the map whether a traffic signal exists at the intersection ahead; it can also start the recognition and/or display function according to a user instruction. The user instruction may be given by gesture, voice, a physical button, or biometric identification such as a fingerprint.
Step S1209 (not shown), the device generates the content to be displayed.
This step S1209 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
For the embodiment of the invention, the device monitors the intersection through one or more vehicle-mounted cameras and determines the order in which other vehicles arrive at the intersection through image detection and recognition technology. The device obtains the traffic rules applicable to the intersection from the map, generates a virtual AR traffic light, and indicates the driving operation to the driver, as shown in fig. 14. For example, suppose a crossroad applies the rule that each vehicle stops for 3 seconds after arriving and vehicles proceed in the order in which they stopped (first to stop, first to go); the device can monitor the stopping order of vehicles at the intersection through image technology and, combining this with the traffic rule, generate a virtual red light that turns green after 3 seconds.
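A minimal sketch of such a virtual light for the four-way-stop rule described above (an assumption for illustration, not the patent's algorithm): the perception module reports when vehicles stop at or leave the line, and the light for the host vehicle turns green once it is first in the stopping order and has waited 3 seconds.

```python
import time

STOP_SECONDS = 3.0

class VirtualStopSignLight:
    def __init__(self):
        self.queue = []          # vehicle ids in the order they stopped
        self.stop_time = {}      # vehicle id -> timestamp when it stopped

    def vehicle_stopped(self, vehicle_id, now=None):
        """Called by the perception module when a vehicle halts at the line."""
        if vehicle_id not in self.stop_time:
            self.stop_time[vehicle_id] = now if now is not None else time.time()
            self.queue.append(vehicle_id)

    def vehicle_left(self, vehicle_id):
        """Called when a tracked vehicle clears the intersection."""
        if vehicle_id in self.stop_time:
            del self.stop_time[vehicle_id]
            self.queue.remove(vehicle_id)

    def light_for(self, vehicle_id, now=None):
        """'green' once it is this vehicle's turn and it has waited 3 s."""
        now = now if now is not None else time.time()
        waited = now - self.stop_time.get(vehicle_id, now)
        is_first = bool(self.queue) and self.queue[0] == vehicle_id
        return "green" if (is_first and waited >= STOP_SECONDS) else "red"
```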
For an intersection that does have a real traffic signal, according to the driver's instruction, the device can acquire video of the real traffic signal through at least one camera and generate the AR traffic signal by replicating the real one.
Step S1309 (not shown in the figure), determining the display mode of the target driving assistance information content.
This step S1309 may be one implementation of determining a display manner of the one or more pieces of target driving assistance information.
Similar to step S1306 in the sixth embodiment, the AR traffic sign is replaced with an AR traffic signal lamp, which is not described again.
Step S1409 (not shown) displays the AR information corresponding to the generated target driving assistance information.
This step S1409 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example ten
The embodiment of the invention provides a method for displaying driving assistance information, which is used for recognizing traffic police and displaying corresponding augmented reality assisted driving display information.
The method of the embodiment of the invention comprises the following steps:
step S1110 (not shown), the device determines whether there is a traffic police.
This step S1110 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the invention, the device captures the surroundings of the vehicle through one or more cameras, judges through image detection and recognition technology whether a traffic police officer is present and/or whether road-limiting barriers placed by the traffic police are present, and adaptively starts the display function; the recognition and/or display functions can also be started according to a user instruction; combinations of the above are also possible. The user instruction may be given by gesture, voice, a physical button, or biometric identification such as a fingerprint.
Step S1210 (not shown), the device generates the content to be displayed.
This step S1210 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
For the embodiment of the invention, the device detects, recognizes and tracks gestures and pointing devices such as batons through one or more cameras, and generates a corresponding AR message according to the local traffic-police gesture rules. For example, if a gesture by the traffic police indicates a left turn, an AR message indicating a left turn is generated, as shown in fig. 15.
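Once a gesture label is recognized, generating the AR message can reduce to a per-jurisdiction rule table, as the text's reference to "local rules" suggests. The labels and messages below are hypothetical placeholders, not from the patent.

```python
from typing import Optional

# Hypothetical label -> AR message mapping; a real deployment would load a
# jurisdiction-specific table.
GESTURE_RULES = {
    "left_turn_signal":  "Turn left",
    "right_turn_signal": "Turn right",
    "stop_signal":       "Stop",
    "go_straight":       "Proceed straight",
}

def ar_message_for(gesture_label: str) -> Optional[str]:
    """Return the AR message text for a recognized gesture, if any."""
    return GESTURE_RULES.get(gesture_label)
```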
Step S1310 (not shown), determining a display mode of the target driving assistance information content.
This step S1310 may be one implementation of determining a display manner of one or more target driving assistance information.
Similar to step S1301 of the first embodiment, details are not repeated.
For the embodiment of the invention, in the case that the display device is a single head-mounted display device, the equipment can display the generated AR message beside the corresponding traffic police, and the AR message can face the sight of the driver and can also keep the same direction with the body of the traffic police.
For the embodiment of the present invention, in the case where the display device is a single in-vehicle display device, the apparatus may display the generated AR message facing the driver's sight line.
For the embodiment of the invention, when multiple traffic police officers are present, the device can display multiple AR messages beside the corresponding officers simultaneously, or preferentially display the AR message related to the officer facing the current direction of travel of the host vehicle.
Step S1410 (not shown), displays the AR information corresponding to the generated target driving assistance information.
This step S1410 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example eleven
The embodiment of the invention provides a method for displaying driving assistance information, which is used for displaying the position, function and operation mode of a key on the operation panel, and displays corresponding augmented reality assisted driving display information.
The method of the embodiment of the invention comprises the following steps:
step S1111 (not shown), the device determines whether a key on the operation panel needs to be displayed.
This step S1111 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the invention, the device determines whether the driver needs to perform a certain operation by assessing the environment around and/or inside the vehicle; for example, if the front window is fogged, the device can determine that the driver needs the windshield wipers and adaptively start the display function; the recognition and/or display functions can also be started according to a user instruction; combinations of the above are also possible. The user instruction may be given by gesture, voice, a physical button, or biometric identification such as a fingerprint.
Step S1211 (not shown), the device generates the content to be displayed.
This step S1211 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
Similar to the judgment in step S1208 of embodiment eight, the apparatus judges whether the key to be operated is within the driver's field of view. When it is, the device highlights the key by means of highlighting, an arrow indication, or an edge outline, and generates the key's function name and/or an operation indication. For example, the name of the headlamp is marked around the headlamp knob, and/or arrows show the rotation directions for turning the lamp on and off, as shown in fig. 16.
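A minimal sketch of the field-of-view test (assumed geometry, not the patent's method): transform the key's known cabin-frame position into the eye/view frame obtained from the calibrations above, then compare its viewing angle against an assumed half field of view.

```python
import numpy as np

def key_in_view(key_pos_cabin, R_eye_from_cabin, t_eye_from_cabin,
                half_fov_deg=40.0):
    """R, t map cabin coordinates into the eye/view frame (from calibration);
    half_fov_deg is an illustrative value for the display's half FOV."""
    p = R_eye_from_cabin @ np.asarray(key_pos_cabin, float) + t_eye_from_cabin
    if p[2] <= 0:                        # behind the viewer
        return False
    angle = np.degrees(np.arccos(p[2] / np.linalg.norm(p)))
    return angle <= half_fov_deg
```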
Step S1311 (not shown), determining a display mode of the target driving assistance information content.
This step S1311 may be one implementation of determining a display manner of one or more pieces of target driving assistance information.
In particular, this embodiment is applicable only with head-mounted display devices.
In the case where the display device is a single head-mounted display device, the device may display the generated AR message beside the key; the AR message may face the driver's line of sight or stay consistent with the key's orientation, and the operation indication may be consistent with the operating action of the real key.
And step S1411 (not shown), displaying the AR information corresponding to the generated target driving assistance information.
This step S1411 may be one implementation of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Example twelve
An embodiment of the present invention provides a method for displaying driving assistance information, which is used to display at least one of a parking-allowed region that is suitable for parking, a parking-allowed region that is not suitable for parking, and a parking-disallowed region, and display corresponding augmented reality assisted driving display information, as shown in fig. 17.
The method of the embodiment of the invention comprises the following steps:
step S1112 (not shown), the apparatus determines whether at least one of the parking allowed and suitable parking area, the parking allowed and not suitable parking area, and the parking not allowed area needs to be displayed.
This step S1112 may be one implementation of the device determining one or more target driving assistance information that needs to be displayed.
For the embodiment of the invention, the device judges whether the driver is looking for a parking space in preparation for parking by assessing the environment around and/or inside the vehicle and/or the driver's intention, for example arrival at the specified destination or driving into a parking lot, and adaptively starts the display function; the recognition and/or display functions can also be started according to a user instruction; combinations of the above are also possible. The user instruction may be given by gesture, voice, a physical button, or biometric identification such as a fingerprint.
Step S1212 (not shown), the device generates the content to be displayed.
This step S1212 may be an implementation manner of acquiring information, processing the information, and generating the content of the target driving assistance information.
For the embodiment of the invention, the device detects the area around the vehicle through one or more on-board cameras, detects and recognizes whether no-parking signs are present, and analyzes the areas where parking is allowed and/or not allowed. Through image processing technology and in combination with the volume of the host vehicle, it judges whether a parking-allowed area is flat, whether the space is sufficient, and whether standing water, mud, or other factors unfavourable to parking are present; it then screens and sorts the parking-allowed areas and displays the areas that are allowed and suitable for parking and/or allowed but not suitable for parking.
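A minimal sketch of the screening and sorting step, assuming the perception stack already supplies per-area attributes; the thresholds and car dimensions are illustrative values, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class ParkingArea:
    length_m: float
    width_m: float
    is_flat: bool
    has_water_or_mud: bool
    no_parking_sign: bool

def classify(area: ParkingArea, car_len=4.8, car_wid=1.9, margin=0.6):
    """Map an area to one of the three categories used in this embodiment."""
    if area.no_parking_sign:
        return "not_allowed"
    fits = (area.length_m >= car_len + margin and
            area.width_m >= car_wid + margin)
    suitable = fits and area.is_flat and not area.has_water_or_mud
    return "allowed_suitable" if suitable else "allowed_unsuitable"

def rank(areas):
    """Sort candidates so suitable areas come first, largest first."""
    order = {"allowed_suitable": 0, "allowed_unsuitable": 1, "not_allowed": 2}
    return sorted(areas, key=lambda a: (order[classify(a)],
                                        -(a.length_m * a.width_m)))
```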
Step S1312 (not shown) determines a display mode of the target driving assistance information content.
This step S1312 may be one implementation of determining a display manner of the one or more target driving assistance information.
Similar to step S1301 of the first embodiment; details are not repeated. In particular, the device presents parking-allowed-and-suitable areas, parking-allowed-but-unsuitable areas, and no-parking areas in different ways.
Step S1412 (not shown), displays the AR information corresponding to the generated target driving assistance information.
This step S1412 may be an implementation manner of displaying virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information.
Similar to step S1401 of the first embodiment, details are not repeated.
Specifically, for all embodiments, the device adaptively determines the following: the time at which display of one or more pieces of AR information starts, and/or the time at which it stops, and/or the duration for which it is displayed. When adaptively determining these times and/or durations, the device comprehensively considers at least one of the state of the host vehicle, road condition information, and the system delay of the device, so as to achieve better start/stop times and display durations and to avoid problems including, but not limited to, displaying too early, too late, too long, or too short, which would mislead or interfere with the driver.
As an example, for an AR warning message that warns of an insufficient safe distance, the device may start displaying at the moment the distance between the host vehicle and a surrounding vehicle becomes smaller than the safe distance, and stop displaying at the moment it becomes larger again. However, the safe distance varies from case to case: when driving on an icy or snowy road surface, the safe distance should be greater than on an ordinary road surface, and it is also related to the speeds of the host vehicle and the surrounding vehicles, with faster speeds requiring larger safe distances. That is, the safe distance should be determined adaptively according to the specific situation. Therefore, based on the vehicle state (vehicle speed, etc.), road condition information (icy or snowy road surface, etc.) and the system delay of the device, the device can adaptively calculate the safe distance for the current situation. Accordingly, the start time, stop time and display duration of the AR warning message are adaptively adjusted along with the safe distance instead of being fixed.
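A minimal sketch of such an adaptive safe distance, using the standard stopping-distance model: reaction and system delays contribute v·t, and braking contributes v²/(2μg). The friction coefficients are typical textbook values, not values from the patent.

```python
G = 9.81
MU = {"dry": 0.7, "wet": 0.4, "icy": 0.15}   # assumed road-surface friction

def safe_distance_m(speed_kmh, surface="dry",
                    reaction_s=1.0, system_delay_s=0.1):
    """Distance covered during reaction + system delay, plus braking distance."""
    v = speed_kmh / 3.6                       # m/s
    braking = v * v / (2.0 * MU[surface] * G)
    return v * (reaction_s + system_delay_s) + braking

def should_warn(gap_m, speed_kmh, surface="dry"):
    """Start the AR warning when the actual gap falls below the safe gap."""
    return gap_m < safe_distance_m(speed_kmh, surface)
```

For instance, at 50 km/h on a dry surface this yields roughly 29 m, while the same speed on ice yields well over 80 m, so the warning's start/stop times naturally shift with the conditions.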
In particular, for all embodiments, the device adaptively reduces two kinds of delay: attention delay and display delay. Attention delay is defined as the time from the device displaying the AR information to the driver noticing it; display delay is defined as the time the device spends generating, rendering and displaying the AR information.
To reduce attention delay, the device may adopt at least one of the following five ways: 1) adaptively display high-priority AR information at a salient position in the driver's current field of view, and determine the display mode according to the driver's current gazing depth; 2) adaptively adjust the display form of the AR information according to the real scene so as to increase its contrast with the environment and thereby highlight it; 3) stop and/or pause the display of low-priority AR information around a piece of high-priority AR information, so as to highlight the latter; 4) adaptively display high-priority AR information at a salient position in the driver's field of view and, when the field of view changes (head turn, eye movement, etc.), adaptively adjust its display position to keep it at a salient position; 5) attract the driver's attention using at least one of sound, animation, vibration, flashing lights, and the like.
For example, when the driver is looking to the left of the host vehicle and an explosion occurs on its right side, the device immediately displays the generated AR alert in the driver's current field of view, i.e., to the left of the host vehicle.
To reduce display delay, the device can adopt at least one of the following ways: 1) prepare the data to be displayed in advance: the device collects information about the host vehicle and a large surrounding range, generates corresponding AR information, and roughly determines the display mode of each piece, but adaptively and selectively displays only the AR information within the driver's current field of view; where the computational budget allows, the device can render AR information in advance, and when that information enters the driver's field of view, directly call the pre-rendered AR model and display it after adjusting it to the current situation, as shown in fig. 18; 2) change the presentation of the AR information to be displayed: considering that the acceptable level of delay depends on the current situation (e.g., a delay of 5 ms may be acceptable at a vehicle speed of 10 km/h, but a delay of 3 ms may be unacceptable when the speed rises to 50 km/h), the device may adaptively estimate the acceptable delay based on the current situation and adaptively change the presentation of the AR content. For example, as shown in fig. 19, at low vehicle speed the device may display the drivable area completely; when the vehicle speed rises, the device can adaptively change the appearance of the drivable area into stripes, i.e., reduce the delay by reducing the amount of display data.
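A minimal sketch of the second strategy, trading display detail against delay as in fig. 19: derive a delay budget from vehicle speed (faster means tighter) and fall back to a cheaper striped rendering when the full rendering would overrun. The budget model and render-time estimates are assumptions for illustration.

```python
def delay_budget_ms(speed_kmh, allowed_drift_m=0.05):
    """Time in which the vehicle moves no more than allowed_drift_m."""
    v = max(speed_kmh / 3.6, 0.1)            # m/s, avoid divide-by-zero
    return 1000.0 * allowed_drift_m / v

def choose_presentation(speed_kmh, full_render_ms, striped_render_ms):
    """Pick the richest presentation whose render time fits the budget."""
    budget = delay_budget_ms(speed_kmh)
    if full_render_ms <= budget:
        return "full_area"
    if striped_render_ms <= budget:
        return "striped_area"                # reduced display data volume
    return "outline_only"                    # minimal fallback
```

With these assumed numbers the budget is about 18 ms at 10 km/h but only about 3.6 ms at 50 km/h, matching the text's point that the same delay can become unacceptable as speed rises.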
In particular, as shown in fig. 20, when multiple pieces of AR information need to be displayed at the same time, if the device considers each piece separately and adaptively selects the optimal display mode for each one individually, then when they are displayed simultaneously the mutual occlusion between them may make them appear blurred and unclear, causing trouble to the driver.
Therefore, for simultaneously displaying multiple pieces of AR information with mutual occlusion relationships, the device may use at least one of the following ways: 1) for AR information that must be displayed simultaneously, cannot be merged and/or integrated, and genuinely occludes other information, the device adaptively refrains from displaying the occluded part of the occluded AR information according to the current positional relationship, so as not to interfere with the foreground AR information; 2) for multiple pieces of AR information that can be merged and/or integrated, the device adaptively merges and/or integrates them into one or more pieces, for example reducing four pieces of AR information to two; when merging and/or integrating, the device may combine simple similar information, or generate new AR information from the higher-level meaning of the information. As shown in fig. 20, from the two pieces of AR information "driving-school learner car on the left" and "truck behind", the device may generate the more concise new AR information "caution: left and rear"; 3) for AR information whose display may be delayed, the device may defer display; e.g., non-urgent AR information about a farther location may be deferred until the host vehicle travels closer, to avoid interfering with AR information about nearer locations; similarly, the device may delay, pause, stop, or even abandon the display of unimportant AR information to reduce/eliminate occlusion; 4) as shown in fig. 20, the device may change the display position, and/or content detail, and/or presentation of one or more pieces of AR information to reduce or even eliminate the mutual occlusion between them.
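A minimal sketch combining strategies 2) and 3) above (an assumption for illustration): keep the highest-priority items, concatenate the text of overlapping mergeable items, and defer the rest. The rectangles, priorities and merge flags are assumed inputs from earlier stages.

```python
def overlaps(a, b):
    """Axis-aligned overlap test on (x0, y0, x1, y1) screen rectangles."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def resolve(items):
    """items: list of dicts with 'rect', 'priority', 'text', 'mergeable'."""
    shown, deferred = [], []
    for item in sorted(items, key=lambda i: -i["priority"]):
        clash = next((s for s in shown
                      if overlaps(s["rect"], item["rect"])), None)
        if clash is None:
            shown.append(item)                 # no conflict: display as-is
        elif clash.get("mergeable") and item.get("mergeable"):
            clash["text"] = f'{clash["text"]}; {item["text"]}'  # simple merge
        else:
            deferred.append(item)              # display later / elsewhere
    return shown, deferred
```

A production system would replace the simple text concatenation with semantic merging of the kind the text describes ("caution: left and rear").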
Specifically, in determining the display position of AR information, the apparatus considers at least one of the following: 1) display the AR information at the physically correct location, i.e., aligned with the corresponding object in real space; 2) display the AR information in an area that does not interfere with driving, such as the sky region; 3) display important AR information directly at a salient position of the driver's current field of view; 4) display the AR information on the side where the driver has the relatively wider view; for example, with a left-hand driver's seat, the view to the driver's right is relatively wider, and with a right-hand driver's seat, the view to the left is wider; 5) display the AR information in an area receiving relatively little of the driver's attention. To ensure safety, the driver needs to observe adequately in all directions while driving; therefore, by keeping statistics of how long the driver's gaze dwells in each direction, and upon finding areas where attention is markedly insufficient, the device displays AR information more prominently in those areas to attract the driver's attention and help balance it. Voice/sound effects, animations, lights, and the like may also be incorporated to draw attention. As shown in fig. 21, when the gaze statistics show that the driver's attention is biased to the left, the device may display AR information on the right side, drawing the driver's attention to the right and thereby helping balance the attention.
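A minimal sketch of the dwell-time statistic used in point 5) (the binning, window length and threshold are illustrative assumptions): accumulate how long the gaze stays in each direction bin over a sliding window and flag under-attended bins as preferred AR display regions.

```python
from collections import deque

class GazeBalance:
    def __init__(self, bins=("left", "center", "right"), window_s=60.0):
        self.bins, self.window_s = bins, window_s
        self.samples = deque()                 # (timestamp, bin, duration)

    def record(self, t, bin_name, duration_s):
        """Append a gaze sample and drop samples older than the window."""
        self.samples.append((t, bin_name, duration_s))
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def under_attended(self, min_share=0.15):
        """Bins whose share of recent dwell time falls below min_share."""
        total = sum(d for _, _, d in self.samples) or 1.0
        share = {b: 0.0 for b in self.bins}
        for _, b, d in self.samples:
            share[b] += d / total
        return [b for b in self.bins if share[b] < min_share]
```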
An embodiment of the present invention provides a device for augmented reality to assist driving, as shown in fig. 22, including: a determination module 2201 and a display module 2202.
The determining module 2201 is configured to determine driving assistance information based on information obtained during driving.
The display module 2202 is configured to display the virtual three-dimensional display information corresponding to the driving assistance information determined by the determination module 2201.
Compared with the prior art, the embodiment of the invention determines driving assistance information based on information acquired during driving and then displays the corresponding virtual three-dimensional display information; that is, driving assistance information is determined from information acquired while the vehicle is being driven, and the corresponding virtual three-dimensional display information is presented to the driver visually and/or audibly to inform or warn the driver. By applying augmented reality technology during driving, the driver can better grasp the driving information, and the user experience can be improved.
The augmented reality device for assisting driving provided by the embodiment of the invention is suitable for the method embodiment, and is not described again here.
Those skilled in the art will appreciate that the present invention includes apparatus directed to performing one or more of the operations described in the present application. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., computer) readable medium, including, but not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks, ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
It will be understood by those within the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. Those skilled in the art will appreciate that the computer program instructions may be implemented by a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the features specified in the block or blocks of the block diagrams and/or flowchart illustrations of the present disclosure.
Those of skill in the art will appreciate that various operations, methods, steps in the processes, acts, or solutions discussed in the present application may be alternated, modified, combined, or deleted. Further, various operations, methods, steps in the flows, which have been discussed in the present disclosure, may also be alternated, modified, rearranged, split, combined, or deleted. Further, steps, measures, schemes in the various operations, methods, procedures disclosed in the prior art and the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted.
The foregoing is only a partial embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention; these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (27)

1. A method for augmented reality for driving assistance, comprising:
determining driving auxiliary information based on information acquired in the driving process;
and displaying virtual three-dimensional display information corresponding to the driving auxiliary information.
2. The method of claim 1,
determining driving assistance information based on information acquired during driving, comprising: determining shielded driving assistance information based on information of a perceptible area acquired during driving;
displaying virtual three-dimensional display information corresponding to the driving auxiliary information, comprising: and displaying the virtual three-dimensional display information corresponding to the shielded driving auxiliary information.
3. The method according to claim 2, wherein, when the shielded driving assistance information comprises road surface road information and/or non-road traffic sign information, displaying the virtual three-dimensional display information corresponding to the shielded driving assistance information comprises at least one of the following:
displaying virtual three-dimensional display information corresponding to the shielded driving auxiliary information at the position of the shielded driving auxiliary information;
and displaying the virtual three-dimensional display information corresponding to the shielded driving auxiliary information at a preset display position.
4. The method according to claim 3, wherein the shielded driving assistance information is determined based on the information of the perceptible area acquired during driving, comprising at least one of the following steps:
if the shielded driving assistance information is only partially shielded, determining the shielded driving assistance information according to the perceptible part of the driving assistance information;
determining the shielded driving auxiliary information based on the position of the current vehicle and the reference object information of the perceptible area in the current driving;
determining the shielded driving assistance information based on multimedia information of the shielded driving assistance information acquired from angles other than the driver's visual angle;
enhancing and/or restoring multimedia information of the shielded driving assistance information in the perceptible area acquired during driving, and determining the shielded driving assistance information;
when the shielded driving auxiliary information comprises road surface road information, determining the shielded driving auxiliary information according to a map by aligning the current road with the map of the current road;
and determining the current sheltered driving auxiliary information according to other driving auxiliary information.
5. The method of claim 4, wherein after determining the shielded driving assistance information based on the information of the perceptible area acquired during driving, the method further comprises:
correcting the determined shielded driving auxiliary information;
displaying virtual three-dimensional display information corresponding to the shielded driving assistance information, comprising: and displaying virtual three-dimensional display information corresponding to the corrected driving auxiliary information at the corrected position.
6. The method according to claim 5, wherein correcting the determined shielded driving assistance information comprises at least one of:
when the shielded driving auxiliary information comprises lane related information, correcting the position of the shielded driving auxiliary information based on the driving tracks and/or road rut information of other vehicles in the preset range of the current vehicle;
when the shielded driving assistance information includes road surface road information, correcting the position of the shielded driving assistance information according to a map of a current road by aligning the current road with the map of the current road.
7. The method according to claim 2, wherein, when the shielded driving assistance information comprises lane-related information,
the displayed lane width is smaller than the actual lane width.
8. The method according to claim 2, wherein, when the shielded driving assistance information comprises blind area information, displaying the virtual three-dimensional display information corresponding to the shielded driving assistance information comprises:
displaying virtual three-dimensional display information corresponding to the blind area information in an extended area of a rear-view mirror.
9. The method according to claim 8, wherein, when the rear-view mirror is a side mirror, the virtual three-dimensional display information displayed in the extended area is generated from the real object corresponding to the virtual three-dimensional display information in accordance with the mirror-surface properties of the side mirror and the driver's field of view.
10. The method of claim 1,
determining driving assistance information based on information acquired in the driving process, comprising: acquiring traffic rules and/or traffic police action information of a current road section, and determining a presentation mode of the traffic rules through virtual traffic light information generated based on the traffic rules and/or through generated traffic police gesture meaning information;
displaying virtual three-dimensional display information corresponding to the driving auxiliary information, comprising: and displaying the converted virtual three-dimensional display information corresponding to the traffic rules and/or the traffic police action information of the current road section.
11. The method according to claim 1, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises at least one of:
when the abnormal rut information is sensed, displaying virtual three-dimensional display information corresponding to the determined abnormal rut area and/or virtual three-dimensional display information of warning information of the abnormal rut area in the area;
when the traffic sign of the road area which is driven by the current vehicle needs to be displayed, displaying the acquired virtual three-dimensional display information corresponding to the traffic sign of the road area which is driven by the vehicle; the acquired traffic signs which have traveled comprise distance indications; the displayed virtual three-dimensional display information comprises the acquired traveled traffic sign and a distance indication updated based on the distance traveled by the vehicle from the traveled traffic sign;
when the traffic sign and/or the traffic light at the intersection where the current vehicle is located are sensed to be present and the traffic sign and/or the traffic light meet the preset display condition, displaying virtual three-dimensional display information corresponding to the traffic sign and/or the traffic light at the intersection; the predetermined display condition includes at least one of: traffic signs and/or traffic lights damaged; the traffic signs and/or traffic lights are not clearly displayed; the traffic signs and/or traffic lights are not completely within the driver's current visual range;
when the key information in the operation dial plate needs to be displayed, displaying virtual three-dimensional display information corresponding to at least one of the following information: the position information of the key, the function name information of the key, the operation indication information of the key and the key;
when the information of the parking area needs to be displayed, displaying virtual three-dimensional display information corresponding to at least one of the parking areas which are allowed to park and are suitable for the parking area, the parking areas which are allowed to park and are not suitable for the parking area and the parking area which is not allowed to park;
at least one of a lane range, a lane line position, a road edge line position, a road traffic sign and a non-road traffic sign is determined by sensing road environment and/or road map information, and corresponding virtual three-dimensional display information is displayed.
12. The method of claim 11, wherein the step of determining driving assistance information based on information obtained during driving comprises at least one of:
determining whether the road rut information has abnormal rut information, and if the abnormal rut information exists, determining that an abnormal rut area exists;
when the traffic sign of the road area which is driven by the current vehicle needs to be displayed, the traffic sign of the road area which is driven by the current vehicle is determined from the acquired multimedia information and/or the traffic sign database;
when the parking area information needs to be displayed, determining at least one of a parking-permitted area which is suitable for parking, a parking-permitted area which is not suitable for parking and a parking-not-permitted area according to at least one of whether a parking-prohibited mark exists in an area around the current vehicle, the volume of the current vehicle and the current road surface condition;
determining driving auxiliary information corresponding to a target object to be displayed in the driving process; and generating driving auxiliary information corresponding to the target object based on the information acquired in the driving process.
13. The method according to claim 1, wherein displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
and highlighting the virtual three-dimensional display information corresponding to the rut information.
14. The method of claim 11, wherein displaying the acquired virtual three-dimensional display information corresponding to the traffic sign of the road region on which the vehicle has currently traveled comprises:
and according to the current position of the vehicle and the virtual three-dimensional display information corresponding to the traffic sign of the road area which has already traveled, adjusting the virtual three-dimensional display information corresponding to the traffic sign of the road area which has already traveled currently, and displaying the virtual three-dimensional display information corresponding to the adjusted traffic sign.
15. The method according to claim 1, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
determining a display mode corresponding to the virtual three-dimensional display information;
displaying virtual three-dimensional display information corresponding to the driving auxiliary information based on the determined display mode;
wherein the display mode comprises at least one of the following:
a display position of the virtual three-dimensional display information; displaying the posture of the virtual three-dimensional display information; the display size of the virtual three-dimensional display information; display start time of the virtual three-dimensional display information; display end time of the virtual three-dimensional display information; displaying duration of the virtual three-dimensional display information; the detail degree of the display content of the virtual three-dimensional display information; presenting the virtual three-dimensional display information; the interrelationship of display among a plurality of virtual three-dimensional display information; a contour of the virtual three-dimensional display information;
the presentation includes at least one of: a text mode; icon mode; an animation mode; a sound effect mode; a lighting mode; a vibration mode.
16. The method of claim 1, further comprising at least one of:
when a plurality of pieces of virtual three-dimensional display information to be displayed exist at the same time, combining the plurality of pieces of virtual three-dimensional display information to be displayed, and displaying the processed virtual three-dimensional display information;
when a plurality of pieces of virtual three-dimensional display information to be displayed are displayed simultaneously, the plurality of pieces of virtual three-dimensional display information to be displayed are integrated based on semantics, and the processed virtual three-dimensional display information is displayed.
17. The method of claim 1, further comprising at least one of:
displaying virtual three-dimensional display information corresponding to the driving auxiliary information with priority higher than the first preset priority at a remarkable position of the current visual field of the driver, and adjusting the position for displaying the virtual three-dimensional display information in real time according to the position of the current visual field of the driver;
displaying virtual three-dimensional display information corresponding to the driving auxiliary information with the priority higher than the first preset priority, and pausing and/or stopping displaying the virtual three-dimensional display information corresponding to the driving auxiliary information with the priority lower than the second preset priority.
18. The method according to claim 1, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
determining at least one of display start time, display end time and display duration of the virtual three-dimensional display information according to at least one of the current state of the vehicle, the current road condition information and the system delay condition of the equipment;
and displaying the virtual three-dimensional display information corresponding to the driving auxiliary information according to at least one of the determined display starting time, display ending time and display duration of the virtual three-dimensional display information.
19. The method of claim 1,
when virtual three-dimensional display information corresponding to a plurality of pieces of driving assistance information to be displayed simultaneously exists and an occlusion relationship exists among the plurality of pieces of virtual three-dimensional display information to be displayed, the method further includes at least one of:
according to the position relation among a plurality of pieces of virtual three-dimensional display information with shielding relation, only the part which is not shielded in the virtual three-dimensional display information is displayed;
respectively displaying virtual three-dimensional display information in the plurality of virtual three-dimensional display information with the shielding relation at different display time;
adjusting at least one of a display position, a content detail degree and a presentation mode of at least one piece of virtual three-dimensional display information in a plurality of pieces of virtual three-dimensional display information with an occlusion relationship, and displaying each piece of virtual three-dimensional display information in the plurality of pieces of virtual three-dimensional display information with the occlusion relationship according to the adjustment mode;
and aiming at least two road traffic signs in the virtual three-dimensional display information which cannot be distinguished by the shielding relation, displaying the direction arrows of the road traffic signs according to the current lane and the map of the current road.
20. The method according to claim 1 or 3, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
displaying virtual three-dimensional display information corresponding to the driving auxiliary information to be displayed at a preset display position;
wherein the preset display position comprises at least one of the following items:
a display position aligned with the real driving assistance information; the area position where the driver drives is not interfered; a prominent position of the driver's current field of view; a relatively open driver view; the position of the driver with insufficient attention.
21. The method of claim 1, further comprising:
rendering virtual three-dimensional display information to be displayed in advance;
when a preset display triggering condition is met, acquiring virtual three-dimensional display information to be displayed from the pre-rendered virtual three-dimensional display information, adjusting the presentation mode of the virtual three-dimensional display information according to the current environment, and displaying the virtual three-dimensional display information according to the adjusted presentation mode;
and adjusting the display mode of the virtual three-dimensional display information in real time according to the current environment, and displaying the virtual three-dimensional display information according to the adjusted display mode.
22. The method of claim 1, further comprising:
and when the display delay of the virtual three-dimensional display information exceeds the acceptable display delay estimated according to the vehicle speed in the driving process, adjusting the presentation mode of the virtual three-dimensional display information.
23. The method according to claim 22, wherein the adjusting the manner of presenting the virtual three-dimensional display information comprises:
adjusting a portion of the virtual three-dimensional display information into a pattern.
24. The method of claim 23, wherein the pattern is a striped pattern.
25. An apparatus for augmented reality for driving assistance, comprising:
the determining module is used for determining driving auxiliary information based on the information acquired in the driving process;
and the display module is used for displaying the virtual three-dimensional display information corresponding to the driving auxiliary information determined by the determination module.
26. An electronic device comprising a memory and a processor;
the memory has stored therein a computer program;
the processor, when executing the computer program, is configured to perform the method of any of claims 1 to 24.
27. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 24.
CN202211358721.0A 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance Pending CN115620545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211358721.0A CN115620545A (en) 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211358721.0A CN115620545A (en) 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance
CN201710737404.2A CN109427199B (en) 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201710737404.2A Division CN109427199B (en) 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance

Publications (1)

Publication Number Publication Date
CN115620545A true CN115620545A (en) 2023-01-17

Family

ID=65500433

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211358721.0A Pending CN115620545A (en) 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance
CN201710737404.2A Active CN109427199B (en) 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201710737404.2A Active CN109427199B (en) 2017-08-24 2017-08-24 Augmented reality method and device for driving assistance

Country Status (1)

Country Link
CN (2) CN115620545A (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102628012B1 (en) * 2018-10-23 2024-01-22 삼성전자주식회사 Method and apparatus of auto calibration
CN109910744B (en) * 2019-03-18 2022-06-03 重庆睿驰智能科技有限公司 LDW lane departure early warning system
CN109931944B (en) * 2019-04-02 2021-12-07 阿波罗智联(北京)科技有限公司 AR navigation method, AR navigation device, vehicle-side equipment, server side and medium
CN110070742A (en) * 2019-05-29 2019-07-30 浙江吉利控股集团有限公司 The recognition methods of high speed ring road speed limit, system and vehicle
CN110341601B (en) * 2019-06-14 2023-02-17 江苏大学 A-pillar blind area eliminating and driving assisting device and control method thereof
US11741704B2 (en) * 2019-08-30 2023-08-29 Qualcomm Incorporated Techniques for augmented reality assistance
CN112440888B (en) * 2019-08-30 2022-07-15 比亚迪股份有限公司 Vehicle control method, control device and electronic equipment
CN110737266B (en) * 2019-09-17 2022-11-18 中国第一汽车股份有限公司 Automatic driving control method and device, vehicle and storage medium
CN111107332A (en) * 2019-12-30 2020-05-05 华人运通(上海)云计算科技有限公司 HUD projection image display method and device
CN113160548B (en) * 2020-01-23 2023-03-10 宝马股份公司 Method, device and vehicle for automatic driving of vehicle
CN111366168B (en) * 2020-02-17 2023-12-29 深圳毕加索电子有限公司 AR navigation system and method based on multisource information fusion
CN111332317A (en) * 2020-02-17 2020-06-26 吉利汽车研究院(宁波)有限公司 Driving reminding method, system and device based on augmented reality technology
CN111272182B (en) * 2020-02-20 2021-05-28 武汉科信云图信息技术有限公司 Mapping system using block chain database
CN111815863B (en) * 2020-04-17 2022-06-03 北京嘀嘀无限科技发展有限公司 Vehicle control method, storage medium, and electronic device
CN111601279A (en) * 2020-05-14 2020-08-28 大陆投资(中国)有限公司 Method for displaying dynamic traffic situation in vehicle-mounted display and vehicle-mounted system
CN111681437B (en) * 2020-06-04 2022-10-11 北京航迹科技有限公司 Surrounding vehicle reminding method and device, electronic equipment and storage medium
CN113763566A (en) * 2020-06-05 2021-12-07 光宝电子(广州)有限公司 Image generation system and image generation method
CN111736701A (en) * 2020-06-24 2020-10-02 上海商汤临港智能科技有限公司 Vehicle-mounted digital person-based driving assistance interaction method and device and storage medium
CN111738191B (en) * 2020-06-29 2022-03-11 广州橙行智动汽车科技有限公司 Processing method for parking space display and vehicle
CN111717028A (en) * 2020-06-29 2020-09-29 深圳市元征科技股份有限公司 AR system control method and related device
CN112801012B (en) * 2021-02-05 2021-12-17 腾讯科技(深圳)有限公司 Traffic element processing method and device, electronic equipment and storage medium
CN113536984B (en) * 2021-06-28 2022-04-26 北京沧沐科技有限公司 Image target identification and tracking system based on unmanned aerial vehicle
CN113247015A (en) * 2021-06-30 2021-08-13 厦门元馨智能科技有限公司 Vehicle driving auxiliary system based on somatosensory operation integrated glasses and method thereof
CN113470394A (en) * 2021-07-05 2021-10-01 浙江商汤科技开发有限公司 Augmented reality display method and related device, vehicle and storage medium
WO2023010236A1 (en) * 2021-07-31 2023-02-09 华为技术有限公司 Display method, device and system
CN113593303A (en) * 2021-08-12 2021-11-02 上海仙塔智能科技有限公司 Method for reminding safe driving between vehicles, vehicle and intelligent glasses
CN113989466B (en) * 2021-10-28 2022-09-20 江苏濠汉信息技术有限公司 Beyond-the-horizon assistant driving system based on situation cognition

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100326674B1 (en) * 1999-03-11 2002-03-02 이계안 Method for alarming beyond the lane in vehicle
ATE382492T1 (en) * 1999-07-30 2008-01-15 Pirelli METHOD AND SYSTEM FOR CONTROLLING THE BEHAVIOR OF A VEHICLE BY CONTROLING ITS TIRES
US6977630B1 (en) * 2000-07-18 2005-12-20 University Of Minnesota Mobility assist device
JP2005275691A (en) * 2004-03-24 2005-10-06 Toshiba Corp Image processing device, image processing method, and image processing program
JP2005303450A (en) * 2004-04-07 2005-10-27 Auto Network Gijutsu Kenkyusho:Kk Apparatus for monitoring vehicle periphery
JP2007030603A (en) * 2005-07-25 2007-02-08 Mazda Motor Corp Vehicle travel assisting device
KR101075615B1 (en) * 2006-07-06 2011-10-21 포항공과대학교 산학협력단 Apparatus and method for generating a auxiliary information of moving vehicles for driver
JP2008305162A (en) * 2007-06-07 2008-12-18 Aisin Aw Co Ltd Vehicle traveling support device and program
JP2010055157A (en) * 2008-08-26 2010-03-11 Panasonic Corp Intersection situation recognition system
FR2938365A1 (en) * 2008-11-10 2010-05-14 Peugeot Citroen Automobiles Sa Data e.g. traffic sign, operating device for use in motor vehicle e.g. car, has regulation unit automatically acting on assistance device when speed is lower than real speed of vehicle and when no action by driver
US8704653B2 (en) * 2009-04-02 2014-04-22 GM Global Technology Operations LLC Enhanced road vision on full windshield head-up display
US8384532B2 (en) * 2009-04-02 2013-02-26 GM Global Technology Operations LLC Lane of travel on windshield head-up display
US9406232B2 (en) * 2009-11-27 2016-08-02 Toyota Jidosha Kabushiki Kaisha Driving support apparatus and driving support method
WO2011114386A1 (en) * 2010-03-19 2011-09-22 三菱電機株式会社 Information offering apparatus
CN102012230A (en) * 2010-08-27 2011-04-13 杭州妙影微电子有限公司 Road live view navigation method
JP5576781B2 (en) * 2010-12-16 2014-08-20 株式会社メガチップス Image processing system, image processing system operation method, host device, program, and program creation method
US20140240115A1 (en) * 2011-09-26 2014-08-28 Toyota Jidosha Kabushiki Kaisha Driving assistance system for vehicle
DE102012201896A1 (en) * 2012-02-09 2013-08-14 Robert Bosch Gmbh Driver assistance method and driver assistance system for snowy roads
US9135754B2 (en) * 2012-05-07 2015-09-15 Honda Motor Co., Ltd. Method to generate virtual display surfaces from video imagery of road based scenery
JP5792678B2 (en) * 2012-06-01 2015-10-14 株式会社日本自動車部品総合研究所 Lane boundary detection device and program
US20130328867A1 (en) * 2012-06-06 2013-12-12 Samsung Electronics Co. Ltd. Apparatus and method for providing augmented reality information using three dimension map
JP5910753B2 (en) * 2012-11-21 2016-04-27 トヨタ自動車株式会社 Driving support device and driving support method
US20140267415A1 (en) * 2013-03-12 2014-09-18 Xueming Tang Road marking illumination system and method
DE102013210729A1 (en) * 2013-06-10 2014-12-11 Robert Bosch Gmbh Method and device for signaling a visually at least partially hidden traffic object for a driver of a vehicle
SE538984C2 (en) * 2013-07-18 2017-03-14 Scania Cv Ab Determination of lane position
DE102013219038A1 (en) * 2013-09-23 2015-03-26 Continental Teves Ag & Co. Ohg Method for detecting a traffic police officer by a driver assistance system of a motor vehicle, and driver assistance system
CN103489314B (en) * 2013-09-25 2015-09-09 广东欧珀移动通信有限公司 Real-time road condition display method and device
KR101478135B1 (en) * 2013-12-02 2014-12-31 현대모비스(주) Augmented reality lane change helper system using projection unit
DE102013226760A1 (en) * 2013-12-19 2015-06-25 Robert Bosch Gmbh Method and device for detecting object reflections
US9335178B2 (en) * 2014-01-28 2016-05-10 GM Global Technology Operations LLC Method for using street level images to enhance automated driving mode for vehicle
US9959838B2 (en) * 2014-03-20 2018-05-01 Toyota Motor Engineering & Manufacturing North America, Inc. Transparent display overlay systems for vehicle instrument cluster assemblies
KR20150140449A (en) * 2014-06-05 2015-12-16 팅크웨어(주) Electronic apparatus, control method of electronic apparatus and computer readable recording medium
CN104401325A (en) * 2014-11-05 2015-03-11 江苏大学 Dynamic regulation and fault tolerance method and dynamic regulation and fault tolerance system for auxiliary parking path
JP6447060B2 (en) * 2014-11-28 2019-01-09 アイシン精機株式会社 Vehicle periphery monitoring device
JP2018510373A (en) * 2015-02-10 2018-04-12 モービルアイ ビジョン テクノロジーズ リミテッド Sparse map for autonomous vehicle navigation
KR101714185B1 (en) * 2015-08-05 2017-03-22 엘지전자 주식회사 Driver Assistance Apparatus and Vehicle Having The Same
US20170161950A1 (en) * 2015-12-08 2017-06-08 GM Global Technology Operations LLC Augmented reality system and image processing of obscured objects
US20170169612A1 (en) * 2015-12-15 2017-06-15 N.S. International, LTD Augmented reality alignment system and method
KR101916993B1 (en) * 2015-12-24 2018-11-08 엘지전자 주식회사 Display apparatus for vehicle and control method thereof
CN105678316B (en) * 2015-12-29 2019-08-27 大连楼兰科技股份有限公司 Active driving method based on multi-information fusion
US20170206426A1 (en) * 2016-01-15 2017-07-20 Ford Global Technologies, Llc Pedestrian Detection With Saliency Maps
US9625264B1 (en) * 2016-01-20 2017-04-18 Denso Corporation Systems and methods for displaying route information
GB201605137D0 (en) * 2016-03-25 2016-05-11 Jaguar Land Rover Ltd Virtual overlay system and method for occluded objects
CN105929539B (en) * 2016-05-19 2018-10-02 彭波 Automobile 3D image collection and naked-eye 3D head-up display system
CN107784864A (en) * 2016-08-26 2018-03-09 奥迪股份公司 Vehicle driving assistance method and system
CN106338828A (en) * 2016-08-31 2017-01-18 京东方科技集团股份有限公司 Vehicle-mounted augmented reality system, method and equipment
CN106494309B (en) * 2016-10-11 2019-06-11 广州视源电子科技股份有限公司 Vehicle vision blind area picture display method and device and vehicle-mounted virtual system
CN106864393A (en) * 2017-03-27 2017-06-20 深圳市精能奥天导航技术有限公司 Advanced driver assistance function upgrade system

Also Published As

Publication number Publication date
CN109427199B (en) 2022-11-18
CN109427199A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN109427199B (en) Augmented reality method and device for driving assistance
US11767024B2 (en) Augmented reality method and apparatus for driving assistance
CN107848416B (en) Display control device, display device, and display control method
EP2851864B1 (en) Vehicular display apparatus, vehicular display method, and vehicular display program
US9267808B2 (en) Visual guidance system
US10131276B2 (en) Vehicle sightline guidance apparatus
EP3530521B1 (en) Driver assistance method and apparatus
CN111414796A (en) Adaptive transparency of virtual vehicles in a simulated imaging system
US10546422B2 (en) System and method for augmented reality support using a lighting system's sensor data
JP6136565B2 (en) Vehicle display device
KR20130012629A (en) Augmented reality system for head-up display
US20230135641A1 (en) Superimposed image display device
CN111273765A (en) Display control device for vehicle, display control method for vehicle, and storage medium
CN111469763A (en) Parking assist system utilizing parking space occupancy readings
JP2012247847A (en) Information transmission control device for vehicle and information transmission control device
CN114987460A (en) Method and apparatus for blind spot assist of vehicle
JP5259277B2 (en) Driving assistance device
CN113165510B (en) Display control device, method, and computer program
US20210268961A1 (en) Display method, display device, and display system
CN109987025B (en) Vehicle driving assistance system and method for night environment
JP6102509B2 (en) Vehicle display device
US10864856B2 (en) Mobile body surroundings display method and mobile body surroundings display apparatus
CN115447600B (en) Vehicle anti-congestion method based on deep learning, controller and storage medium
JP2021149641A (en) Object presentation device and object presentation method
CN118220197A (en) Auxiliary driving method and device and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination