CN109427199A - Method and device for augmented-reality-assisted driving - Google Patents
- Publication number
- CN109427199A (application CN201710737404.2A)
- Authority
- CN
- China
- Prior art keywords
- information
- display
- dimensional
- virtual
- shows
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096716—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information does not generate an automatic action on the vehicle control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Atmospheric Sciences (AREA)
- Human Computer Interaction (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
- Processing Or Creating Images (AREA)
Abstract
An embodiment of the invention provides a method and device for augmented-reality-assisted driving, applied in the field of augmented reality. The method comprises: determining driving assistance information based on information obtained while driving, and then displaying virtual three-dimensional display information corresponding to the driving assistance information. By using augmented reality during vehicle travel, the invention can help the driver better grasp driving information while the vehicle is in motion, and can improve the user experience.
Description
Technical field
The present invention relates to the field of augmented reality, and in particular to a method and device for augmented-reality-assisted driving.
Background technique
AR (Augmented Reality) technology can add virtual objects and/or virtual information to a real scene, so that the user obtains a sensory experience beyond reality; that is, the user can perceive a scene in which real objects and virtual objects and/or virtual information exist simultaneously.
While driving a vehicle, because road conditions are complex and the driver has certain limitations, it is difficult for the driver to fully grasp the driving information during vehicle travel, which can lead to accidents. By applying AR technology to vehicle driving, the driver can be helped to better grasp driving information during travel, thereby operating the vehicle more safely and reducing accidents. How to use AR technology while the driver is driving has therefore become a critical issue.
Summary of the invention
To overcome, or at least partially solve, the above technical problem, the following technical solutions are proposed:
According to one aspect, an embodiment of the present invention provides a method for augmented-reality-assisted driving, comprising:
determining driving assistance information based on information obtained while driving; and
displaying virtual three-dimensional display information corresponding to the driving assistance information.
According to another aspect, an embodiment of the present invention further provides a device for augmented-reality-assisted driving, comprising:
a determining module, for determining driving assistance information based on information obtained while driving; and
a display module, for displaying virtual three-dimensional display information corresponding to the driving assistance information determined by the determining module.
The present invention provides a method and device for augmented-reality-assisted driving. Compared with the prior art, the invention determines driving assistance information based on information obtained while driving and then displays the corresponding virtual three-dimensional display information. That is, the invention determines the driving assistance information from information acquired during vehicle travel, and presents the corresponding virtual three-dimensional display information to the driver visually and/or audibly, so as to notify or warn the driver. Augmented reality thus helps the driver better grasp driving information during travel, and the user experience can be improved.
Additional aspects and advantages of the invention will be set forth in part in the following description; they will become apparent from the description or will be learned by practice of the invention.
Detailed description of the invention
The above and additional aspects and advantages of the invention will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a method for augmented-reality-assisted driving according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of determining road information when the road surface is not completely covered, in an embodiment of the present invention;
Fig. 3 is a schematic diagram of determining road information when the road surface is completely covered but the middle lane isolation railing is visible, in an embodiment of the present invention;
Fig. 4 is a schematic diagram of a road that is completely covered, where the road surface and road edge cannot be distinguished, in an embodiment of the present invention;
Fig. 5 is a schematic diagram of determining road information when the road surface is completely covered and the middle lane isolation railing is invisible, in an embodiment of the present invention;
Fig. 6 is a schematic diagram of displaying a track in an embodiment of the present invention;
Fig. 7 is a schematic diagram of enhanced display of a track when the track is unclear, in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the relationship between displayed AR information and the driver's line of sight, in an embodiment of the present invention;
Fig. 9 is a schematic diagram of displaying a complete traffic sign/warning sign when the sign is partially or fully covered, in an embodiment of the present invention;
Fig. 10 is a schematic diagram of determining the traffic sign and/or indication sign corresponding to the current position from historical records, in an embodiment of the present invention;
Fig. 11 is a schematic diagram of determining the side rearview mirror extended area, in an embodiment of the present invention;
Fig. 12 is a schematic diagram of the visible area of the physical side rearview mirror and of the side rearview mirror extended area, in an embodiment of the present invention;
Fig. 13 is a schematic diagram of the interior rearview mirror extended area, in an embodiment of the present invention;
Fig. 14 is a schematic diagram of displaying a virtual traffic light, in an embodiment of the present invention;
Fig. 15 is a schematic diagram of displaying AR information corresponding to a traffic officer's gestures, in an embodiment of the present invention;
Fig. 16 is a schematic diagram of an AR information display method for the keys needed to operate a control panel, in an embodiment of the present invention;
Fig. 17 is a schematic diagram of displaying areas suitable and unsuitable for parking, together with the corresponding augmented-reality driving-assistance display information, in an embodiment of the present invention;
Fig. 18 is a schematic diagram of the device preparing and rendering a larger range of AR information in advance to reduce delay, in an embodiment of the present invention;
Fig. 19 is a schematic diagram of how the driving-region display differs at different vehicle speeds, in an embodiment of the present invention;
Fig. 20 is a schematic diagram of a display method for multiple pieces of AR information that need to be shown simultaneously, in an embodiment of the present invention;
Fig. 21 is a schematic diagram of the device displaying AR information on the right side when gaze statistics show that the driver's attention is biased to the left, in an embodiment of the present invention;
Fig. 22 is a schematic structural diagram of a device for augmented-reality-assisted driving according to an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numbers denote, throughout, the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "the", and "said" used herein may also include the plural. It should be further understood that the word "comprising" used in this specification indicates the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In addition, "connection" or "coupling" as used herein may include wireless connection or wireless coupling. The word "and/or" as used here includes any unit of, and all combinations of, one or more of the associated listed items.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by those of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have a meaning consistent with their meaning in the context of the prior art and, unless specifically defined as here, will not be interpreted in an idealized or overly formal sense.
Those skilled in the art will appreciate that "terminal" and "terminal device" as used herein include both devices having only a wireless signal receiver, with no transmitting capability, and devices with receiving and transmitting hardware capable of two-way communication over a bidirectional communication link. Such a device may include: a cellular or other communication device with or without a single-line or multi-line display; a PCS (Personal Communications Service) terminal that may combine voice, data processing, fax, and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio-frequency receiver. "Terminal" and "terminal device" as used herein may be portable, transportable, installed in a vehicle (air, sea, and/or land), or suited and/or configured to operate locally and/or to operate in distributed form at any position on the earth and/or in space. The "terminal" or "terminal device" may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device), and/or a mobile phone with music/video playback functions, or a device such as a smart TV or a set-top box.
An ADAS (Advanced Driver Assistance System) is intended to help a person drive a motor vehicle safely and to reduce accidents. Based on road conditions, an ADAS can provide feedback to the driver visually, audibly, or haptically in order to notify or warn the driver, and may include, without limitation, lane departure warning, a lane keeping system, and the like.
Existing ADASs are mostly aimed at helping the driver when pavement conditions are good, but lack effective solutions for challenging environments such as snow-covered or muddy road surfaces.
Existing advanced driver assistance systems that use augmented reality usually rely on an independent on-board screen; the screen is small, the types of information displayed are fixed, the driver's perception is unnatural, and the delay is large, so such systems cannot effectively help the driver in challenging driving scenes. How to adaptively select the objects/information to be displayed for the driver, how to present them in a perceptually natural way, how to display information with low latency, and how to display multiple objects/pieces of information simultaneously therefore remain problems to be solved.
For challenging environments such as snow-covered and muddy road surfaces, embodiments of the present invention can estimate, by perceiving the road environment and/or using road map information, at least one of the lane extent, the lane line positions, the road edge line positions, road traffic signs, and non-road traffic signs, and generate the corresponding AR information (i.e., virtual three-dimensional display information).
Wherein, for perceiving the road environment, the device may perceive it in at least one of the following ways: using at least one sensor carried by the device itself; using at least one sensor built into the host vehicle; obtaining information via communication from at least one of devices of the same type, devices of different types, and other vehicles; and obtaining information using GPS (Global Positioning System).
Further, the perceived area of the device may be the union of the sensing ranges of the various modes described above.
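The union of sensing ranges described above can be sketched minimally by modeling each mode's coverage as a set of road grid cells; the cell coordinates and mode names are illustrative assumptions, not part of the patent.

```python
# Model each sensing mode's coverage as a set of road grid cells; the
# perceived area is the union of all modes' ranges. Cell ids are illustrative.
device_sensors = {(0, 0), (0, 1), (1, 0)}   # sensors carried by the device
vehicle_sensors = {(1, 0), (1, 1)}          # sensors built into the host vehicle
v2x_received   = {(2, 1), (2, 2)}           # received from other vehicles/devices
gps_map_tiles  = {(0, 0), (2, 2), (3, 3)}   # obtained via GPS/map

perceived_area = device_sensors | vehicle_sensors | v2x_received | gps_map_tiles
print(sorted(perceived_area))
```

Overlapping cells (e.g. `(1, 0)` seen by both the device and the vehicle) appear once, which is exactly the set-union behavior the paragraph describes.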
For embodiments of the present invention, the AR information (i.e., virtual three-dimensional display information) may include, without limitation, at least one of: an AR object, AR text, an AR picture, and an AR animation. The embodiments of the present invention impose no limitation in this respect.
For adaptively selecting the AR information to be displayed, embodiments of the present invention adaptively judge whether one or more kinds of AR information need to be displayed, and generate the corresponding content, based on at least one of the perceived road environment, the host vehicle state, and the driver's intention.
For presenting AR information in a perceptually natural way, embodiments of the present invention display AR information at the physically correct position (i.e., with correct relative position, pose, and scale with respect to the corresponding real object, such as correct occlusion relations) and/or at the position the driver is accustomed to (i.e., so that the driver does not need to change driving habits).
Embodiments of the present invention can be used with a head-mounted display device (e.g., 3D augmented-reality/mixed-reality glasses) and/or an on-board display device placed in the vehicle (e.g., a 3D head-up display). In particular, by using a head-mounted display device, the device can extend the AR information display space to the entire three-dimensional space.
Wherein, to reduce delay, the device and method of the embodiments adaptively reduce two kinds of delay: attention delay and display delay. Attention delay is defined as the time from when the device displays the AR information until the driver notices it; display delay is defined as the time the device spends generating, rendering, and displaying the AR information.
Fig. 1 is a schematic flowchart of a method for augmented-reality-assisted driving provided by an embodiment of the present invention.
Step 101: determine driving assistance information based on information obtained while driving. Step 102: display virtual three-dimensional display information corresponding to the driving assistance information.
Further, step 101 includes step 1011, and step 102 includes step 1021.
Wherein, step 1011: determine occluded driving assistance information based on information about the perceived area obtained while driving; step 1021: display the virtual three-dimensional display information corresponding to the occluded driving assistance information.
Wherein, the occluded driving assistance information includes at least one of: road surface information, non-road traffic sign information, and blind zone information.
Wherein, road surface information includes at least one of: lanes, lane lines, road edge lines, road traffic signs, and road traffic markings.
Wherein, non-road traffic sign information includes at least one of: roadside traffic signs and traffic signs above the road.
Wherein, blind zone information includes information in the rearview mirror blind zone.
Wherein, traffic signs include at least one of: caution signs, prohibition signs, warning signs, guide signs, tourist signs, indication markings, auxiliary signs, and notice signs.
Wherein, traffic markings include at least one of: indication markings, prohibition markings, and warning markings.
Further, when the occluded driving assistance information includes road surface information and/or non-road traffic sign information, displaying the corresponding virtual three-dimensional display information comprises: displaying, at the position of the occluded driving assistance information, the virtual three-dimensional display information corresponding to it.
Further, determining the occluded driving assistance information based on information about the perceived area obtained while driving can be implemented in at least one of the following ways: if the occluded driving assistance information is only partially occluded, determining it from its perceivable part; determining it based on the position of the current vehicle and reference-object information of the perceived area in the current drive; determining it based on multimedia information of the occluded driving assistance information acquired from viewing angles other than the driver's; determining it by enhancing and/or restoring multimedia information of the occluded driving assistance information in the perceived area obtained while driving; when the occluded driving assistance information includes road surface information, determining it from a map by aligning the current road with the map of the current road; and determining the currently occluded driving assistance information from other driving assistance information.
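One of the modes above — aligning the current road with its map to recover occluded road surface information — can be sketched as follows. The map geometry, the single lateral-offset alignment, and the coordinates are simplifying assumptions; a real system would use full pose registration.

```python
# Align perceived road geometry with the map by estimating a lateral offset
# from the visible (unoccluded) edge points, then read the occluded lane
# line back from the shifted map. Coordinates (x, y in meters) are illustrative.
map_lane_line = [(x, 3.0) for x in range(5)]   # map: lane line at y = 3.0
visible_edge  = [(0, 3.2), (1, 3.2)]           # perceived part, sensed with an offset

# Estimate the map-to-perception offset from the visible part.
offset = sum(py - my for (_, py), (_, my) in zip(visible_edge, map_lane_line)) / len(visible_edge)

# Recover the occluded portion by applying the offset to the map geometry.
recovered = [(x, y + offset) for (x, y) in map_lane_line[2:]]
print(recovered)
```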
Further, after determining the occluded driving assistance information based on information about the perceived area obtained while driving, the method further includes: correcting the determined occluded driving assistance information.
Wherein, displaying the corresponding virtual three-dimensional display information comprises: displaying, at the corrected position, the virtual three-dimensional display information corresponding to the corrected driving assistance information.
Further, correcting the determined occluded driving assistance information can be implemented in at least one of the following ways: when the occluded driving assistance information includes lane-related information, correcting its position based on the wheel tracks of other vehicles within a preset range of the current vehicle and/or road surface track information; and, when the occluded driving assistance information includes road surface information, correcting its position from a map by aligning the current road with the map of the current road.
Further, when the occluded driving assistance information includes lane-related information, the displayed lane width is smaller than the actual lane width.
Wherein, the lane-related information includes at least one of: lanes, lane lines, road edge lines, road traffic signs, and road traffic markings.
Further, when the occluded driving assistance information includes blind zone information, displaying the corresponding virtual three-dimensional display information comprises: displaying the virtual three-dimensional display information corresponding to the blind zone information in the extended area of the rearview mirror.
Wherein, when the rearview mirror is a side rearview mirror, the virtual three-dimensional display information shown in the extended area is generated from the real object corresponding to it, according to the specular reflection property of the side mirror and the driver's viewpoint.
Further, step 101 includes: acquiring the traffic rules and/or traffic-officer action information of the current road segment, and converting the presentation mode of the determined traffic rules and/or traffic-officer action information of the current road segment; step 102 includes: displaying the virtual three-dimensional display information corresponding to the converted traffic rules and/or traffic-officer action information of the current road segment.
Further, displaying the virtual three-dimensional display information corresponding to the driving assistance information can be implemented in at least one of the following ways: when abnormal track information is perceived, displaying the virtual three-dimensional display information corresponding to the determined abnormal track region and/or virtual three-dimensional warning information indicating that the region is an abnormal track region; when a traffic sign of a road area the current vehicle has already passed needs to be displayed, displaying the virtual three-dimensional display information corresponding to the acquired traffic sign of that road area; when a traffic sign and/or traffic light exists at the intersection where the current vehicle is located and satisfies a predetermined display condition, displaying the virtual augmented display information corresponding to that intersection traffic sign and/or traffic light; when key information on a control panel needs to be displayed, displaying the virtual three-dimensional display information corresponding to at least one of: the key position, the key function name, the key operation instructions, and the key itself; and, when parking area information needs to be displayed, displaying the virtual three-dimensional display information corresponding to at least one of the following regions: parking permitted and suitable for parking, parking permitted but not suitable for parking, and parking not permitted.
Wherein, perception may include at least one of machine recognition, probing, and detection by equipment; details are not repeated here.
Wherein, abnormal track information is the driving track, while driving, of a vehicle that satisfies the abnormal-track judgment condition; an abnormal track region is a region in which abnormal track information exists.
Wherein, the abnormal-track judgment condition includes at least one of: track information whose track edge line direction is inconsistent with the direction of the lane lines and/or lane edge lines; track information whose track edge line direction is inconsistent with that of the track edge lines as a whole; and track information showing braking marks.
Wherein, whether a generated track edge is abnormal can be judged as follows: judge whether the direction of the vector field built from the track edge lines generated at a certain moment differs significantly from the direction of the vector field built from the other/overall track edge lines; if there is a significant difference, the generated track edge line is determined to be an abnormal track edge line; and/or judge whether the track edge line shows obvious braking marks; if there are obvious braking marks, the generated track edge line is determined to be an abnormal track edge line.
Wherein, the predetermined display condition includes at least one of: the traffic sign and/or traffic light is damaged; the traffic sign and/or traffic light is displayed unclearly; the traffic sign and/or traffic light is incomplete within the driver's current field of view; and a driver instruction.
Further, determining the driving assistance information based on information obtained while driving can be implemented in at least one of the following ways: determining whether abnormal track information exists in the road surface track information and, if it exists, determining that an abnormal track region exists; when the traffic sign of a road area the current vehicle has passed needs to be displayed, determining that traffic sign from acquired multimedia information and/or from a traffic sign database; and, when parking area information needs to be displayed, determining at least one of the regions "parking permitted and suitable", "parking permitted but not suitable", and "parking not permitted" according to at least one of: whether a no-parking sign exists in the area around the current vehicle, the current vehicle's dimensions, and the current road surface conditions.
Further, step 102 includes: displaying, with enhancement, the virtual three-dimensional display information corresponding to the track information.
Further, displaying the virtual three-dimensional display information corresponding to the acquired traffic sign of a road area the current vehicle has passed comprises: adjusting the virtual three-dimensional display information corresponding to the traffic sign of the passed road area according to the current vehicle position and that virtual three-dimensional display information, and displaying the adjusted virtual three-dimensional display information corresponding to the traffic sign.
Further, step 102 comprises: determining the display mode corresponding to the virtual three-dimensional display information; and displaying the virtual three-dimensional display information corresponding to the driving assistance information based on the determined display mode.
Wherein, the display mode includes at least one of: the display position of the virtual three-dimensional display information; its display pose; its display size; its display start time; its display end time; its display duration; the level of detail of its displayed content; its presentation mode; and the mutual relationship between multiple pieces of virtual three-dimensional display information when displayed.
Wherein, the presentation mode includes at least one of: text; icon; animation; audio; light; and vibration.
Further, the method also includes at least one of the following: when multiple pieces of virtual three-dimensional display information are to be displayed simultaneously, merging them and displaying the processed virtual three-dimensional display information; and, when displaying multiple pieces of virtual three-dimensional display information simultaneously, integrating them based on semantics and displaying the processed virtual three-dimensional display information.
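One simple form of the merging described above is to combine pieces of AR information that would be rendered at nearly the same position into a single item. The 1-meter grouping radius and the item format are illustrative assumptions.

```python
# Merge AR items that fall within a small radius of each other into one
# combined item, so nearby labels are shown together rather than overlapping.
def merge_ar_items(items, radius=1.0):
    merged = []
    for item in items:
        for group in merged:
            gx, gy = group["pos"]
            if abs(gx - item["pos"][0]) <= radius and abs(gy - item["pos"][1]) <= radius:
                group["texts"].append(item["text"])
                break
        else:
            merged.append({"pos": item["pos"], "texts": [item["text"]]})
    return merged

items = [
    {"pos": (10.0, 2.0), "text": "speed limit 60"},
    {"pos": (10.4, 2.1), "text": "no overtaking"},
    {"pos": (30.0, 2.0), "text": "exit 500 m"},
]
print(merge_ar_items(items))
```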
Further, the method also includes at least one of the following: displaying the virtual three-dimensional display information corresponding to driving assistance information above a first preset priority at a salient position in the driver's current field of view, and adjusting the display position of that virtual three-dimensional display information in real time according to the driver's gaze position; and displaying the virtual three-dimensional display information corresponding to driving assistance information above the first preset priority while pausing and/or stopping the display of the virtual three-dimensional display information corresponding to driving assistance information below a second preset priority.
Wherein, significant position can be the central area of driver's present viewing field, driver's focus vision region, driver
Region, the front of sight residence time length face at least one in the region of driver.
Wherein, the first pre-set priority and/or the second pre-set priority can be according to driver's instruction definitions;It can also be with
It is intended to according to the road conditions that perceive, this vehicle situation, driver, at least one the semantic analysis of driving auxiliary information, it is adaptive
That answers is classified driving auxiliary information.
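The two-threshold priority rule above can be sketched as a simple partition: items above the first preset priority go to the salient position, items below the second preset priority are paused, and the rest are displayed normally. The function name, priority scale, and example items are our own illustrative assumptions:

```python
def select_for_display(items, first_priority, second_priority):
    """Partition pending AR items by priority, per the two-threshold rule:
    above first_priority -> salient position; below second_priority -> paused;
    everything else -> normal display."""
    salient, normal, paused = [], [], []
    for name, prio in items:
        if prio > first_priority:
            salient.append(name)
        elif prio < second_priority:
            paused.append(name)
        else:
            normal.append(name)
    return salient, normal, paused

items = [("collision warning", 9), ("speed limit", 5), ("poi ad", 1)]
print(select_for_display(items, first_priority=7, second_priority=3))
# → (['collision warning'], ['speed limit'], ['poi ad'])
```

The thresholds themselves would come from the driver instruction or the adaptive classification described above.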
Further, step 102 includes: determining, according to at least one of the current state of the vehicle, the current traffic information, and the system latency of the device, at least one of the display start time, the display end time, and the display duration of the virtual three-dimensional display information; and displaying the virtual three-dimensional display information corresponding to the driving assistance information according to the determined display start time, display end time, and/or display duration.
Further, when multiple items of virtual three-dimensional display information corresponding to driving assistance information are to be displayed simultaneously and an occlusion relation exists among them, the method may also include at least one of the following: displaying, according to the positional relationship among the mutually occluding items, only the part of each item that is not occluded; displaying the mutually occluding items at different display times; and adjusting at least one of the display position, the content level of detail, and the presentation mode of at least one of the mutually occluding items, and displaying each item according to the adjustment.
Further, step 102 includes: displaying the virtual three-dimensional display information corresponding to the driving assistance information to be displayed at a preset display position.
The preset display position includes at least one of the following: a display position aligned with the real driving assistance information; a region that does not interfere with the driver's driving; a salient position in the driver's current field of view; a relatively open position in the driver's field of view; and a position receiving insufficient driver attention.
Further, the method also includes: rendering the virtual three-dimensional display information to be displayed in advance; when a preset display trigger condition is met, obtaining the virtual three-dimensional display information to be displayed from the pre-rendered information, adjusting its presentation mode according to the current environment, and displaying it with the adjusted presentation mode; and adjusting the display mode of the virtual three-dimensional display information in real time according to the current environment and displaying it with the adjusted display mode.
The preset display trigger condition may be defined according to a driver instruction; alternatively, it may be defined adaptively according to at least one of the perceived road conditions, the state of the ego vehicle, the driver's intention, and a semantic analysis of the driving assistance information.
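The pre-render-then-trigger flow can be sketched minimally as a cache keyed by item, consulted only when the preset display trigger condition fires. The class, method names, and string payload are illustrative placeholders, not the patent's interface:

```python
class PrerenderCache:
    """Holds AR items rendered ahead of time; items are fetched for display
    only when a preset display trigger condition fires."""

    def __init__(self):
        self._cache = {}

    def prerender(self, key, render_fn):
        # Render in advance, before the item is needed on screen.
        self._cache[key] = render_fn()

    def fetch_on_trigger(self, key, triggered):
        # Return the pre-rendered item only if the trigger condition is met.
        if not triggered:
            return None
        return self._cache.get(key)

cache = PrerenderCache()
cache.prerender("lane_overlay", lambda: "rendered-lane-mesh")
print(cache.fetch_on_trigger("lane_overlay", triggered=True))  # → rendered-lane-mesh
```

The presentation-mode adjustment for the current environment (color, brightness, etc.) would be applied to the fetched item just before display.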
An embodiment of the invention provides an augmented reality method for assisting driving. Compared with the prior art, the embodiment determines driving assistance information based on information obtained in the driving scene and then displays the corresponding virtual three-dimensional display information. That is, the driving assistance information is determined from information acquired while the vehicle is travelling, and the corresponding virtual three-dimensional display information is presented to the driver visually and/or audibly, so as to notify or warn the driver. Using augmented reality in this way helps the driver grasp the travelling information better during the journey, which in turn improves the user experience.
For the embodiment of the invention, Fig. 1 shows the overall flow of the display method of the in-vehicle assistance device described herein (hereinafter, the device). The method can be applied to an augmented/mixed reality head-mounted (near-eye) display device worn by the driver in the driving scene, such as 3D augmented reality glasses, and/or to a display device mounted on the vehicle, such as a 3D head-up display. Note that the device may include multiple display devices of the same or different kinds; when the display devices differ, the implementations differ, as detailed below.
In the overall flow of the device of this embodiment, the steps are as follows. Step S110 (not marked in the figure): the device determines one or more target driving assistance information items to be displayed. Step S120 (not marked in the figure): the device obtains information, processes it, and generates the content of the target driving assistance information. Step S130 (not marked in the figure): the device determines the display mode of the one or more target driving assistance information items. Step S140 (not marked in the figure): the device displays the virtual three-dimensional AR information corresponding to the one or more target driving assistance information items.
AR information other than AR objects is information presented in at least one of a text mode, an icon mode, an animation mode, an audio mode, a light mode, a vibration mode, and the like, such as an arrow icon accompanied by text. An AR object can include, but is not limited to, information presented in the form of a real object, such as a virtual traffic control device.
Hereinafter, although AR information and AR objects are often mentioned together, an AR object within the AR information generally refers to a virtual object that needs to be aligned with a real object when displayed (though not all AR objects need such alignment), whereas other AR information usually does not need to be aligned with a real object when displayed.
In step S110, the device may select zero target driving assistance information items, i.e., display no virtual three-dimensional display information corresponding to driving assistance information under the current scene.
In step S110, the device may determine the target driving assistance information to be displayed adaptively by recognizing the scene, may obtain it through user interaction, or may use the two approaches in combination.
The target driving assistance information can include, but is not limited to, prompt information in the driving scene associated with traffic safety, the environment, traffic and road conditions, information signs, traffic rules, and in-vehicle information.
Specifically, the target objects in the driving scene related to the target driving assistance information can include, but are not limited to: lane lines, median barriers, lanes, surrounding motor vehicles, surrounding non-motorized vehicles, surrounding pedestrians, surrounding trees, surrounding buildings, pavement tracks, traffic and road-condition signs, traffic police, objects in the blind zone of the side mirrors, objects on the rear seats, the area around the rear of the vehicle, the driver's operating console, and so on.
In step S130, determining the display mode includes at least one of the following: the display position, display posture, display size, display start time, and display end time of one or more items of AR information; and, when multiple items of AR information are displayed, their positions and/or postures, the occasions at which their display starts and ends, their display durations, their content levels of detail, their presentation modes, and the mutual relationships among them.
Embodiment one
This embodiment provides a display method for driving assistance information. The method is used on a road that is partially covered or occluded, to display prompt information for at least one of lane lines, road edge lines, pavement markings, traffic marking lines, and similar information signs, and to display the corresponding augmented reality driving assistance information, as shown in Fig. 2. The occlusion of the road's lane lines can be, but is not limited to, fallen leaves, snow, standing water, dirt, oil, or a combination thereof.
The method in this embodiment includes:
Step S1101 (not marked in the figure): the device determines whether road information needs to be displayed.
Step S1101 can be one implementation of the device determining one or more target driving assistance information items to be displayed.
For this embodiment, the device determines the road information around the ego vehicle by means of image detection and recognition; the road information can include, but is not limited to, lane lines, road edge lines, pavement markings, and traffic marking lines. The device can stay in a detection and recognition state at all times and open the display function adaptively under partially occluded road-surface conditions, or it can start the detection/recognition and/or display function according to a user instruction. The user instruction can be carried by gesture, voice, a physical button, or biometric identification such as fingerprint recognition.
Step S1201 (not marked in the figure): the device detects and recognizes the road information and generates the content of the target driving assistance information.
Step S1201 can be one implementation of obtaining information, processing it, and generating the content of the target driving assistance information.
For this embodiment, the device locates and recognizes, from one or more images/videos by image processing and recognition techniques, the visible parts (i.e., the parts not completely covered or occluded) of at least one of the lane lines, road edge lines, pavement markings, and traffic marking lines. It then connects the visible segments of a lane line into a complete lane line (completing the dashes if the lane line is dashed), connects the visible segments of a road edge line into a complete road edge line, and identifies the type of a pavement marking and/or traffic marking line from its visible part.
Specifically, in a single image, the visible segments of lane lines and road edge lines can be extracted by an edge extraction algorithm and/or a color clustering algorithm, and erroneous segments can be removed using the prior knowledge that lane lines and road edge lines are usually regular straight lines or smooth arcs. The contours of partially occluded pavement markings and/or traffic marking lines can likewise be extracted by an edge extraction algorithm and/or a color clustering algorithm and matched against a pavement marking database to obtain the complete pavement marking and/or traffic marking line.
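The completion of an occluded straight lane line from its visible segments can be illustrated with an ordinary least-squares line fit over the visible points. This is a deliberately simplified sketch under the "lane lines are usually regular straight lines" prior stated above; it does not implement the edge extraction or color clustering the patent actually relies on, and real lane lines may be arcs:

```python
def complete_lane_line(points):
    """Fit x = a*y + b through the visible lane-line points by least squares,
    so occluded stretches can be filled in along the fitted line.
    points: list of (x, y) image or road-plane coordinates."""
    n = len(points)
    sy = sum(y for _, y in points)
    sx = sum(x for x, _ in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)  # slope
    b = (sx - a * sy) / n                          # intercept
    return a, b

# Visible fragments of the line x = 0.5*y + 2, with an occluded gap in the middle.
pts = [(2.0, 0.0), (2.5, 1.0), (4.5, 5.0), (5.0, 6.0)]
a, b = complete_lane_line(pts)
x_missing = a * 3.0 + b  # reconstruct the occluded point at y = 3
print(round(a, 3), round(b, 3), round(x_missing, 3))  # → 0.5 2.0 3.5
```

Parameterizing as x(y) rather than y(x) keeps the fit well-conditioned for near-vertical lane lines in image coordinates.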
For this embodiment, lane lines, road edge lines, pavement markings, and traffic marking lines can also be located directly by a detection and recognition algorithm. During recognition, domain knowledge of road traffic and/or a road map can assist the detection and recognition. For example, on a partially covered or occluded road surface both a white dashed line and a white solid line appear as an irregular white dashed line; based on domain knowledge of road traffic, the device can judge whether the detected white dashes contain a segment longer than a specific length, and thereby decide whether the detected line corresponds to a true white dashed line or a true white solid line. Further, based on domain knowledge of road traffic and a road map, the device can judge from the position and traffic meaning of the detected white dashed line whether it corresponds to a true white dashed line or a true white solid line.
For example, if the detected white dashed line is located in the middle of the road, domain knowledge of road traffic may indicate that it corresponds to a true white solid line.
Further, during recognition, domain knowledge of road traffic and/or a road map can be used to generate the correct pavement marking and/or traffic marking line. For example, a straight-ahead arrow and a right-turn arrow on the road surface cannot be distinguished when their heads are covered or occluded, both appearing as a rectangle; but if, according to the road map, the lane containing the detected rectangle is a right-turn lane, the device can conclude that the pavement marking is a right-turn arrow.
In particular, when multiple images are available in space and/or in time, the device can use them jointly to recognize the lane lines, road edge lines, pavement markings, and traffic marking lines, removing errors that arise with a single image and keeping the recognition result consistent across space and/or time. For example, the device can recognize that a lane line on a certain road section corresponds to a true white solid line, then track it while travelling in the same lane and keep identifying it as a white solid line.
Step S1301 (not marked in the figure): determining the display mode of the target driving assistance information content.
Step S1301 can be one implementation of determining the display mode of one or more target driving assistance information items.
For this embodiment, the device obtains the position and posture of the real object relative to the display device by a localization algorithm, so that the AR information to be displayed can be aligned with the corresponding real object.
In particular, to reduce latency, for a given real object the device can use the motion model of the ego vehicle and the current relative position, posture, and/or scale relation between the object and the vehicle to predict their relative position, posture, and/or scale at a future moment, and thereby prepare the AR information corresponding to the target driving assistance information in advance.
Specifically, when a single camera is used to obtain the road information around the ego vehicle, the local road around the vehicle can, as an approximation, be regarded as a plane; feature points can then be extracted from the road image and the relative position and posture between the road and the camera obtained by solving a homography matrix. More accurately, visual odometry can be used to track features across the image sequence, where the features are captured on the real objects that need alignment, such as lane line segments, road edge segments, pavement marking contours, and traffic marking line contours, yielding the relative position and posture of the real object with respect to the camera. In particular, the extraction and tracking of feature points can be assisted by image recognition, to remove erroneous matches and accelerate the computation.
For this embodiment, with a single camera the scale information of a real object can be obtained in the following three ways (used alone or in combination): 1) the device can obtain scale from the fixed installation height of the camera on the ego vehicle, calibrated in advance; 2) the device can obtain the physical size of a real object from prior knowledge in the field of road traffic, and thus obtain scale — for example, the regulated local lane width can be obtained from prior knowledge; 3) when the device uses an image sequence, scale can be obtained from information such as the actual speed and distance travelled by the vehicle.
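Way 1), scale from the calibrated mounting height of the camera, reduces under a flat-road pinhole model to a one-line formula. The focal length, principal-point row, and height below are made-up example values, and the model assumes the optical axis is parallel to a flat road:

```python
def ground_distance(v_pixel, v0, focal_px, cam_height):
    """Distance along the ground to a road point imaged at row v_pixel,
    for a camera at known height above a flat road with its optical axis
    parallel to the road (pinhole model): d = f * H / (v - v0),
    where v0 is the principal-point row and f is the focal length in pixels."""
    return focal_px * cam_height / (v_pixel - v0)

# Camera calibrated 1.5 m above the road, focal length 800 px, principal row 400.
d = ground_distance(v_pixel=500.0, v0=400.0, focal_px=800.0, cam_height=1.5)
print(d)  # → 12.0
```

Rows closer to the principal row (the horizon under these assumptions) map to larger distances, which is why the formula degenerates as v_pixel approaches v0.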
For this embodiment, when a single camera is used together with at least one of a stereo camera, a depth camera, a laser sensor, a radar sensor, and an ultrasonic sensor to obtain the road information around the ego vehicle, the relative position and posture between the real object and the camera can be obtained in a way similar to the single-camera case described above, and details are not repeated here. In particular, when at least one calibrated stereo camera, depth camera, laser sensor, radar sensor, or ultrasonic sensor is used, the scale information of the real object can be acquired directly, and can also be cross-checked against the scale estimate obtained in the single-camera way described above.
For this embodiment, scale information can also be obtained from other sensors combined with the single camera. For example, wheel-encoder data fused with single-camera data can estimate scale; likewise, an inertial measurement unit (including accelerometer and gyroscope) fused with single-camera data can estimate scale. The device can also combine the data of these sensors to obtain scale information.
For this embodiment, by the above means we can obtain the position, posture, and scale relation between the real object and at least one of the ego vehicle and the outward-facing camera on the device (the camera shooting the environment outside the vehicle).
For this embodiment, in order to align the AR information with the real object, the device further needs to estimate the relative position, posture, and scale relation between the eyes and the real object. The estimation steps and manner depend on the type of display device; the two cases of a single head-mounted display device and a single vehicle-mounted display device are described separately below. When the device includes multiple head-mounted display devices and/or multiple vehicle-mounted display devices, the following methods can be applied with relatively straightforward combination and adjustment, and are not repeated.
1) For the case where the display device is a single head-mounted display device, the relative position and posture between the eyes and the display device are relatively fixed and can be calibrated in advance (with occasional recalibration during use; for example, after the user adjusts the position of the head-mounted display device, the position, posture, and scale relation need to be recalibrated).
1.1) If the position, posture, and scale relation between the real object and the outward-facing camera on the device (the camera shooting the environment outside the vehicle) have been obtained: since the position, posture, and scale relation between the outward camera on the device and the display device are relatively fixed, the device can compute the relative position, posture, and scale relation between the eyes and the real object (eyes ← calibration → display device ← calibration → outward camera on the device ← estimation → real object).
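The calibration/estimation chains written above amount to composing rigid transforms: the eye-to-object relation is the product of the calibrated and estimated links. A minimal sketch with translation-only 4×4 homogeneous matrices (all offsets are made-up example values; a real chain would also carry rotations):

```python
def mat_mul(A, B):
    """Multiply two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous transform for a pure translation."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Chain: eyes <- display (calibrated) <- outward camera (calibrated) <- object (estimated)
eye_T_display = translation(0.0, -0.1, 0.0)   # eyes sit 0.1 m above display origin
display_T_cam = translation(0.05, 0.0, 0.0)   # outward camera offset on the device
cam_T_object = translation(0.0, 0.0, 10.0)    # estimated: object 10 m ahead of camera

eye_T_object = mat_mul(mat_mul(eye_T_display, display_T_cam), cam_T_object)
print([row[3] for row in eye_T_object])  # → [0.05, -0.1, 10.0, 1.0]
```

Each arrow in the patent's chains corresponds to one matrix factor; swapping a calibrated link for an estimated one changes where the value comes from, not how it composes.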
1.2) If the position, posture, and scale relation between the real object and the outward-facing camera on the ego vehicle (the camera shooting the environment outside the vehicle) have been obtained, the relative position and posture between the display device and that outward camera still need to be obtained. Depending on the device's hardware implementation, there are the following two ways, 1.2.1) and 1.2.2):
1.2.1) The device can use the outward camera on the device to obtain the relative position, posture, and scale between the outward camera on the device and the outward camera on the ego vehicle. This can be obtained by attaching a positioning marker at a place whose position, posture, and scale are fixed relative to the vehicle-mounted display device, and computing the relation between the outward camera on the device and the outward camera on the vehicle through the marker; it can also be obtained by treating the outward camera on the vehicle as an object in the scene and applying the image-based and/or multi-sensor-fusion feature-point tracking and/or detection approaches described above, such as SLAM (Simultaneous Localization and Mapping) and target tracking techniques. That is: eyes ← calibration → display device ← calibration → outward camera on the device ← estimation → outward camera on the ego vehicle ← estimation → real object.
1.2.2) The device can use the inward-facing camera on the ego vehicle (a camera shooting the interior of the vehicle, e.g., the driver's seat) to obtain the relative position, posture, and scale between the display device and that inward camera. This can be based on a positioning marker on the display device, and/or on the image-based and/or multi-sensor-fusion feature-point tracking and/or detection approaches described above, such as SLAM (Simultaneous Localization and Mapping) and target tracking techniques. The relative position, posture, and scale between the inward camera on the vehicle and the outward camera on the vehicle are relatively fixed and can be calibrated in advance (with occasional recalibration during use; for example, after the vehicle jolts severely, the position, posture, and scale relation need to be recalibrated). That is: eyes ← calibration → display device ← estimation → inward camera on the ego vehicle ← calibration → outward camera on the ego vehicle ← estimation → real object.
2) For the case where the display device is a single vehicle-mounted display device, the relative position and posture between the display device and the outward camera on the ego vehicle are relatively fixed and can be calibrated in advance (with occasional recalibration during use; for example, after the vehicle jolts severely, the position, posture, and scale relation need to be recalibrated). In particular, the relative position, posture, and scale between an outward camera on the device and the outward camera on the vehicle can also be considered relatively fixed, and the two outward cameras may even be the same one; in this case there is no need to distinguish whether the outward camera is the one on the vehicle or the one on the device. To obtain the relative position, posture, and scale between the eyes and the real object, the device therefore only needs the relative position, posture, and scale between the eyes and the display device. Depending on the device's hardware implementation, there are the following two ways, 1.3) and 1.4):
1.3) The device can use an outward-facing camera worn by the driver to obtain the relative position, posture, and scale between the worn outward camera and the vehicle-mounted display device. The relative position, posture, and scale between the worn outward camera and the eyes can be considered relatively fixed and can be calibrated in advance (with occasional recalibration during use; for example, after the user adjusts the position of the head-worn camera, the position, posture, and scale relation need to be recalibrated). The relation can be obtained by attaching a positioning marker at a place whose position is fixed relative to the vehicle-mounted display device, and computing the relative position and posture between the worn outward camera and the vehicle-mounted display device through the marker; it can also be obtained by treating the vehicle-mounted display device as an object in the scene and applying the image-based and/or multi-sensor-fusion feature-point tracking and/or detection approaches described above, such as SLAM (Simultaneous Localization and Mapping) and target tracking techniques. That is: eyes ← calibration → head-worn outward camera ← estimation → display device ← calibration → outward camera ← estimation → real object.
1.4) The device can use the inward-facing camera on the ego vehicle (a camera shooting the interior of the vehicle, e.g., the driver's seat) to obtain the relative position, posture, and scale between the eyes and that inward camera. This can be based on a positioning marker worn by the driver, where the relative position, posture, and scale between the eyes and the worn marker can be considered relatively fixed and can be calibrated in advance (with occasional recalibration during use; for example, after the user adjusts the position of the head-worn positioning marker, the position, posture, and scale relation need to be recalibrated). It can also be based on image-based head/eye/gaze localization and tracking: the device locates the driver's head from the image or video of the inward camera on the vehicle and derives the relative position and posture between the eyes and the inward camera from the head localization result; or the device locates the eyes directly from the image or video of the inward camera by image-based eye localization and tracking, obtaining the relative position and posture between the eyes and the inward camera. The relative position, posture, and scale between the inward camera on the vehicle and the vehicle-mounted display device are relatively fixed and can be calibrated in advance (with occasional recalibration during use; for example, after the vehicle jolts severely, the position, posture, and scale relation need to be recalibrated). That is: eyes ← estimation → inward camera on the ego vehicle ← calibration → display device ← calibration → outward camera on the ego vehicle ← estimation → real object.
For this embodiment, besides determining the display position, posture, and scale of the AR information, the device still needs to determine its presentation mode, which can include, but is not limited to, color, brightness, transparency, icon form, and so on. For AR information that needs to be aligned with a real object, the device preferentially presents the AR information in a color and form consistent with the real object. For example, if the true lane line on the road should be a white dashed line, the device preferentially presents the AR lane line as a white dashed line. If presenting the AR information in a color and form consistent with the real object would keep the driver from perceiving it clearly, the device adaptively selects a better color and form. The device acquires road images or video through one or more outward cameras, projects the AR information onto the image or video at the estimated position, posture, and scale in the intended form, and obtains the contrast between the AR information and the surrounding scene by image/video analysis; it then judges whether the intended form is suitable and, if not, switches to a more distinguishable presentation according to the scene brightness, color, and so on. For example, when the road is partially occluded by snow, presenting the AR lane line as a white dashed line would give too little contrast against the road surface, so the device can adaptively choose differently, e.g., present the AR lane line as a blue dashed line.
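The contrast check described above can be sketched with a relative-luminance ratio. The 1.5 threshold and the WCAG-style luminance weights are our own assumptions; the patent only requires some measure of contrast between the AR information and the surrounding scene:

```python
def luminance(rgb):
    """Relative luminance of an sRGB color (simple linear approximation)."""
    r, g, b = (c / 255.0 for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pick_presentation_color(scene_rgb, preferred_rgb, fallback_rgb, min_ratio=1.5):
    """Keep the color consistent with the real object if it contrasts enough
    with the surrounding scene; otherwise switch to a more distinguishable
    fallback, mirroring the snow/white-lane-line example in the text."""
    def ratio(a, b):
        la, lb = luminance(a), luminance(b)
        hi, lo = max(la, lb), min(la, lb)
        return (hi + 0.05) / (lo + 0.05)
    if ratio(scene_rgb, preferred_rgb) >= min_ratio:
        return preferred_rgb
    return fallback_rgb

snow = (245, 245, 245)
white = (255, 255, 255)
blue = (30, 90, 220)
print(pick_presentation_color(snow, white, blue))  # → (30, 90, 220)
```

Against a snow-covered surface the white-on-white ratio falls below the threshold, so the sketch falls back to blue, matching the example in the text.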
Step S1401 (not marked in the figure): displaying the generated AR information corresponding to the target driving assistance information.
Step S1401 can be one implementation of displaying the virtual three-dimensional AR information corresponding to one or more target driving assistance information items.
For this embodiment, the device displays the AR information on the display device with the presentation mode determined in step S1301, according to the relative position, posture, and scale between the eyes and the real object, so that the displayed AR information is aligned with the corresponding real object.
Embodiment two
An embodiment of the invention provides a display method for driving assistance information. The method is used on a road that is completely covered or occluded but whose median lane-isolation railing (or other visible traffic sign barrier) is still perceivable, to display prompt information for information signs such as lane lines, lanes, tracks, pavement markings, traffic marking lines, and road conditions, and to display the corresponding augmented reality driving assistance information, as shown in Fig. 3. The occlusion of the road's lane lines can be, but is not limited to, snow, standing water, dirt, oil, or a combination thereof.
The method of this embodiment includes:
Step S1102 (not marked in the figure): the device determines whether road information needs to be displayed.
Step S1102 can be one implementation of the device determining one or more target driving assistance information items to be displayed.
For this embodiment, the device determines the road information around the ego vehicle by image recognition; the road information can include, but is not limited to, lane lines, road edge lines, and information signs such as traffic and road-condition signs. The device can stay in a recognition state at all times and open the display function adaptively under road-surface conditions where the road is completely covered but the median lane-isolation railing (or other visible traffic sign barrier) is still perceivable, or it can start the recognition and/or display function according to a user instruction. The user instruction can be carried by gesture, voice, a physical button, or biometric identification such as fingerprint recognition.
Step S1202 (not marked in the figure): the device perceives the road information and generates the content of the target driving assistance information. Step S1202 is one implementation of acquiring information, processing it, and generating the content of the target driving assistance information.
For this embodiment of the present invention, the device detects, recognizes, and locates the median railing (or other visible traffic-sign barrier), estimates the road width on the ego vehicle's driving side, and estimates the positions of the lane lines and the road edge. Specifically, the device first recognizes and locates the still-visible median railing (or other visible traffic-sign barrier) in one or more images/videos by image processing and recognition techniques, and uses it as the reference for the middle of the road. Second, when a railing (or other visible traffic-sign barrier) also exists at the road edge, the device can locate it by similar techniques. The region between the road-edge railing (or other visible traffic-sign barrier) and the median railing (or other visible traffic-sign barrier) is the driving region in the ego vehicle's direction of travel.
For this embodiment of the present invention, when no railing (or other visible traffic-sign barrier) exists at the road edge, the device can use at least one of a monocular camera, stereo camera, depth camera, laser sensor, radar sensor, and ultrasonic sensor to obtain the distance between the median railing (or other visible traffic-sign barrier) and objects outside the lane (for example pedestrians, bicycles, trees, houses). When obtaining the distance, the orientation of the camera and/or other sensors is taken into account, and the measured distance is corrected to the direction perpendicular to the median line, yielding a robust statistical shortest distance between the median railing (or other visible traffic-sign barrier) and the objects outside the lane. This robust statistical shortest distance should be a statistic over distances measured repeatedly at multiple positions while the ego vehicle is traveling, and it is the upper limit of the lane width on the ego vehicle's driving side. When a road map is available, this upper limit can be further verified and refined against the road width shown on the map.
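The perpendicular correction and the robust statistic described above can be sketched as follows. This is a minimal illustration; the function names, the quantile choice, and the example readings are assumptions for demonstration, not part of the disclosure:

```python
import math

def perpendicular_distance(measured, sensor_yaw_deg):
    """Project a raw range reading onto the direction perpendicular to
    the median railing, given the sensor's yaw relative to that
    perpendicular direction."""
    return measured * math.cos(math.radians(sensor_yaw_deg))

def robust_shortest_distance(readings, quantile=0.25):
    """Robust statistic over readings taken at several positions along
    the road: a low quantile rather than the raw minimum, so a single
    spuriously small reading does not dominate the width estimate."""
    corrected = sorted(perpendicular_distance(d, a) for d, a in readings)
    idx = int(quantile * (len(corrected) - 1))
    return corrected[idx]

# (range in meters, sensor yaw in degrees) from several ego positions;
# the 1.2 m reading is a spurious outlier that the quantile skips.
readings = [(4.2, 10.0), (4.0, 5.0), (1.2, 0.0), (3.9, 0.0), (4.1, 8.0)]
lane_width_upper = robust_shortest_distance(readings)  # ≈ 3.9 m
```

A map-derived road width, when available, would then clip this upper bound.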
The orientation of the camera and/or other sensors can be obtained as follows. The device first identifies, in the image sequence, the extension direction of the median railing (or other visible traffic-sign barrier), and computes the orientation relation between the camera and this direction. The relative position, posture, and scale relation between the camera and the other sensors on the ego vehicle can be considered fixed and can be calibrated in advance (occasional recalibration is needed during the validity period; for example, after severe jolting of the vehicle, the position, posture, and scale relation need to be recalibrated), so the orientation of the camera and/or other sensors can be obtained.
For this embodiment of the present invention, based on the lane-width upper limit, domain knowledge of road traffic, and/or a road map, the device can predict the approximate position of every lane line and of the road edge line from prior knowledge such as lane-line width and/or lane-line count. The use of a road map can be combined with positioning methods including, but not limited to, wireless-signal positioning and GPS (Global Positioning System) positioning to determine the map region where the ego vehicle is located; the road map can be stored in advance in the storage space of the device or the ego vehicle, or obtained through network communication.
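The prediction of lane-line positions from the width upper limit and a lane-width prior might be sketched as below. The nominal 3.5 m default and all names are illustrative assumptions:

```python
def predict_lane_lines(road_width, nominal_lane_width=3.5):
    """Estimate the lane count from the road-width upper bound and a
    prior nominal lane width, then return the offsets (distance from
    the median reference) of the dividing lines between lanes."""
    n_lanes = max(1, round(road_width / nominal_lane_width))
    lane_width = road_width / n_lanes
    return [i * lane_width for i in range(1, n_lanes)]

# A 10.6 m carriageway is most consistent with 3 lanes of ~3.53 m,
# giving dividers near 3.53 m and 7.07 m from the median reference.
lines = predict_lane_lines(10.6)
```

The road edge line would then sit at the full width offset, and a road map, where available, refines these priors.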
For this embodiment of the present invention, the device can use one or more cameras and/or stereo cameras and/or depth cameras to acquire images and/or video of the surrounding vehicles and of the completely covered road surface, and use object detection, object recognition, and tracking techniques to analyze the driving trajectories of other vehicles within a preset range of the current vehicle and/or the wheel tracks on the road surface, so as to correct the predicted positions of the lane lines and road edge lines more accurately. When high-precision GPS and a high-precision road map are available, the device can align the current road with the road map through visual positioning information and GPS positioning information, improving the prediction accuracy of the lane-line and road-edge-line positions. The detection and recognition of wheel tracks is detailed in Embodiment four.
For this embodiment of the present invention, when a high-precision road map is available, the device can align the current road with the road map by matching the perceived road environment through visual positioning information and/or GPS positioning information, so that the device can obtain the road-surface information from the road map and generate the corresponding target driving assistance information.
Step S1302 (not marked in the figure): determine the display mode of the content of the target driving assistance information. Step S1302 is one implementation of determining the display mode of one or more pieces of target driving assistance information.
This step is similar to step S1301 of Embodiment one and is not repeated. In particular, considering that the estimate of the lane extent may contain errors, i.e. the estimated lane extent may include part of an adjacent true lane or of the road edge, the device narrows both sides of the estimated lane region inward and prepares target driving assistance information only for the middle region of the estimated lane region, to avoid guiding the vehicle into the border region between two lanes or between a lane and the road edge.
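The inward narrowing of the estimated lane region amounts to shrinking an interval symmetrically; a minimal sketch, in which the margin ratio is an assumed tuning parameter:

```python
def narrow_lane_region(left, right, margin_ratio=0.15):
    """Shrink the estimated lane interval [left, right] symmetrically
    so that AR guidance is drawn only over the middle of the estimate,
    leaving the possibly erroneous border regions unmarked."""
    margin = margin_ratio * (right - left)
    return left + margin, right - margin

inner = narrow_lane_region(0.0, 3.6)  # ≈ (0.54, 3.06)
```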
Step S1402 (not marked in the figure): display the generated AR information corresponding to the target driving assistance information. Step S1402 is one implementation of displaying the virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information. It is similar to step S1401 of Embodiment one and is not repeated.
Embodiment three
This embodiment provides a display method for driving assistance information. The method applies to a road whose surface is completely covered and occluded and which has no median railing (or other visible traffic-sign barrier): it displays prompt information for identifications such as lane lines, lanes, wheel tracks, pavement markings, traffic stripes, and road conditions, and shows the corresponding augmented-reality driving assistance information. The covering on the lane lines can be, but is not limited to, accumulated snow, standing water, dirt, oil, etc.
The method of the embodiment of the present invention includes:
Step S1103 (not marked in the figure): the device determines whether road information needs to be displayed. Step S1103 is one implementation of the device determining the one or more pieces of target driving assistance information that need to be displayed.
For this embodiment of the present invention, the device determines the road information around the ego vehicle by means of image recognition; this can include, but is not limited to, identifications such as lane lines, road edge lines, road traffic, and road conditions. The device can keep recognition always on and adaptively enable the display function under road-surface conditions where the road is completely covered and no median railing (or other visible traffic-sign barrier) exists; it can also start the recognition and/or display function according to a user instruction. The user instruction can be carried by gesture, voice, a physical button, or biometric identification such as a fingerprint.
Step S1203 (not marked in the figure): the device perceives the road information and generates the content of the target driving assistance information. Step S1203 is one implementation of acquiring information, processing it, and generating the content of the target driving assistance information.
For this embodiment of the present invention, the device estimates the road width, estimates the position of the median separation line dividing the two directions of travel (if the road is not one-way), and estimates the positions of the lane lines and the road edge. First, the device estimates the road width. When railings (or other visible traffic-sign barriers) exist at both road edges, the device can locate and recognize the railings at both edges from one or more images/videos by image processing and recognition techniques; the region between them is the road-surface region.
When a railing (or other visible traffic-sign barrier) exists at only one road edge, the device can locate that edge by image processing and recognition techniques and use it as the reference. For the side with no railing (or other visible traffic-sign barrier), the device can use at least one of a monocular camera, stereo camera, depth camera, laser sensor, radar sensor, and ultrasonic sensor to obtain the distance between the reference and objects outside the opposite road edge (for example bicycles, trees, houses). When obtaining the distance, the orientation of the camera and/or other sensors is taken into account and the measured distance is corrected to the direction perpendicular to the reference, yielding a robust statistical shortest distance between the road edge that has a railing (or other visible traffic-sign barrier) and the objects outside the opposite road edge that has none (for example pedestrians, bicycles, trees, houses). This robust statistical shortest distance should be a statistic over distances measured repeatedly at multiple positions while the ego vehicle is traveling, and it is the upper limit of the road-surface width. When a road map is available, this upper limit can be further verified and refined against the road width shown on the map. The orientation of the camera and/or other sensors can be obtained as follows: the device first identifies, in the image sequence, the extension direction of the railing (or other visible traffic-sign barrier) at the road edge and computes the orientation relation between the camera and this direction; the relative position, posture, and scale relation between the camera and the other sensors on the ego vehicle can be considered fixed and can be calibrated in advance (occasional recalibration is needed during the validity period, for example after severe jolting of the vehicle, the position, posture, and scale relation need to be recalibrated), so the orientation of the camera and/or other sensors can be obtained.
When no railing (or other visible traffic-sign barrier) exists at either road edge, as shown in Figure 4, the device can use at least one of a monocular camera, stereo camera, depth camera, laser sensor, radar sensor, and ultrasonic sensor to obtain the sum of the distances between the ego vehicle and the objects outside the two road edges (for example pedestrians, bicycles, trees, houses). When obtaining this sum, the orientations of the cameras and the width between the two side sensors of the ego vehicle are taken into account, and the measured distances are corrected to the direction perpendicular to the road direction, yielding a robust statistical shortest distance between the objects outside the two road edges. This robust statistical shortest distance should be a statistic over distances measured repeatedly at multiple positions while the ego vehicle is traveling, and it is the upper limit of the road-surface width. The orientation of the camera and/or other sensors can be obtained as follows.
Specifically, the device first identifies, in the image sequence, the extension direction of the arrangement of roadside trees and/or building surfaces, and computes the orientation relation between the camera and this direction; the relative position, posture, and scale relation between the camera and the other sensors on the ego vehicle can be considered fixed and can be calibrated in advance (occasional recalibration is needed during the validity period, for example after severe jolting of the vehicle, the position, posture, and scale relation need to be recalibrated), so the orientation of the camera and/or other sensors can be obtained.
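For the two-sided case, the width upper bound combines the perpendicular components of the two side ranges with the lateral spacing of the side sensors on the ego vehicle. A minimal sketch under assumed names and example numbers:

```python
import math

def road_width_upper(d_left, yaw_left_deg, d_right, yaw_right_deg,
                     sensor_baseline):
    """Upper bound on the road-surface width when no railing exists on
    either side: the perpendicular components of the two side range
    readings plus the lateral width between the two side sensors."""
    return (d_left * math.cos(math.radians(yaw_left_deg))
            + d_right * math.cos(math.radians(yaw_right_deg))
            + sensor_baseline)

# 3 m to the left-side objects, 4 m to the right, sensors 1.8 m apart,
# both sensors already facing perpendicular to the road direction.
width = road_width_upper(3.0, 0.0, 4.0, 0.0, 1.8)  # ≈ 8.8 m
```

As in the one-sided case, a robust statistic over repeated measurements at multiple positions would be taken before treating this as the upper limit.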
For this embodiment of the present invention, as shown in Figure 5, within the estimated upper limit of the road-surface extent, and based on domain knowledge of road traffic and/or a road map, the device can predict the approximate position of the road median line from the lane-line widths and/or lane-line counts of each direction, and, taking this median line as the reference, estimate the approximate position of every lane line and of the road edge lines.
Considering that, if the estimated lane extent is correct, that extent will not simultaneously contain vehicles or wheel tracks traveling in both directions, the device can use one or more cameras and/or stereo cameras and/or depth cameras to acquire images and/or video of the surrounding vehicles and of the completely covered road surface, and use object detection, object recognition, and tracking techniques to analyze the driving trajectories of other vehicles within a preset range of the current vehicle and/or the wheel tracks on the road surface, so as to correct the predicted positions of the lane lines and road edge lines more accurately. When high-precision GPS and a high-precision road map are available, the device can align the current road with the road map through visual positioning information and GPS positioning information, improving the estimation accuracy of the lane-line and road-edge-line positions. The detection and recognition of wheel tracks is detailed in Embodiment four.
For this embodiment of the present invention, when a high-precision road map is available, the device can align the current road with the road map by matching the perceived road environment through visual positioning information and/or GPS positioning information, so that the device can obtain the road-surface information from the road map and generate the corresponding target driving assistance information.
Step S1303 (not marked in the figure): determine the display mode of the content of the target driving assistance information. Step S1303 is one implementation of determining the display mode of one or more pieces of target driving assistance information.
This step is similar to step S1301 of Embodiment one and is not repeated. In particular, considering that the estimate of the lane extent may contain errors, i.e. the estimated lane extent may include part of an adjacent true lane or of the road edge, the device narrows both sides of the estimated lane region inward and prepares target driving assistance information only for the middle region of the estimated lane region, to avoid guiding the vehicle into the border region between two lanes or between a lane and the road edge.
Step S1403 (not marked in the figure): display the generated AR information corresponding to the target driving assistance information. Step S1403 is one implementation of displaying the virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information. It is similar to step S1401 of Embodiment one and is not repeated.
Embodiment four
The embodiment of the present invention provides a display method for driving assistance information. The method applies to a road whose surface is completely covered and occluded: it displays prompt information for wheel-track identifications and shows the corresponding augmented-reality driving assistance display information. The covering on the lane lines can be, but is not limited to, accumulated snow, standing water, dirt, oil, etc.
The method of the embodiment of the present invention includes:
Step S1104 (not marked in the figure): the device determines whether wheel-track-related information needs to be displayed. Step S1104 is one implementation of the device determining the one or more pieces of target driving assistance information that need to be displayed.
For this embodiment of the present invention, the device determines the road information around the ego vehicle by means of image recognition; this can include, but is not limited to, identifications such as lane lines, road edge lines, pavement markings, traffic stripes, and road conditions. The device can keep detection and recognition always on and adaptively enable the display function under road-surface conditions where the road is completely covered and occluded; it can also start the recognition and/or display function according to a user instruction. The user instruction can be carried by gesture, voice, a physical button, or biometric identification such as a fingerprint.
Step S1204 (not marked in the figure): the device detects wheel tracks on the road surface. Step S1204 is one implementation of acquiring information, processing it, and generating the content of the target driving assistance information.
For this embodiment of the present invention, the device first acquires road-surface images or video with one or more cameras, and uses image processing and recognition techniques to detect and locate the wheel tracks on the road surface. Specifically, in a single image the device can use image recognition to locate the wheel-track regions; the tracks can be extracted by edge-extraction algorithms and/or color-clustering algorithms. The device can connect the detected track-edge line segments into continuous track edge lines according to their directions, and can also build the track-edge segments into a vector field. When multiple images are available, spatially and/or temporally, the images can jointly recognize the wheel tracks and make the track edge lines more continuous across images; by feature tracking and pattern matching, recognition errors that arise from a single image are removed and the recognition results are kept spatially and/or temporally consistent. Pattern matching can also take the age of a track into account (for example inferred from the track depth) to remove erroneous matches.
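The chaining of track-edge segments by direction could look roughly like the following greedy single-pass sketch. The thresholds and names are assumptions, and direction wrap-around near ±180° is ignored for brevity:

```python
import math

def link_segments(segments, angle_tol_deg=15.0, gap_tol=1.0):
    """Greedy chaining of detected rut-edge segments: two segments are
    linked when their directions agree within angle_tol_deg and the end
    of one lies within gap_tol of the start of the next.
    A segment is ((x0, y0), (x1, y1)) in road-surface coordinates."""
    def angle(seg):
        (x0, y0), (x1, y1) = seg
        return math.degrees(math.atan2(y1 - y0, x1 - x0))

    segments = sorted(segments, key=lambda s: s[0][1])  # by start y
    chains = []
    for seg in segments:
        for chain in chains:
            last = chain[-1]
            if (abs(angle(last) - angle(seg)) <= angle_tol_deg
                    and math.dist(last[1], seg[0]) <= gap_tol):
                chain.append(seg)
                break
        else:
            chains.append([seg])
    return chains

chains = link_segments([((0.0, 0.0), (0.1, 1.0)),
                        ((0.15, 1.2), (0.25, 2.2)),
                        ((5.0, 0.0), (5.0, 1.0))])
# two chains: the first two near-collinear segments join, the third
# segment is too far away and starts its own chain
```

Each chain then serves as one continuous track edge line, and the per-segment directions form the vector field used in step S1304.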
Step S1304 (not marked in the figure): determine the display mode of the wheel-track-related driving assistance information content. Step S1304 is one implementation of determining the display mode of one or more pieces of target driving assistance information.
For this embodiment of the present invention, the device judges whether an abnormal wheel track exists. An abnormal track can be judged by whether the vector field built from its edge lines is smooth and whether its general trend is consistent with the lane-line direction and the road edge line; it can also be judged by whether the vector field built from the track edge lines generated at a certain moment differs significantly in direction from the vector fields built from the other/all track edge lines. If there is a significant difference, the generated track edge line is determined to belong to an abnormal track edge line. Typically, an abnormal track has an obvious direction conflict relative to the lane lines and the road edge, and braking marks are likely present. When an abnormal track exists, the device highlights the road-surface region where the abnormal track lies in a warning mode and generates driving-warning AR information, as shown in Figure 6. When no abnormal track exists, the device judges whether the tracks are clearly visible according to their color contrast and/or edge sharpness in the image or video: if clearly visible, no enhanced display is applied; for road surfaces where no track is clearly visible, the device selects the relatively most distinct track according to the ego vehicle's driving route and enhances its display, as shown in Figure 7. In particular, according to factors such as the ego vehicle's driving state (for example speed) and the road environment (for example whether the road surface is slippery), the device inspects the road surface at a sufficient distance ahead and dynamically adjusts the warning lead time, reserving enough reaction time for the driver.
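The direction-conflict test on the vector fields can be sketched as comparing the median direction of one rut's edge vectors against the median direction of the reference (other ruts or the lane lines). The tolerance and names are assumed for illustration:

```python
import statistics

def is_abnormal_track(track_dirs_deg, reference_dirs_deg, tol_deg=20.0):
    """Flag a rut as abnormal when the median direction of its edge-line
    vector field deviates from the median direction of the reference
    field (other ruts, or the lane-line direction) by more than tol_deg."""
    d = abs(statistics.median(track_dirs_deg)
            - statistics.median(reference_dirs_deg))
    d = min(d, 360.0 - d)  # handle wrap-around
    return d > tol_deg

# A rut veering at ~50 degrees across lanes running at ~90 degrees
# would be flagged and drawn with warning AR information.
flagged = is_abnormal_track([48.0, 50.0, 52.0], [88.0, 90.0, 91.0])
```

A smoothness check on each field (e.g. bounding the direction change between consecutive vectors) would complement this, per the description above.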
The AR objects that highlight the road-surface region of an abnormal track and that enhance the display of indistinct tracks need to be aligned with the true road surface and the true tracks; the display mode is similar to step S1301 of Embodiment one and is not repeated.
For this embodiment of the present invention, the AR information of the abnormal-track warning does not need to be aligned with a real object. When the display apparatus is a head-mounted display, the device can present the AR information directly, in an eye-catching manner, at a prominent position in the driver's current field of view, determine the display mode according to the driver's depth of focus, and orient the AR information to face the driver's line of sight; the depth of focus of the driver's eyes can be obtained by eye tracking. When the display apparatus is a vehicle-mounted display, the device can present the AR information in an eye-catching manner in a region of the display screen and, according to the driver's line of sight, set the posture and depth of the AR information to face the driver's line of sight at the driver's depth of focus, as shown in Figure 8. The device can also attract the driver's attention with animation, audio, and similar means, reducing reaction delay.
Step S1404 (not marked in the figure): display the generated AR information corresponding to the target driving assistance information. Step S1404 is one implementation of displaying the virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information. It is similar to step S1401 of Embodiment one and is not repeated.
Embodiment five
The embodiment of the present invention provides a display method for driving assistance information. The method applies to a road where a traffic sign is partially or completely occluded: it displays prompt information for the occluded traffic sign and shows the corresponding augmented-reality driving assistance display information, as shown in Figure 9. The occlusion of the traffic sign can be, but is not limited to, accumulated snow, dirt, oil, postings, leaves, etc. Occlusion can also be understood broadly as damage, peeling, faded paint, rain/fog/haze, or a sensor in a poor state (for example a spot on the camera lens), any of which leaves the sign not fully and clearly visible.
The method of the embodiment of the present invention includes:
Step S1105 (not marked in the figure): the device determines whether traffic-sign-related information needs to be displayed. Step S1105 is one implementation of the device determining the one or more pieces of target driving assistance information that need to be displayed.
For this embodiment of the present invention, the device determines the road information around the ego vehicle by means of image recognition; this can include, but is not limited to, identifications such as lane lines, road edge lines, road traffic, and road conditions. The device can keep recognition always on and adaptively enable the display function under conditions where a partially or completely occluded traffic sign and/or warning sign appears; it can also start the recognition and/or display function according to a user instruction. The user instruction can be carried by gesture, voice, a physical button, or biometric identification such as a fingerprint.
Step S1205 (not marked in the figure): the device judges whether enhanced display is needed. Step S1205 is one implementation of acquiring information, processing it, and generating the content of the target driving assistance information.
For this embodiment of the present invention, the device acquires road images or video with one or more cameras and uses image processing and recognition techniques to detect the traffic signs at both roadsides and overhead. The device judges by image or video analysis whether the content on a traffic sign is displayed completely. Specifically, the device can detect the position of a traffic sign in the image with image-detection techniques and obtain its bounding shape (generally a rectangular/circular/triangular frame), and judge whether the content is clear by whether the image inside the frame is sharp and/or by its color distribution. For an unclear traffic sign, the device can obtain the sign's complete information and icon form in at least one of the following ways: the device can query the database corresponding to the local map according to the ego vehicle's position and obtain the complete information and icon form of the sign by pattern-matching the acquired image against the database; the device can apply image-enhancement algorithms to the unclear image (for example, for a sign obscured by fog, an image-defogging algorithm can be used to obtain a relatively clear image) to obtain the complete information and icon form; the device can obtain the corresponding complete information and icon form from images of the sign captured at other angles; the device can derive the complete information and icon form from the information of other traffic signs and/or driving information. For example, from a previously encountered sign reading "exit in 200 meters" and driving information showing that the ego vehicle has since traveled 100 meters, it can be inferred that the current traffic-sign content is "exit in 100 meters".
Step S1305 (not marked in the figure): determine the display mode of the content of the target driving assistance information. Step S1305 is one implementation of determining the display mode of one or more pieces of target driving assistance information.
This step is similar to step S1301 of Embodiment one and is not repeated. In particular, the generated AR traffic sign needs to be aligned with the true traffic sign. The device obtains the position and posture of the real object relative to the display apparatus by image-localization algorithms. Specifically, when the traffic signs around the ego vehicle are captured with a single camera, an approximate solution treats the traffic sign as (approximately) a plane: by extracting feature points from the sign's contour and solving the homography matrix, the relative position and posture between the sign and the camera can be obtained. More accurately, visual odometry can be used to track features over the image sequence, where the features are captured from the real objects to be aligned, such as the contour of the traffic sign, yielding the relative position and posture of the real object with respect to the camera. In particular, feature-point extraction and tracking can be assisted by image recognition in order to remove erroneous matches and speed up processing.
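A standard way to solve for the homography from contour correspondences is the Direct Linear Transform (DLT); the patent does not prescribe a particular solver, so the following is a minimal NumPy sketch with illustrative point pairs:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate H such that dst ~ H @ src from
    four or more point correspondences, e.g. corners of a traffic sign's
    contour matched against its canonical fronto-parallel template."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest
    # singular value (the null space of the stacked constraint matrix).
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

# Unit-square template corners vs. observed image positions
# (here: scaled by 2 and shifted by (2, 2)).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 2), (4, 2), (4, 4), (2, 4)]
H = homography_dlt(src, dst)
```

Decomposing H with the camera intrinsics would then give the relative rotation and translation of the planar sign; the visual-odometry refinement mentioned above replaces this single-view approximation when more frames are available.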
Step S1405 (not marked in the figure): display the generated AR information corresponding to the target driving assistance information. Step S1405 is one implementation of displaying the virtual three-dimensional AR information corresponding to one or more pieces of target driving assistance information. It is similar to step S1401 of Embodiment one and is not repeated.
Embodiment six
The embodiment of the present invention provides a display method for driving assistance information. The method displays prompt information for traffic signs in a region the vehicle has already passed, and shows the corresponding augmented-reality driving assistance display information.
The method of the embodiment of the present invention includes:
Step S1106 (not marked in the figure): the device determines whether traffic-sign-related information needs to be displayed. Step S1106 is one implementation of the device determining the one or more pieces of target driving assistance information that need to be displayed.
For this embodiment of the present invention, the device starts or ends the display function on the instruction of the user. The user instruction can be carried by gesture, voice, a physical button, or biometric identification such as a fingerprint; the device can also end the display of an AR traffic sign adaptively, based on statistics of how long the user's gaze dwells on it; a combination of the two approaches is also possible.
Step S1206 (not marked in the figure): the device generates the content to be displayed. Step S1206 is one implementation of acquiring information, processing it, and generating the content of the target driving assistance information.
For this embodiment of the present invention, the device retrieves the traffic signs the ego vehicle has passed within a recent period. The device can retrieve all passed traffic signs, or retrieve the traffic signs matching keywords in a user instruction. When a high-precision road map is available, the device can retrieve from the map the traffic signs of the region just passed, according to the ego vehicle's current position; without a map, the device uses the historical road images or video acquired by one or more cameras and applies image processing and detection/recognition techniques to detect and recognize the traffic signs at both roadsides and overhead, obtaining the signs that satisfy the retrieval request. The device extracts the complete information and icon form of each retrieved traffic sign. In particular, the device can adjust the specific content of a traffic sign according to the current position. For example, the original sign reads "expressway exit 5 kilometers ahead", but since the ego vehicle has traveled 300 meters past that sign, the device can change the content of the prompt sign to "expressway exit 4.7 kilometers ahead", as shown in Figure 10, to suit the present situation. In particular, the device can retrieve and/or generate one or more traffic signs.
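The odometry-based adjustment in the 5 km / 300 m example above is simple arithmetic; a minimal sketch (the function name and prompt wording are illustrative assumptions):

```python
def adjust_exit_distance(sign_km, traveled_m):
    """Update a retrieved 'distance to exit' prompt by the distance
    the ego vehicle has driven since passing the sign: a 5 km sign
    passed 300 m ago now reads 4.7 km."""
    remaining_km = round(sign_km - traveled_m / 1000.0, 3)
    return f"expressway exit {remaining_km:g} kilometers ahead"

prompt = adjust_exit_distance(5.0, 300)
```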
Step S1306 (not marked in the figure): determine the display mode of the content of the target driving assistance information. Step S1306 is one implementation of determining the display mode of one or more pieces of target driving assistance information.
This step is similar to step S1301 of Embodiment one and is not repeated. In particular, when more than one traffic sign is retrieved, the device can either display multiple signs simultaneously or display them in turn. The device preferentially presents the corresponding AR traffic sign in the form of the true traffic sign. Since no alignment with a real object is needed, the device preferentially displays the AR traffic sign at the driver's gaze-point distance, close to the sky portion of the view, avoiding interference with the driver's observation of the true road surface.
The corresponding AR information of target driving auxiliary information that step S1406 (being not marked in figure), display generate.
Step S1406, which can be, shows the corresponding virtual three-dimensional AR information of one or more target driving auxiliary informations
A kind of implementation.
Similar to the step S1401 of embodiment one, repeat no more.
Embodiment seven
An embodiment of the present invention provides a method for displaying driving-assistance information. The method displays, in an extended area of the side rear-view mirrors on both sides of the host vehicle, the corresponding augmented-reality driving-assistance display information.
The method of this embodiment of the present invention includes the following steps.
Step S1107 (not marked in the figure): the device determines whether the information related to the extended area of a side rear-view mirror needs to be displayed.
Step S1107 may be one implementation of the device determining that one or more items of target driving-assistance information need to be displayed.
In this embodiment of the present invention, the device starts or ends the display function upon a user instruction, where the user instruction may be carried by a gesture, by voice, by a physical button, or by biometric recognition such as a fingerprint. The device may also adaptively decide whether to start or end the display function by detecting whether the side rear-view mirror is in the driver's field of view and/or whether the driver's gaze point is on the side rear-view mirror; a combination of the two approaches is also possible. When the camera used is the outward-facing camera on a head-mounted display device, the detection may determine whether the side rear-view mirror is in the driver's field of view by checking whether the mirror appears in the captured image; the device may then obtain the driver's current gaze region by gaze tracking and judge whether the gaze point falls within the side rear-view mirror region. When the camera used is an inward-facing camera of the host vehicle, the device may determine, by detecting the orientation of the driver's head and/or eyes and/or the direction of the line of sight, whether the side rear-view mirror is within the driver's field of view and within the driver's visual focus region.
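The gaze test above reduces to checking whether the tracked gaze point falls inside the detected mirror region. A minimal sketch, under the simplifying assumption that the mirror is detected as an axis-aligned bounding box in the outward camera image (the box representation and function names are illustrative):

```python
def gaze_on_mirror(gaze_xy, mirror_bbox):
    """True if the gaze point (pixels) lies inside the mirror's bounding box
    (x_min, y_min, x_max, y_max), both expressed in the outward camera image."""
    x, y = gaze_xy
    x0, y0, x1, y1 = mirror_bbox
    return x0 <= x <= x1 and y0 <= y <= y1

def should_show_extension(mirror_detected, gaze_xy, mirror_bbox):
    # Display the extended-area content only when the mirror appears in the
    # image (i.e. is in the driver's field of view) and is being looked at.
    return mirror_detected and gaze_on_mirror(gaze_xy, mirror_bbox)
```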
Step S1207 (not marked in the figure): the device generates the content to be displayed.
Step S1207 may be one implementation of acquiring information, processing the information, and generating the content of the target driving-assistance information.
First, the device estimates the relative position and pose between the driver's head/eyes/line of sight and the side rear-view mirror.
Specifically, when the camera used is the outward-facing camera on a head-mounted display device, the relative position between the outward-facing camera and the driver's head is fixed, and the relative position-and-pose relationship between the camera and the eyes can be calibrated in advance (the calibration occasionally needs to be redone; for example, after the user adjusts the position of the head-mounted display device, the position, pose and scale relationship must be recalibrated). It therefore remains to obtain the relative position of the side rear-view mirror with respect to the outward-facing camera. Specifically, the device may segment out the side rear-view mirror by image-recognition techniques; since the mirror can be treated as a plane of fixed scale, the relative position and pose between the mirror and the camera can be obtained by solving a homography matrix. The device may also apply visual odometry, performing feature tracking over the image sequence of the side rear-view mirror, for example by capturing the edge contour of the mirror, to obtain a smoothed relative position and pose of the mirror with respect to the camera. Alternatively, a visual marker may be attached to the side rear-view mirror, and the device computes, via the marker, the relative position, pose and scale relationship between the outward-facing camera of the device and the side rear-view mirror. The calibration chain is thus: (eyes ← calibration → display device ← calibration → outward-facing camera of the device ← estimation → side rear-view mirror).
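The calibration chain in parentheses composes into a single eye-to-mirror transform by multiplying homogeneous transforms. A sketch with 4x4 matrices (the identity rotations and translation values are placeholders standing in for real calibration and estimation results):

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder poses along the chain (metres; identity rotations for brevity):
T_eye_from_display = make_pose(np.eye(3), [0.0, 0.0, -0.03])  # calibrated offline
T_display_from_cam = make_pose(np.eye(3), [0.02, 0.0, 0.0])   # calibrated offline
T_cam_from_mirror  = make_pose(np.eye(3), [-0.6, -0.1, 1.2])  # estimated online

# Chaining the transforms expresses the mirror pose directly in the eye frame:
T_eye_from_mirror = T_eye_from_display @ T_display_from_cam @ T_cam_from_mirror
```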
When the camera used is an inward-facing camera on the host vehicle, the relative position and pose between the inward-facing camera and the side rear-view mirror may be considered fixed and can be calibrated in advance (the calibration occasionally needs to be redone; for example, after the driver adjusts the side rear-view mirror, the position, pose and scale relationship must be recalibrated). It is therefore only necessary to obtain the relative position and pose between the inward-facing camera and the display device. Specifically, the device may use the inward-facing camera on the host vehicle (a camera that captures the interior of the vehicle, such as the driver's position) to obtain the relative position, pose and scale relationship between the display device and the inward-facing camera. This relationship may be obtained based on a visual marker on the display device, and/or by feature-point extraction and tracking based on images and/or multi-sensor fusion, and/or by detection, for example using SLAM (Simultaneous Localization and Mapping) and object-tracking techniques. The relative position and pose between the eyes and the display device are relatively fixed and can be calibrated in advance (the calibration occasionally needs to be redone; for example, after the user adjusts the position of the head-mounted display device, the position, pose and scale relationship must be recalibrated). The calibration chain is thus: (eyes ← calibration → display device ← estimation → inward-facing camera on the host vehicle ← calibration → side rear-view mirror).
Second, after the device has estimated the relative positional relationship between the driver's eyes and the side rear-view mirror, it obtains, according to the specular reflection property of the mirror, the virtual viewpoint of the mirror image of the side rear-view mirror. Given the virtual viewpoint and the extended mirror area, the device may capture the image information around the vehicle with one or more vehicle-mounted cameras and/or stereo cameras and/or depth cameras, where the capture range includes and exceeds the area covered by the side rear-view mirror, as shown in Figure 11. The device may obtain the content of the extended mirror area from the images of the other vehicle-mounted cameras: since the relative position and pose between the outward-facing cameras on the host vehicle and the side rear-view mirror are relatively fixed and can be calibrated in advance, the device can generate the content of the extended area of the side rear-view mirror from the images of the outward-facing cameras by image-based rendering. When the outward-facing camera of the host vehicle is a stereo camera and/or a depth camera is provided, the device may also generate the content of the extended area from the outward-facing camera images by depth-image-based rendering. The range of the side rear-view mirror including the extended area is shown in Figure 12.
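The virtual viewpoint of the mirror image mentioned above follows from reflecting the eye position across the mirror plane. A minimal sketch, with the plane given by a point and a normal (all values illustrative):

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect point p across the mirror plane defined by a point on the
    plane and its (not necessarily unit-length) normal vector."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Signed distance of p from the plane along the normal:
    d = float(np.dot(p - np.asarray(plane_point, dtype=float), n))
    return p - 2.0 * d * n

# Eye 1 m in front of a mirror plane through the origin with normal +z:
virtual_eye = reflect_point([0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
# virtual_eye sits 1 m behind the plane; rendering from this virtual viewpoint
# produces the view seen in the (extended) mirror.
```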
Step S1307 (not marked in the figure): determine the display mode of the extended area of the side rear-view mirror.
Step S1307 may be one implementation of determining the display mode of one or more items of target driving-assistance information.
This step is similar to step S1301 of embodiment one and is not described again. In particular, to make the display feel natural to the driver, the device may adopt at least one of the following: the virtual three-dimensional display information shown in the extended area of the side rear-view mirror is symmetrical, with respect to the extended area, to the real object to which that information corresponds; the virtual three-dimensional display information shown in the extended area is the mirror image, with respect to the side rear-view mirror, of the real object to which that information corresponds; the content shown in the extended area is continuous with the side rear-view mirror; there is a certain overlap and/or transition between the extended area and the display in the side rear-view mirror.
Step S1407 (not marked in the figure): display the AR information corresponding to the generated target driving-assistance information.
Step S1407 may be one implementation of displaying the virtual three-dimensional AR information corresponding to one or more items of target driving-assistance information.
This step is similar to step S1401 of embodiment one and is not described again.
Embodiment eight
An embodiment of the present invention provides a method for displaying driving-assistance information. The method displays, in an extended area of the interior rear-view mirror of the host vehicle, the corresponding augmented-reality driving-assistance display information.
The method of this embodiment of the present invention includes the following steps.
Step S1108 (not marked in the figure): the device determines whether the information related to the extended area of the interior rear-view mirror needs to be displayed.
Step S1108 may be one implementation of the device determining that one or more items of target driving-assistance information need to be displayed.
In this embodiment of the present invention, the device starts or ends the display function upon a user instruction, where the user instruction may be carried by a gesture, by voice, by a physical button, or by biometric recognition such as a fingerprint. The device may also adaptively decide whether to start or end the display function by detecting whether the driver's gaze point is on the interior rear-view mirror; a combination of the two approaches is also possible.
When the camera used is the outward-facing camera on a head-mounted display device, the detection may determine whether the interior rear-view mirror is in the driver's field of view by checking whether the mirror appears in the captured image; the device may then obtain the driver's current gaze region by gaze tracking and judge whether the gaze point falls within the interior rear-view mirror region. When the camera used is an inward-facing camera of the host vehicle, the device may determine, by detecting the orientation of the driver's head and/or eyes and/or the direction of the line of sight, whether the interior rear-view mirror is within the driver's field of view and within the driver's visual focus region. The device may also end the display adaptively according to statistics of the driver's gaze on, or dwell time over, the AR information.
Step S1208 (not marked in the figure): the device generates the content to be displayed.
Step S1208 may be one implementation of acquiring information, processing the information, and generating the content of the target driving-assistance information.
First, the device estimates the relative position and pose between the driver's head/eyes/line of sight and the interior rear-view mirror.
For the case where the display device is a head-mounted display device and the camera used is the outward-facing camera on the head-mounted display device, the relative position between the outward-facing camera and the driver's head is fixed, and the relative position-and-pose relationship between the camera and the eyes can be calibrated in advance (the calibration occasionally needs to be redone; for example, after the user adjusts the position of the head-mounted display device, the position, pose and scale relationship must be recalibrated). It is therefore only necessary to obtain the relative position of the interior rear-view mirror with respect to the outward-facing camera. Specifically, the device may segment out the interior rear-view mirror by image-recognition techniques; since the mirror can be treated as a plane of fixed scale, the relative position and pose between the mirror and the camera can be obtained by solving a homography matrix. The device may also apply visual odometry, performing feature tracking over the image sequence of the interior rear-view mirror, for example by capturing the edge contour of the mirror, to obtain a smoothed relative position and pose of the mirror with respect to the camera. Alternatively, a visual marker may be attached to the interior rear-view mirror, and the device computes, via the marker, the relative position, pose and scale relationship between the outward-facing camera of the device and the interior rear-view mirror. The calibration chain is thus: (eyes ← calibration → display device ← calibration → outward-facing camera of the device ← estimation → interior rear-view mirror).
For the case where the display device is a head-mounted display device and the camera used is an inward-facing camera on the host vehicle, the relative position and pose between the inward-facing camera and the interior rear-view mirror may be considered fixed and can be calibrated in advance (the calibration occasionally needs to be redone; for example, after the driver adjusts the interior rear-view mirror, the position, pose and scale relationship must be recalibrated). It is therefore only necessary to obtain the relative position and pose between the inward-facing camera and the display device. Specifically, the device may use the inward-facing camera on the host vehicle (a camera that captures the interior of the vehicle, such as the driver's position) to obtain the relative position, pose and scale relationship between the display device and the inward-facing camera. This relationship may be obtained based on a visual marker on the display device, and/or by feature-point extraction and tracking based on images and/or multi-sensor fusion, and/or by detection, for example using SLAM (Simultaneous Localization and Mapping) and object-tracking techniques. The relative position and pose between the eyes and the display device are relatively fixed and can be calibrated in advance (the calibration occasionally needs to be redone; for example, after the user adjusts the position of the head-mounted display device, the position, pose and scale relationship must be recalibrated). The calibration chain is thus: (eyes ← calibration → display device ← estimation → inward-facing camera on the host vehicle ← calibration → interior rear-view mirror).
For the case where the display device is a vehicle-mounted display device, the device may use the inward-facing camera on the host vehicle (a camera that captures the interior of the vehicle, such as the driver's position) to obtain the relative position, pose and scale relationship between the eyes and the inward-facing camera. This may be obtained based on a visual marker worn by the driver, where the relative position, pose and scale relationship between the eyes and the worn marker may be considered relatively fixed and can be calibrated in advance (the calibration occasionally needs to be redone; for example, after the user adjusts the position of the head-worn marker, the position, pose and scale relationship must be recalibrated). Alternatively, it may be obtained by image-based head/eye/gaze localization and tracking: the device locates the driver's head from the images or video of the inward-facing camera on the host vehicle and, based on the head localization result, determines the relative position and pose between the eyes and the inward-facing camera. It may also be obtained by image-based eye localization and tracking, in which the device directly locates the relative position and pose between the eyes and the inward-facing camera from the camera's images or video.
Second, after the device has estimated the relative position, pose and scale relationship between the driver's eyes and the interior rear-view mirror, the device may capture the rear of the cabin and/or the area around the rear of the vehicle with the in-vehicle inward-facing camera, and use the image or video content, reduced or enlarged as appropriate, as the extension content, as shown in Figure 13.
Step S1308 (not marked in the figure): determine the display mode of the extended area of the interior rear-view mirror.
Step S1308 may be one implementation of determining the display mode of one or more items of target driving-assistance information.
This step is similar to step S1301 of embodiment one and is not described again. In particular, the device may adjust the AR extension content within a certain angular range so that the extension content faces the driver's line of sight. For the case where the display device is a head-mounted display device, in view of the driver's habits, the device preferentially presents the AR information on or below the interior rear-view mirror region. For the case where the display device is a vehicle-mounted display device, the device preferentially displays the AR information at the driver's gaze distance and close to the sky region, so as not to interfere with the driver's observation of the real road surface.
Step S1408 (not marked in the figure): display the AR information corresponding to the generated target driving-assistance information.
Step S1408 may be one implementation of displaying the virtual three-dimensional AR information corresponding to one or more items of target driving-assistance information.
This step is similar to step S1401 of embodiment one and is not described again.
Embodiment nine
An embodiment of the present invention provides a method for displaying driving-assistance information. The method displays, at intersections including but not limited to those without traffic lights, or whose traffic lights are damaged, inconspicuous, or partially or completely blocked, the corresponding augmented-reality driving-assistance display information. All of the situations above are referred to below simply as intersections without traffic lights.
The method of this embodiment of the present invention includes the following steps.
Step S1109 (not marked in the figure): the device determines whether the host vehicle is approaching an intersection without traffic lights.
Step S1109 may be one implementation of the device determining that one or more items of target driving-assistance information need to be displayed.
In this embodiment of the present invention, the device determines the position of the host vehicle, checks on the map whether the intersection ahead has traffic lights, and adaptively turns on the display function; it may also start the recognition and/or display function according to a user instruction, where the user instruction may be carried by a gesture, by voice, by a physical button, or by biometric recognition such as a fingerprint.
Step S1209 (not marked in the figure): the device generates the content to be displayed.
Step S1209 may be one implementation of acquiring information, processing the information, and generating the content of the target driving-assistance information.
In this embodiment of the present invention, the device monitors the intersection with one or more vehicle-mounted cameras, and determines the order of arrival of the other vehicles at the intersection by image-detection and recognition techniques. The device obtains from the map the traffic rules applicable to this intersection, generates a virtual AR traffic light, and thereby guides the driver's operation, as shown in Figure 14. For example, suppose the traffic rule applicable at a crossroads is that each vehicle must stop for 3 seconds after arriving and whichever stopped first goes first; the device can monitor the stopping order of the vehicles at the intersection by image analysis and, combining it with the traffic rule, turn the virtual red light into a green light after the 3-second stop.
For an intersection that does have traffic lights, upon the driver's instruction the device may capture video of the real traffic lights with at least one camera and generate the AR traffic lights by replicating the real ones.
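The crossroads example above amounts to a small decision rule: hold a virtual red light until the host vehicle has completed its stop and every earlier arrival has gone. A sketch under assumed inputs (the arrival order and stop time are taken to come from the image-analysis step; names are illustrative):

```python
def virtual_light(arrival_order, my_id, stopped_since, now, min_stop=3.0):
    """Return 'red' or 'green' for the virtual AR light at an all-way stop.
    arrival_order: vehicle ids in the order they stopped, earliest first;
    stopped_since: time at which the host vehicle came to a full stop."""
    if now - stopped_since < min_stop:
        return "red"                 # the 3-second stop is not yet complete
    if arrival_order and arrival_order[0] != my_id:
        return "red"                 # an earlier arrival still has priority
    return "green"                   # our turn: the virtual red turns green
```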
Step S1309 (not marked in the figure): determine the display mode of the target driving-assistance information.
Step S1309 may be one implementation of determining the display mode of one or more items of target driving-assistance information.
This step is similar to step S1306 of embodiment six, with the AR traffic sign replaced by the AR traffic light, and is not described again.
Step S1409 (not marked in the figure): display the AR information corresponding to the generated target driving-assistance information.
Step S1409 may be one implementation of displaying the virtual three-dimensional AR information corresponding to one or more items of target driving-assistance information.
This step is similar to step S1401 of embodiment one and is not described again.
Embodiment ten
An embodiment of the present invention provides a method for displaying driving-assistance information. The method displays, for traffic police, the corresponding augmented-reality driving-assistance display information.
The method of this embodiment of the present invention includes the following steps.
Step S1110 (not marked in the figure): the device determines whether traffic police are present.
Step S1110 may be one implementation of the device determining that one or more items of target driving-assistance information need to be displayed.
In this embodiment of the present invention, the device captures the surroundings of the host vehicle with one or more cameras and, by image detection and recognition, judges whether traffic police are present and/or whether road barriers placed by traffic police are present, and adaptively turns on the display function; it may also start the recognition and/or display function according to a user instruction; a combination of the above is also possible. The user instruction may be carried by a gesture, by voice, by a physical button, or by biometric recognition such as a fingerprint.
Step S1210 (not marked in the figure): the device generates the content to be displayed.
Step S1210 may be one implementation of acquiring information, processing the information, and generating the content of the target driving-assistance information.
In this embodiment of the present invention, the device detects, recognizes and tracks, with one or more cameras, the traffic police officer's posture, gestures and signaling tools such as a baton, and generates the corresponding AR message according to the local traffic-police gesture rules. For example, if the officer's gesture indicates a left turn, an AR message indicating a left turn is generated, as shown in Figure 15.
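Generating the AR message from a recognized gesture can be a simple lookup keyed by the local gesture rules. A sketch (the gesture labels and message texts are illustrative assumptions; the labels would come from an upstream pose/gesture recognizer):

```python
# Hypothetical mapping from recognized traffic-police gestures to AR messages.
GESTURE_TO_MESSAGE = {
    "turn_left":   "Turn left",
    "turn_right":  "Turn right",
    "stop":        "Stop",
    "go_straight": "Proceed straight",
}

def ar_message_for(gesture: str) -> str:
    # Fall back to a neutral prompt when the gesture is not in the rule set.
    return GESTURE_TO_MESSAGE.get(gesture, "Observe the traffic police")
```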
Step S1310 (not marked in the figure): determine the display mode of the target driving-assistance information.
Step S1310 may be one implementation of determining the display mode of one or more items of target driving-assistance information.
This step is similar to step S1301 of embodiment one and is not described again.
In this embodiment of the present invention, for the case where the display device is a single head-mounted display device, the device may display the generated AR message beside the corresponding traffic police officer; the AR message may face the driver's line of sight, or may be aligned with the orientation of the officer's body.
For the case where the display device is a single vehicle-mounted display device, the device may display the generated AR message facing the driver's line of sight.
When there are multiple traffic police officers, the device may display multiple AR messages simultaneously, each beside the corresponding officer, or may preferentially display the AR message relevant to the officer currently directing the host vehicle.
Step S1410 (not marked in the figure): display the AR information corresponding to the generated target driving-assistance information.
Step S1410 may be one implementation of displaying the virtual three-dimensional AR information corresponding to one or more items of target driving-assistance information.
This step is similar to step S1401 of embodiment one and is not described again.
Embodiment eleven
An embodiment of the present invention provides a method for displaying driving-assistance information. The method displays, for the positions, functions and operation modes of the keys on the operation panel, the corresponding augmented-reality driving-assistance display information.
The method of this embodiment of the present invention includes the following steps.
Step S1111 (not marked in the figure): the device determines whether the operation panel needs to be displayed.
Step S1111 may be one implementation of the device determining that one or more items of target driving-assistance information need to be displayed.
In this embodiment of the present invention, the device judges, from the environment outside and/or inside the host vehicle, whether the driver needs to perform a certain operation; for example, when there is fog on the front window, the device may judge that the driver needs to use the wipers and adaptively turn on the display function. It may also start the recognition and/or display function according to a user instruction; a combination of the above is also possible. The user instruction may be carried by a gesture, by voice, by a physical button, or by biometric recognition such as a fingerprint.
Step S1211 (not marked in the figure): the device generates the content to be displayed.
Step S1211 may be one implementation of acquiring information, processing the information, and generating the content of the target driving-assistance information.
Similarly to the judgment in step S1208 of embodiment eight, the device judges whether the key to be operated is within the driver's field of view. When it is, the device highlights the key by means such as region highlighting, an indicating arrow, or an emphasized edge contour, and generates the function name and/or operating instructions of the key. For example, the name of the headlamps is marked around the headlamp knob, and/or arrows indicate the rotation direction for turning the lamps on and the rotation direction for turning them off, as shown in Figure 16.
Step S1311 (not marked in the figure): determine the display mode of the target driving-assistance information.
Step S1311 may be one implementation of determining the display mode of one or more items of target driving-assistance information.
In particular, this embodiment can only be applied together with a head-mounted display device.
For the case where the display device is a single head-mounted display device, the device may display the generated AR message beside the key; the AR message may face the driver's line of sight, or may be aligned with the orientation of the key, and the displayed operating instructions are consistent with the operating actions of the real key.
Step S1411 (not marked in the figure): display the AR information corresponding to the generated target driving-assistance information.
Step S1411 may be one implementation of displaying the virtual three-dimensional AR information corresponding to one or more items of target driving-assistance information.
This step is similar to step S1401 of embodiment one and is not described again.
Embodiment twelve
An embodiment of the present invention provides a method for displaying driving-assistance information. The method displays, for at least one of regions where parking is permitted and suitable, regions where parking is permitted but not suitable, and regions where parking is not permitted, the corresponding augmented-reality driving-assistance display information, as shown in Figure 17.
The method of this embodiment of the present invention includes the following steps.
Step S1112 (not marked in the figure): the device determines whether at least one of the following needs to be displayed: regions where parking is permitted and suitable, regions where parking is permitted but not suitable, and regions where parking is not permitted.
Step S1112 may be one implementation of the device determining that one or more items of target driving-assistance information need to be displayed.
In this embodiment of the present invention, the device judges, from the environment outside and/or inside the host vehicle and/or the driver's intention, whether the driver is preparing to look for a parking space, for example when the driver has reached the specified destination or has driven into a parking lot, and adaptively turns on the display function. It may also start the recognition and/or display function according to a user instruction; a combination of the above is also possible. The user instruction may be carried by a gesture, by voice, by a physical button, or by biometric recognition such as a fingerprint.
Step S1212 (not marked in the figure): the device generates the content to be displayed.
Step S1212 may be one implementation of acquiring information, processing the information, and generating the content of the target driving-assistance information.
In this embodiment of the present invention, the device detects the regions around the host vehicle with one or more vehicle-mounted cameras and recognizes whether no-parking signs are present, so as to analyze the regions where parking is permitted and/or not permitted. Using image-processing techniques, and taking the size of the host vehicle into account, the device judges whether a permitted region is flat and whether its space is sufficient, and whether factors unfavorable to parking, such as standing water or mud, are present. The device then screens and ranks the permitted regions to obtain, for display, the regions where parking is permitted and suitable and/or the regions where parking is permitted but not suitable.
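The screening and ranking step can be sketched as a filter over the attributes the image analysis is said to produce (flatness, size, standing water, mud); the field names and the area-based ranking are illustrative assumptions:

```python
def rank_parking_regions(regions, car_footprint_m2):
    """Split permitted regions into suitable / not suitable and rank the
    suitable ones by free area, largest first."""
    def suitable(r):
        return (r["flat"] and r["area_m2"] >= car_footprint_m2
                and not r["waterlogged"] and not r["muddy"])
    good = sorted((r for r in regions if suitable(r)),
                  key=lambda r: -r["area_m2"])
    bad = [r for r in regions if not suitable(r)]
    return good, bad

# Example candidate regions as produced by (hypothetical) image analysis:
candidates = [
    {"id": 1, "area_m2": 12.0, "flat": True, "waterlogged": False, "muddy": False},
    {"id": 2, "area_m2": 9.0,  "flat": True, "waterlogged": True,  "muddy": False},
    {"id": 3, "area_m2": 15.0, "flat": True, "waterlogged": False, "muddy": False},
]
good, bad = rank_parking_regions(candidates, car_footprint_m2=10.0)
```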
Step S1312 (not marked in the figure): determine the display mode of the target driving-assistance information.
Step S1312 may be one implementation of determining the display mode of one or more items of target driving-assistance information.
This step is similar to step S1301 of embodiment one and is not described again. In particular, the device presents the regions where parking is permitted and suitable, the regions where parking is permitted but not suitable, and the regions where parking is not permitted in different manners.
Step S1412 (not marked in the figure): display the AR information corresponding to the generated target driving-assistance information.
Step S1412 may be one implementation of displaying the virtual three-dimensional AR information corresponding to one or more items of target driving-assistance information.
This step is similar to step S1401 of embodiment one and is not described again.
In particular, in all the embodiments, the device adaptively determines the following three quantities: the moment to start displaying one or more items of AR information, and/or the moment to stop displaying them, and/or the display duration of the one or more items of AR information. When adaptively determining the above moments and/or duration, the device comprehensively considers at least one of the vehicle state, the traffic conditions, and the system latency of the device, so as to reach a better display start and/or stop moment and a better display duration, thereby avoiding problems including but not limited to displaying too early, too late, too long or too short, which would mislead or disturb the driver.
As an example, for an AR warning message indicating that the distance to surrounding vehicles is below the safe distance, the device may start displaying the warning at the moment the distance between the own vehicle and a surrounding vehicle falls below the safe distance, and stop displaying it at the moment the distance exceeds the safe distance again. However, the safe distance varies with the situation: when the vehicle travels on an icy or snowy road, the safe distance between the own vehicle and surrounding vehicles should be larger than on an ordinary road. The safe distance is also related to the travel speed of the own vehicle and the surrounding vehicles: the higher the speed, the larger the required safe distance. In other words, the safe distance should be determined adaptively according to the specific situation. Therefore, based on the vehicle state (speed, etc.), the road-condition information (icy or snowy road, etc.), and the system delay of the device, the device can adaptively calculate the safe distance for the current situation. Correspondingly, the display start time, display stop time, and display duration of the AR warning message are adjusted adaptively along with the safe distance rather than being fixed.
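The adaptive safe-distance logic described above can be sketched as follows. This is a minimal illustration, not the patent's actual algorithm: the friction coefficients, reaction time, and the braking-distance formula are common textbook assumptions, and all names are invented for the example.

```python
# Hypothetical sketch: safe distance adapted to speed, road condition,
# and the AR system's own delay; the warning is shown only while the
# real gap is below it. All constants are illustrative assumptions.

ROAD_FRICTION = {"dry": 0.7, "wet": 0.4, "ice_snow": 0.15}  # assumed values

def safe_distance(own_speed_mps, other_speed_mps, road="dry",
                  reaction_s=1.5, system_delay_s=0.2, g=9.81):
    """Estimate the safe following distance for the current situation."""
    mu = ROAD_FRICTION[road]
    # Distance covered while the driver reacts and the device renders.
    lag = own_speed_mps * (reaction_s + system_delay_s)
    # Braking-distance difference between the own car and the car ahead.
    brake_own = own_speed_mps ** 2 / (2 * mu * g)
    brake_other = other_speed_mps ** 2 / (2 * mu * g)
    return max(lag + brake_own - brake_other, 0.0)

def should_show_warning(gap_m, own_speed_mps, other_speed_mps, road="dry"):
    """Start showing the AR warning when the gap falls below the adaptive
    safe distance; stop once the gap exceeds it again."""
    return gap_m < safe_distance(own_speed_mps, other_speed_mps, road)
```

As the text requires, the computed distance grows on ice and snow and with higher speed, so the warning's start and stop moments shift automatically with the situation.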
In particular, for all embodiments, the device adaptively reduces two kinds of delay: attention delay and display delay. Attention delay is defined as the time from the device displaying the AR information to the driver noticing it; display delay is defined as the time the device spends generating, rendering, and displaying the AR information.
To reduce attention delay, the device may adopt at least one of the following five approaches: 1) adaptively display high-priority AR information at a salient position within the driver's current field of view, and determine the display mode according to the driver's current gaze depth; 2) adaptively adjust the display format of the AR information according to the real-scene environment, increasing the contrast between the AR information and the real scene so that the AR information stands out; 3) for high-priority AR information, stop and/or pause the display of surrounding low-priority AR information so as to highlight the high-priority AR information; 4) for high-priority AR information, adaptively display it at a salient position in the driver's field of view, and when the field of view changes (head turning, eye movement, etc.), adaptively adjust its display position so that it remains at a salient position in the driver's field of view; 5) use at least one of sound, animation, vibration, light flashes, or other means, alone or in combination, to attract the driver's attention.
For example, if an explosion occurs on the right side of the own vehicle while the driver is looking to the left of the vehicle, the device immediately displays the generated AR warning within the driver's current field of view, i.e., on the left side of the vehicle.
To reduce display delay, the device may adopt at least one of the following approaches: 1) prepare the data to be displayed in advance: the device collects information over a larger range around the own vehicle, generates the corresponding AR information, and roughly determines the display mode of each piece; but the device adaptively and selectively displays only the AR information within the driver's current field of view. Where the computational budget allows, the device can render AR information in advance, so that when a piece of AR information enters the driver's field of view, the device can directly retrieve the pre-rendered AR model, adjust it to the current situation, and display it, as shown in Figure 18; 2) change the presentation mode of the AR information to be displayed: considering that the acceptable degree of delay depends on the current situation (for example, at a speed of 10 km/h a delay of 5 milliseconds may be acceptable, whereas at 50 km/h a delay of even 3 milliseconds may no longer be acceptable), the device can adaptively estimate the acceptable delay for the current situation and adaptively change the presentation mode of the AR content. For example, as shown in Figure 19, when the vehicle speed is low, the device can display the complete driving region; but once the speed rises, to reduce delay the device can adaptively change the presentation of the driving region to a striped form, i.e., reduce delay by reducing the amount of displayed data.
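The two display-delay strategies above (pre-rendering over a wide area, and simplifying the presentation at higher speeds) might be sketched like this. The class, method names, and the 30 km/h threshold are assumptions for illustration only:

```python
# Illustrative sketch, not the patent's implementation: a cache of
# pre-rendered AR models, and a speed-dependent presentation mode.

class ARCache:
    def __init__(self):
        self._rendered = {}  # info_id -> pre-rendered model

    def prerender(self, info_id, render_fn):
        # Render ahead of time, while compute budget allows.
        self._rendered[info_id] = render_fn(info_id)

    def fetch(self, info_id, render_fn):
        # When the AR information enters the driver's field of view,
        # reuse the cached model instead of rendering from scratch.
        if info_id not in self._rendered:
            self._rendered[info_id] = render_fn(info_id)
        return self._rendered[info_id]

def presentation_mode(speed_kmh, full_limit_kmh=30):
    """Below the (assumed) speed threshold, draw the full driving region;
    above it, fall back to a striped outline to cut the data volume."""
    return "full_region" if speed_kmh <= full_limit_kmh else "striped"
```

The cache trades memory for latency, which matches the text's caveat that pre-rendering is only done "where the computational budget allows".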
In particular, as shown in Figure 20, when multiple pieces of AR information need to be displayed at the same time, if the device considers each piece separately and adaptively selects the optimal display mode for each one in isolation, then when these pieces are displayed to the driver simultaneously they may occlude one another, making the AR information appear blurred and unclear and thereby confusing the driver.
Therefore, for multiple pieces of AR information that are displayed simultaneously and occlude one another, the device can adopt at least one of the following approaches: 1) for AR information that must be displayed simultaneously, cannot be merged and/or integrated, and genuinely should stand in an occlusion relationship, the device adaptively refrains from displaying the occluded part of the occluded AR information according to the current occlusion relationship, so as to avoid interfering with the foreground AR information; 2) for multiple pieces of AR information that can be merged and/or integrated, the device adaptively merges and/or integrates them into one or more pieces, for example reducing four pieces of AR information to two; when merging and/or integrating multiple pieces of AR information, the device can simply group them by category, or generate new AR information at a higher level according to the meaning of the information; as shown in Figure 20, from the two pieces of AR information "driving-school student car on the left" and "truck behind", the device can generate the more concise new AR information "careful: left and rear"; 3) for AR information whose display can be postponed, the device can delay displaying it; for example, non-urgent AR information about a distant location can be postponed until the own vehicle drives closer, so as to avoid interfering with the display of AR information about nearer locations; similarly, the device can also delay, pause, stop, or even abandon the display of unimportant AR information to reduce or eliminate occlusion; 4) as shown in Figure 20, the device can change the display position and/or content level of detail and/or presentation mode of one or more pieces of AR information, reducing or even eliminating the mutual occlusion between pieces of AR information.
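Approach 2) above, merging several pending AR messages by category, can be sketched as follows. The message representation and wording are invented for the example; the patent's Figure 20 case merges two vehicle warnings into one "careful: left and rear" message.

```python
# Minimal sketch of category-based merging of AR messages.
# Input format (category, direction) is an assumption for illustration.
from collections import defaultdict

def merge_ar_messages(messages):
    """messages: list of (category, direction) tuples.
    Returns one combined message per category instead of one per item."""
    by_category = defaultdict(list)
    for category, direction in messages:
        by_category[category].append(direction)
    merged = []
    for category, directions in by_category.items():
        where = " and ".join(sorted(set(directions)))
        merged.append(f"careful: {category} from {where}")
    return merged

msgs = [("vehicle", "left"), ("vehicle", "rear"), ("pedestrian", "front")]
merged = merge_ar_messages(msgs)  # two messages instead of three
```

A real device would merge at the semantic level rather than by string concatenation, but the effect is the same: fewer, less mutually occluding AR items.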
In particular, when determining the display position of AR information, the device considers at least one of the following: 1) the device displays the AR information at the physically correct position, i.e., aligned with the corresponding object in real space; 2) the device displays the AR information in a region that does not interfere with driving, such as a sky region; 3) the device displays important AR information directly at a salient position in the driver's current field of view; 4) the device displays the AR information on the relatively open side of the driver's field of view; for example, when the driver's seat is on the left, the field of view is more open on the driver's right side, and when the driver's seat is on the right, the field of view is more open on the driver's left side; 5) the device displays the AR information in regions where the driver's attention is relatively insufficient. While driving, the driver needs to observe all directions adequately to ensure safety. The device therefore counts the dwell time of the driver's gaze in each direction, and if it finds that the driver's attention is significantly insufficient in one or more regions, it displays AR information in those regions in a more salient manner to attract the driver's attention and help the driver balance attention. The means of attracting the driver's attention can also be combined with voice/audio, animation, light, and the like. As shown in Figure 21, when the gaze statistics show that the driver's attention is biased toward the left side, the device can display AR information on the right side, drawing the driver's attention to the right and thereby helping the driver balance attention.
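The gaze-dwell bookkeeping behind approach 5) could look like the sketch below. The region names and the "least dwell time" heuristic are assumptions; the patent only says the device counts dwell time per direction and places AR cues where attention is insufficient.

```python
# Hypothetical sketch of attention balancing: accumulate gaze dwell time
# per region and pick the most neglected region for a salient AR cue.

def dwell_by_region(gaze_samples):
    """gaze_samples: list of (region, seconds) pairs from an eye tracker."""
    totals = {}
    for region, seconds in gaze_samples:
        totals[region] = totals.get(region, 0.0) + seconds
    return totals

def neglected_region(gaze_samples, regions=("left", "center", "right")):
    """Return the region with the least accumulated attention; the device
    would display AR information there to help rebalance observation."""
    totals = dwell_by_region(gaze_samples)
    return min(regions, key=lambda r: totals.get(r, 0.0))
```

With the Figure 21 scenario (gaze biased to the left), this heuristic would select the right side as the placement region.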
An embodiment of the present invention provides an augmented-reality device for driving assistance, as shown in Figure 22, comprising: a determining module 2201 and a display module 2202.
The determining module 2201 is configured to determine driving assistance information based on information obtained in the driving state.
The display module 2202 is configured to display virtual three-dimensional display information corresponding to the driving assistance information determined by the determining module 2201.
An embodiment of the present invention provides an augmented-reality device for driving assistance. Compared with the prior art, this embodiment determines driving assistance information based on information obtained in the driving state and then displays the corresponding virtual three-dimensional display information. That is, the driving assistance information is determined from the information acquired while the vehicle is travelling, and the corresponding virtual three-dimensional display information is presented to the driver visually and/or audibly, so as to notify or warn the driver during travel. By using augmented-reality technology while the vehicle travels, the driver is helped to better grasp the travel information, which can improve the user experience.
The augmented-reality device for driving assistance provided by this embodiment of the present invention is applicable to the above method embodiments, and details are not repeated here.
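The two-module device of Figure 22 can be sketched as below. This is a hedged illustration only: the class names, the dictionary-based driving state, and the gap-check example are all invented; the patent does not prescribe any particular implementation.

```python
# Illustrative skeleton of the Figure 22 device: a determining module that
# derives driving assistance information from the driving state, and a
# display module that renders it as virtual 3-D display information.

class DeterminingModule:
    def determine(self, driving_state):
        # Example rule (assumed): flag a too-small gap to the car ahead.
        if driving_state.get("gap_m", float("inf")) < driving_state.get("safe_m", 0):
            return {"type": "warning", "text": "gap below safe distance"}
        return None

class DisplayModule:
    def show(self, assistance_info):
        if assistance_info is None:
            return []
        # Stand-in for generating/rendering the virtual 3-D AR object.
        return [f"AR[{assistance_info['type']}]: {assistance_info['text']}"]

class ARDrivingAssistDevice:
    def __init__(self):
        self.determining = DeterminingModule()
        self.display = DisplayModule()

    def tick(self, driving_state):
        return self.display.show(self.determining.determine(driving_state))
```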
Those skilled in the art will appreciate that the present invention involves devices for performing one or more of the operations described herein. These devices may be specially designed and manufactured for the required purposes, or may comprise known devices in general-purpose computers. These devices store computer programs that are selectively activated or reconfigured. Such computer programs may be stored in a device-readable (e.g., computer-readable) medium or in any type of medium suitable for storing electronic instructions and respectively coupled to a bus, the computer-readable medium including but not limited to any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
Those skilled in the art will appreciate that each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data-processing apparatus for execution, so that the schemes specified in one or more blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by the processor of the computer or other programmable data-processing apparatus.
Those skilled in the art will appreciate that the various operations, methods, and steps, measures, and schemes in the processes discussed in the present invention may be alternated, changed, combined, or deleted. Further, other steps, measures, and schemes in the various operations, methods, and processes discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted. Further, steps, measures, and schemes in the prior art corresponding to those in the various operations, methods, and processes disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted.
The above are only some embodiments of the present invention. It should be noted that, for a person of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (22)
1. An augmented-reality method for driving assistance, characterized by comprising:
determining driving assistance information based on information obtained in a driving state;
displaying virtual three-dimensional display information corresponding to the driving assistance information.
2. The method according to claim 1, wherein
determining driving assistance information based on information obtained in a driving state comprises: determining occluded driving assistance information based on information of a perceived region obtained in the driving state;
and displaying virtual three-dimensional display information corresponding to the driving assistance information comprises: displaying virtual three-dimensional display information corresponding to the occluded driving assistance information.
3. The method according to claim 2, wherein, when the occluded driving assistance information comprises road-surface road information and/or non-road-surface sign information, displaying the virtual three-dimensional display information corresponding to the occluded driving assistance information comprises:
displaying, at the position of the occluded driving assistance information, the virtual three-dimensional display information corresponding to the occluded driving assistance information.
4. The method according to claim 3, wherein determining the occluded driving assistance information based on the information of the perceived region obtained in the driving state comprises at least one of:
if the occluded driving assistance information is only partially occluded, determining the occluded driving assistance information according to the perceivable part of the driving assistance information;
determining the occluded driving assistance information based on the position of the current vehicle and reference-object information of the perceived region on the current road;
determining the occluded driving assistance information based on multimedia information of the occluded driving assistance information acquired from viewing angles other than the driver's viewing angle;
enhancing and/or restoring multimedia information of the occluded driving assistance information in the perceived region obtained in the driving state, and determining the occluded driving assistance information therefrom;
when the occluded driving assistance information comprises road-surface road information, aligning the current road with a map of the current road, and determining the occluded driving assistance information according to the map;
determining the currently occluded driving assistance information according to other driving assistance information.
5. The method according to claim 4, wherein, after determining the occluded driving assistance information based on the information of the perceived region obtained in the driving state, the method further comprises:
correcting the determined occluded driving assistance information;
wherein displaying the virtual three-dimensional display information corresponding to the occluded driving assistance information comprises: displaying, at the corrected position, the virtual three-dimensional display information corresponding to the corrected driving assistance information.
6. The method according to claim 5, wherein correcting the determined occluded driving assistance information comprises at least one of:
when the occluded driving assistance information comprises lane-related information, correcting the position of the occluded driving assistance information based on the travel tracks of other vehicles within a preset range of the current vehicle and/or road-surface track information;
when the occluded driving assistance information comprises road-surface road information, aligning the current road with a map of the current road, and correcting the position of the occluded driving assistance information according to the map.
7. The method according to any one of claims 2-4, wherein, when the occluded driving assistance information comprises lane-related information,
the displayed lane width is smaller than the actual lane width.
8. The method according to claim 2, wherein, when the occluded driving assistance information comprises blind-zone information, displaying the virtual three-dimensional display information corresponding to the occluded driving assistance information comprises:
displaying virtual three-dimensional display information corresponding to the blind-zone information in an extended region of a rearview mirror.
9. The method according to claim 8, wherein, when the rearview mirror is a side rearview mirror, the virtual three-dimensional display information displayed in the extended region is generated from the real object corresponding to the virtual three-dimensional display information according to the mirror-surface attributes of the side rearview mirror and the driver's viewpoint.
10. The method according to claim 1, wherein
determining driving assistance information based on information obtained in a driving state comprises: obtaining traffic rules and/or traffic-police action information of the current road section, and converting the presentation mode of the determined traffic rules and/or traffic-police action information of the current road section;
and displaying virtual three-dimensional display information corresponding to the driving assistance information comprises: displaying virtual three-dimensional display information corresponding to the converted traffic rules and/or traffic-police action information of the current road section.
11. The method according to any one of claims 1-10, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises at least one of:
when abnormal track information is perceived, displaying virtual three-dimensional display information corresponding to the determined abnormal-track region and/or virtual three-dimensional display information of warning information indicating that the region is an abnormal-track region;
when a traffic sign of a road region that the current vehicle has passed needs to be displayed, displaying virtual three-dimensional display information corresponding to the acquired traffic sign of the road region that the current vehicle has passed;
when a traffic sign and/or traffic light is perceived at the intersection where the current vehicle is located and the traffic sign and/or traffic light satisfies a predetermined display condition, displaying virtual three-dimensional display information corresponding to the intersection traffic sign and/or traffic light;
when button information on an operation panel needs to be displayed, displaying virtual three-dimensional display information corresponding to at least one of the following: the position information of the button, the function-name information of the button, the operation-instruction information of the button, and the button itself;
when parking-region information needs to be displayed, displaying virtual three-dimensional display information corresponding to at least one of the following regions: regions where parking is permitted and suitable, regions where parking is permitted but unsuitable, and regions where parking is not permitted.
12. The method according to claim 11, wherein the step of determining driving assistance information based on information obtained in a driving state comprises at least one of:
determining whether the road-surface track information contains abnormal track information, and if abnormal track information exists, determining that an abnormal-track region exists;
when a traffic sign of a road region that the current vehicle has passed needs to be displayed, determining the traffic sign of the passed road region from acquired multimedia information and/or a traffic-sign database;
when parking-region information needs to be displayed, determining at least one of regions where parking is permitted and suitable, regions where parking is permitted but unsuitable, and regions where parking is not permitted, according to at least one of whether no-parking signs exist in the area around the current vehicle, the dimensions of the current vehicle, and the current road conditions.
13. The method according to any one of claims 1-12, wherein displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
displaying, in an enhanced manner, virtual three-dimensional display information corresponding to track information.
14. The method according to claim 11, wherein displaying the virtual three-dimensional display information corresponding to the acquired traffic sign of the road region that the current vehicle has passed comprises:
adjusting, according to the current vehicle position and the virtual three-dimensional display information corresponding to the traffic sign of the passed road region, the virtual three-dimensional display information corresponding to the traffic sign of the passed road region, and displaying the adjusted virtual three-dimensional display information corresponding to the traffic sign.
15. The method according to any one of claims 1-14, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
determining a display mode corresponding to the virtual three-dimensional display information;
displaying, based on the determined display mode, the virtual three-dimensional display information corresponding to the driving assistance information;
wherein the display mode comprises at least one of the following:
the display position of the virtual three-dimensional display information; the display posture of the virtual three-dimensional display information; the display size of the virtual three-dimensional display information; the display start time of the virtual three-dimensional display information; the display end time of the virtual three-dimensional display information; the display duration of the virtual three-dimensional display information; the content level of detail of the virtual three-dimensional display information; the presentation mode of the virtual three-dimensional display information; and the correlation between multiple pieces of displayed virtual three-dimensional display information;
and the presentation mode comprises at least one of the following: text; icon; animation; audio; light; vibration.
16. The method according to any one of claims 1-15, wherein the method further comprises at least one of:
when multiple pieces of virtual three-dimensional display information to be displayed exist simultaneously, merging the multiple pieces of virtual three-dimensional display information to be displayed, and displaying the processed virtual three-dimensional display information;
when multiple pieces of virtual three-dimensional display information to be displayed are displayed simultaneously, integrating the multiple pieces of virtual three-dimensional display information to be displayed based on their semantics, and displaying the processed virtual three-dimensional display information.
17. The method according to any one of claims 1-16, wherein the method further comprises at least one of:
displaying virtual three-dimensional display information corresponding to driving assistance information whose priority is higher than a first preset priority at a salient position in the driver's current field of view, and adjusting the display position of the virtual three-dimensional display information in real time according to the driver's eye position;
displaying virtual three-dimensional display information corresponding to driving assistance information whose priority is higher than the first preset priority, and pausing and/or stopping the display of virtual three-dimensional display information corresponding to driving assistance information whose priority is lower than a second preset priority.
18. The method according to any one of claims 1-17, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
determining at least one of the display start time, display end time, and display duration of the displayed virtual three-dimensional display information according to at least one of the current state of the vehicle, the current road conditions, and the system delay of the device;
displaying the virtual three-dimensional display information corresponding to the driving assistance information according to at least one of the determined display start time, display end time, and display duration.
19. The method according to any one of claims 1-18, wherein, when multiple pieces of virtual three-dimensional display information corresponding to multiple pieces of driving assistance information to be displayed exist simultaneously and occlusion relationships exist between the multiple pieces of virtual three-dimensional display information to be displayed, the method further comprises at least one of:
displaying, according to the positional relationship between the multiple pieces of virtual three-dimensional display information having occlusion relationships, only the non-occluded part of the virtual three-dimensional display information;
displaying, at different display times, each piece of the multiple pieces of virtual three-dimensional display information having occlusion relationships;
adjusting at least one of the display position, content level of detail, and presentation mode of at least one piece of the multiple pieces of virtual three-dimensional display information having occlusion relationships, and displaying, according to the adjusted mode, each piece of the multiple pieces of virtual three-dimensional display information having occlusion relationships.
20. The method according to any one of claims 1-19, wherein the step of displaying the virtual three-dimensional display information corresponding to the driving assistance information comprises:
displaying, at a preset display position, the virtual three-dimensional display information corresponding to the driving assistance information to be displayed;
wherein the preset display position comprises at least one of the following:
a display position aligned with real driving assistance information; a region position that does not interfere with the driver's driving; a salient position in the driver's current field of view; a relatively open position in the driver's field of view; a position where the driver's attention is insufficient.
21. The method according to any one of claims 1-20, wherein the method further comprises:
rendering, in advance, virtual three-dimensional display information to be displayed;
when a preset display trigger condition is satisfied, obtaining the virtual three-dimensional display information to be displayed from the pre-rendered virtual three-dimensional display information, adjusting the presentation mode of the virtual three-dimensional display information according to the current environment, and displaying the virtual three-dimensional display information according to the adjusted presentation mode;
adjusting the display mode of the virtual three-dimensional display information in real time according to the current environment, and displaying the virtual three-dimensional display information according to the adjusted display mode.
22. An augmented-reality device for driving assistance, characterized by comprising:
a determining module, configured to determine driving assistance information based on information obtained in a driving state;
a display module, configured to display virtual three-dimensional display information corresponding to the driving assistance information determined by the determining module.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710737404.2A CN109427199B (en) | 2017-08-24 | 2017-08-24 | Augmented reality method and device for driving assistance |
CN202211358721.0A CN115620545A (en) | 2017-08-24 | 2017-08-24 | Augmented reality method and device for driving assistance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710737404.2A CN109427199B (en) | 2017-08-24 | 2017-08-24 | Augmented reality method and device for driving assistance |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211358721.0A Division CN115620545A (en) | 2017-08-24 | 2017-08-24 | Augmented reality method and device for driving assistance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109427199A true CN109427199A (en) | 2019-03-05 |
CN109427199B CN109427199B (en) | 2022-11-18 |
Family
ID=65500433
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710737404.2A Active CN109427199B (en) | 2017-08-24 | 2017-08-24 | Augmented reality method and device for driving assistance |
CN202211358721.0A Pending CN115620545A (en) | 2017-08-24 | 2017-08-24 | Augmented reality method and device for driving assistance |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211358721.0A Pending CN115620545A (en) | 2017-08-24 | 2017-08-24 | Augmented reality method and device for driving assistance |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN109427199B (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109910744A (en) * | 2019-03-18 | 2019-06-21 | 重庆睿驰智能科技有限公司 | LDW Lane Departure Warning System |
CN109931944A (en) * | 2019-04-02 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | AR navigation method and apparatus, vehicle-end device, server and medium |
CN110070742A (en) * | 2019-05-29 | 2019-07-30 | 浙江吉利控股集团有限公司 | Expressway ramp speed limit recognition method and system, and vehicle |
CN110341601A (en) * | 2019-06-14 | 2019-10-18 | 江苏大学 | A-pillar blind area elimination and driving assistance device and control method thereof |
CN110737266A (en) * | 2019-09-17 | 2020-01-31 | 中国第一汽车股份有限公司 | Automatic driving control method and device, vehicle and storage medium |
CN111107332A (en) * | 2019-12-30 | 2020-05-05 | 华人运通(上海)云计算科技有限公司 | HUD projection image display method and device |
CN111272182A (en) * | 2020-02-20 | 2020-06-12 | 程爱军 | Mapping system using block chain database |
CN111332317A (en) * | 2020-02-17 | 2020-06-26 | 吉利汽车研究院(宁波)有限公司 | Driving reminding method, system and device based on augmented reality technology |
CN111366168A (en) * | 2020-02-17 | 2020-07-03 | 重庆邮电大学 | AR navigation system and method based on multi-source information fusion |
CN111601279A (en) * | 2020-05-14 | 2020-08-28 | 大陆投资(中国)有限公司 | Method for displaying dynamic traffic situation in vehicle-mounted display and vehicle-mounted system |
CN111681437A (en) * | 2020-06-04 | 2020-09-18 | 北京航迹科技有限公司 | Surrounding vehicle reminding method and device, electronic equipment and storage medium |
CN111717028A (en) * | 2020-06-29 | 2020-09-29 | 深圳市元征科技股份有限公司 | AR system control method and related device |
CN111738191A (en) * | 2020-06-29 | 2020-10-02 | 广州小鹏车联网科技有限公司 | Processing method for parking space display and vehicle |
CN111815863A (en) * | 2020-04-17 | 2020-10-23 | 北京嘀嘀无限科技发展有限公司 | Vehicle control method, storage medium, and electronic device |
CN112440888A (en) * | 2019-08-30 | 2021-03-05 | 比亚迪股份有限公司 | Vehicle control method, control device and electronic equipment |
CN112801012A (en) * | 2021-02-05 | 2021-05-14 | 腾讯科技(深圳)有限公司 | Traffic element processing method and device, electronic equipment and storage medium |
CN113160548A (en) * | 2020-01-23 | 2021-07-23 | 宝马股份公司 | Method, device and vehicle for automatic driving of vehicle |
CN113247015A (en) * | 2021-06-30 | 2021-08-13 | 厦门元馨智能科技有限公司 | Vehicle driving assistance system based on somatosensory-operation integrated glasses, and method thereof |
CN113470394A (en) * | 2021-07-05 | 2021-10-01 | 浙江商汤科技开发有限公司 | Augmented reality display method and related device, vehicle and storage medium |
CN113536984A (en) * | 2021-06-28 | 2021-10-22 | 北京沧沐科技有限公司 | Image target identification and tracking system based on unmanned aerial vehicle |
CN113593303A (en) * | 2021-08-12 | 2021-11-02 | 上海仙塔智能科技有限公司 | Method for reminding safe driving between vehicles, vehicle and intelligent glasses |
CN113763566A (en) * | 2020-06-05 | 2021-12-07 | 光宝电子(广州)有限公司 | Image generation system and image generation method |
WO2021258671A1 (en) * | 2020-06-24 | 2021-12-30 | 上海商汤临港智能科技有限公司 | Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium |
CN113989466A (en) * | 2021-10-28 | 2022-01-28 | 江苏濠汉信息技术有限公司 | Over-the-horizon assisted driving system based on situational cognition |
CN114341961A (en) * | 2019-08-30 | 2022-04-12 | 高通股份有限公司 | Techniques for augmented reality assistance |
US11341754B2 (en) * | 2018-10-23 | 2022-05-24 | Samsung Electronics Co., Ltd. | Method and apparatus for auto calibration |
WO2023010236A1 (en) * | 2021-07-31 | 2023-02-09 | 华为技术有限公司 | Display method, device and system |
Citations (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1267032A (en) * | 1999-03-11 | 2000-09-20 | 现代自动车株式会社 | Method for monitoring vehicle position on lane of road |
CN1433360A (en) * | 1999-07-30 | 2003-07-30 | 倍耐力轮胎公司 | Method and system for controlling behaviour of vehicle by controlling its tyres |
JP2005275691A (en) * | 2004-03-24 | 2005-10-06 | Toshiba Corp | Image processing device, image processing method, and image processing program |
JP2005303450A (en) * | 2004-04-07 | 2005-10-27 | Auto Network Gijutsu Kenkyusho:Kk | Apparatus for monitoring vehicle periphery |
US6977630B1 (en) * | 2000-07-18 | 2005-12-20 | University Of Minnesota | Mobility assist device |
JP2007030603A (en) * | 2005-07-25 | 2007-02-08 | Mazda Motor Corp | Vehicle travel assisting device |
CN101101333A (en) * | 2006-07-06 | 2008-01-09 | 三星电子株式会社 | Apparatus and method for producing assistant information of driving vehicle for driver |
JP2008305162A (en) * | 2007-06-07 | 2008-12-18 | Aisin Aw Co Ltd | Vehicle traveling support device and program |
FR2938365A1 (en) * | 2008-11-10 | 2010-05-14 | Peugeot Citroen Automobiles Sa | Data e.g. traffic sign, operating device for use in motor vehicle e.g. car, has regulation unit automatically acting on assistance device when speed is lower than real speed of vehicle and when no action by driver |
CN101866049A (en) * | 2009-04-02 | 2010-10-20 | 通用汽车环球科技运作公司 | Traveling lane on the windscreen head-up display |
CN101915990A (en) * | 2009-04-02 | 2010-12-15 | 通用汽车环球科技运作公司 | Enhancing road vision on the full-windscreen head-up display |
CN102012230A (en) * | 2010-08-27 | 2011-04-13 | 杭州妙影微电子有限公司 | Road live view navigation method |
CN102089794A (en) * | 2008-08-26 | 2011-06-08 | 松下电器产业株式会社 | Intersection situation recognition system |
CN102667888A (en) * | 2009-11-27 | 2012-09-12 | 丰田自动车株式会社 | Drive assistance device and drive assistance method |
CN102770891A (en) * | 2010-03-19 | 2012-11-07 | 三菱电机株式会社 | Information offering apparatus |
CN103202030A (en) * | 2010-12-16 | 2013-07-10 | 株式会社巨晶片 | Image processing system, method of operating image processing system, host apparatus, program, and method of making program |
DE102012201896A1 (en) * | 2012-02-09 | 2013-08-14 | Robert Bosch Gmbh | Driver assistance system and driver assistance system for snowy roads |
US20130293582A1 (en) * | 2012-05-07 | 2013-11-07 | Victor Ng-Thow-Hing | Method to generate virtual display surfaces from video imagery of road based scenery |
CN103489314A (en) * | 2013-09-25 | 2014-01-01 | 广东欧珀移动通信有限公司 | Method and device for displaying real-time road conditions |
CN103513951A (en) * | 2012-06-06 | 2014-01-15 | 三星电子株式会社 | Apparatus and method for providing augmented reality information using a three-dimensional map |
CN103827937A (en) * | 2011-09-26 | 2014-05-28 | 丰田自动车株式会社 | Vehicle driving assistance system |
US20140267415A1 (en) * | 2013-03-12 | 2014-09-18 | Xueming Tang | Road marking illumination system and method |
WO2015009218A1 (en) * | 2013-07-18 | 2015-01-22 | Scania Cv Ab | Determination of lane position |
CN104401325A (en) * | 2014-11-05 | 2015-03-11 | 江苏大学 | Dynamic regulation and fault tolerance method and dynamic regulation and fault tolerance system for auxiliary parking path |
WO2015039654A2 (en) * | 2013-09-23 | 2015-03-26 | Conti Temic Microelectronic Gmbh | Method for detecting a traffic police officer by means of a driver assistance system of a motor vehicle, and driver assistance system |
CN104670091A (en) * | 2013-12-02 | 2015-06-03 | 现代摩比斯株式会社 | Augmented reality lane change assistant system using projection unit |
US20150169966A1 (en) * | 2012-06-01 | 2015-06-18 | Denso Corporation | Apparatus for detecting boundary line of vehicle lane and method thereof |
US20150178588A1 (en) * | 2013-12-19 | 2015-06-25 | Robert Bosch Gmbh | Method and apparatus for recognizing object reflections |
CN104798124A (en) * | 2012-11-21 | 2015-07-22 | 丰田自动车株式会社 | Driving-assistance device and driving-assistance method |
CN104809901A (en) * | 2014-01-28 | 2015-07-29 | 通用汽车环球科技运作有限责任公司 | Method for using street level images to enhance automated driving mode for vehicle |
US20150268465A1 (en) * | 2014-03-20 | 2015-09-24 | Toyota Motor Engineering & Manufacturing North America, Inc. | Transparent Display Overlay Systems for Vehicle Instrument Cluster Assemblies |
CN105185134A (en) * | 2014-06-05 | 2015-12-23 | 星克跃尔株式会社 | Electronic Apparatus, Control Method Of Electronic Apparatus And Computer Readable Recording Medium |
CN105378815A (en) * | 2013-06-10 | 2016-03-02 | 罗伯特·博世有限公司 | Method and device for signalling traffic object that is at least partially visually concealed to driver of vehicle |
CN105644438A (en) * | 2014-11-28 | 2016-06-08 | 爱信精机株式会社 | Vehicle periphery monitoring apparatus |
CN105678316A (en) * | 2015-12-29 | 2016-06-15 | 大连楼兰科技股份有限公司 | Active driving method based on multi-information fusion |
CN105929539A (en) * | 2016-05-19 | 2016-09-07 | 彭波 | Automobile or mobile device 3D image acquisition and naked-eye 3D head-up display system and 3D image processing method |
US20170010120A1 (en) * | 2015-02-10 | 2017-01-12 | Mobileye Vision Technologies Ltd. | Systems and methods for identifying landmarks |
CN106338828A (en) * | 2016-08-31 | 2017-01-18 | 京东方科技集团股份有限公司 | Vehicle-mounted augmented reality system, method and equipment |
CN106448260A (en) * | 2015-08-05 | 2017-02-22 | Lg电子株式会社 | Driver assistance apparatus and vehicle including the same |
CN106494309A (en) * | 2016-10-11 | 2017-03-15 | 广州视源电子科技股份有限公司 | Image display method and device for a vehicle's blind zone, and vehicle virtual system |
US9625264B1 (en) * | 2016-01-20 | 2017-04-18 | Denso Corporation | Systems and methods for displaying route information |
GB201704607D0 (en) * | 2016-03-25 | 2017-05-10 | Jaguar Land Rover Ltd | Virtual overlay system and method for occluded objects |
US20170169612A1 (en) * | 2015-12-15 | 2017-06-15 | N.S. International, LTD | Augmented reality alignment system and method |
CN106855656A (en) * | 2015-12-08 | 2017-06-16 | 通用汽车环球科技运作有限责任公司 | Image processing of occluded objects for an augmented reality system |
CN106864393A (en) * | 2017-03-27 | 2017-06-20 | 深圳市精能奥天导航技术有限公司 | Advanced driving assistance function upgrade system |
CN106915302A (en) * | 2015-12-24 | 2017-07-04 | Lg电子株式会社 | Display device for vehicle and control method thereof |
US20170206426A1 (en) * | 2016-01-15 | 2017-07-20 | Ford Global Technologies, Llc | Pedestrian Detection With Saliency Maps |
CN107784864A (en) * | 2016-08-26 | 2018-03-09 | 奥迪股份公司 | Vehicle driving assistance method and system |
2017
- 2017-08-24 CN CN201710737404.2A patent/CN109427199B/en active Active
- 2017-08-24 CN CN202211358721.0A patent/CN115620545A/en active Pending
Patent Citations (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1267032A (en) * | 1999-03-11 | 2000-09-20 | 现代自动车株式会社 | Method for monitoring vehicle position on lane of road |
CN1433360A (en) * | 1999-07-30 | 2003-07-30 | 倍耐力轮胎公司 | Method and system for controlling behaviour of vehicle by controlling its tyres |
US6977630B1 (en) * | 2000-07-18 | 2005-12-20 | University Of Minnesota | Mobility assist device |
JP2005275691A (en) * | 2004-03-24 | 2005-10-06 | Toshiba Corp | Image processing device, image processing method, and image processing program |
JP2005303450A (en) * | 2004-04-07 | 2005-10-27 | Auto Network Gijutsu Kenkyusho:Kk | Apparatus for monitoring vehicle periphery |
JP2007030603A (en) * | 2005-07-25 | 2007-02-08 | Mazda Motor Corp | Vehicle travel assisting device |
CN101101333A (en) * | 2006-07-06 | 2008-01-09 | 三星电子株式会社 | Apparatus and method for producing assistant information of driving vehicle for driver |
JP2008305162A (en) * | 2007-06-07 | 2008-12-18 | Aisin Aw Co Ltd | Vehicle traveling support device and program |
CN102089794A (en) * | 2008-08-26 | 2011-06-08 | 松下电器产业株式会社 | Intersection situation recognition system |
FR2938365A1 (en) * | 2008-11-10 | 2010-05-14 | Peugeot Citroen Automobiles Sa | Data e.g. traffic sign, operating device for use in motor vehicle e.g. car, has regulation unit automatically acting on assistance device when speed is lower than real speed of vehicle and when no action by driver |
CN101866049A (en) * | 2009-04-02 | 2010-10-20 | 通用汽车环球科技运作公司 | Traveling lane on the windscreen head-up display |
CN101915990A (en) * | 2009-04-02 | 2010-12-15 | 通用汽车环球科技运作公司 | Enhancing road vision on the full-windscreen head-up display |
CN102667888A (en) * | 2009-11-27 | 2012-09-12 | 丰田自动车株式会社 | Drive assistance device and drive assistance method |
CN102770891A (en) * | 2010-03-19 | 2012-11-07 | 三菱电机株式会社 | Information offering apparatus |
CN102012230A (en) * | 2010-08-27 | 2011-04-13 | 杭州妙影微电子有限公司 | Road live view navigation method |
CN103202030A (en) * | 2010-12-16 | 2013-07-10 | 株式会社巨晶片 | Image processing system, method of operating image processing system, host apparatus, program, and method of making program |
CN103827937A (en) * | 2011-09-26 | 2014-05-28 | 丰田自动车株式会社 | Vehicle driving assistance system |
US20130211720A1 (en) * | 2012-02-09 | 2013-08-15 | Volker NIEMZ | Driver-assistance method and driver-assistance system for snow-covered roads |
DE102012201896A1 (en) * | 2012-02-09 | 2013-08-14 | Robert Bosch Gmbh | Driver assistance system and driver assistance system for snowy roads |
US20130293582A1 (en) * | 2012-05-07 | 2013-11-07 | Victor Ng-Thow-Hing | Method to generate virtual display surfaces from video imagery of road based scenery |
US20150169966A1 (en) * | 2012-06-01 | 2015-06-18 | Denso Corporation | Apparatus for detecting boundary line of vehicle lane and method thereof |
CN103513951A (en) * | 2012-06-06 | 2014-01-15 | 三星电子株式会社 | Apparatus and method for providing augmented reality information using a three-dimensional map |
CN104798124A (en) * | 2012-11-21 | 2015-07-22 | 丰田自动车株式会社 | Driving-assistance device and driving-assistance method |
US20140267415A1 (en) * | 2013-03-12 | 2014-09-18 | Xueming Tang | Road marking illumination system and method |
CN105378815A (en) * | 2013-06-10 | 2016-03-02 | 罗伯特·博世有限公司 | Method and device for signalling traffic object that is at least partially visually concealed to driver of vehicle |
WO2015009218A1 (en) * | 2013-07-18 | 2015-01-22 | Scania Cv Ab | Determination of lane position |
WO2015039654A2 (en) * | 2013-09-23 | 2015-03-26 | Conti Temic Microelectronic Gmbh | Method for detecting a traffic police officer by means of a driver assistance system of a motor vehicle, and driver assistance system |
CN103489314A (en) * | 2013-09-25 | 2014-01-01 | 广东欧珀移动通信有限公司 | Method and device for displaying real-time road conditions |
CN104670091A (en) * | 2013-12-02 | 2015-06-03 | 现代摩比斯株式会社 | Augmented reality lane change assistant system using projection unit |
US20150178588A1 (en) * | 2013-12-19 | 2015-06-25 | Robert Bosch Gmbh | Method and apparatus for recognizing object reflections |
CN104809901A (en) * | 2014-01-28 | 2015-07-29 | 通用汽车环球科技运作有限责任公司 | Method for using street level images to enhance automated driving mode for vehicle |
US20150268465A1 (en) * | 2014-03-20 | 2015-09-24 | Toyota Motor Engineering & Manufacturing North America, Inc. | Transparent Display Overlay Systems for Vehicle Instrument Cluster Assemblies |
CN105185134A (en) * | 2014-06-05 | 2015-12-23 | 星克跃尔株式会社 | Electronic Apparatus, Control Method Of Electronic Apparatus And Computer Readable Recording Medium |
CN104401325A (en) * | 2014-11-05 | 2015-03-11 | 江苏大学 | Dynamic regulation and fault tolerance method and dynamic regulation and fault tolerance system for auxiliary parking path |
CN105644438A (en) * | 2014-11-28 | 2016-06-08 | 爱信精机株式会社 | Vehicle periphery monitoring apparatus |
US20170010120A1 (en) * | 2015-02-10 | 2017-01-12 | Mobileye Vision Technologies Ltd. | Systems and methods for identifying landmarks |
CN106448260A (en) * | 2015-08-05 | 2017-02-22 | Lg电子株式会社 | Driver assistance apparatus and vehicle including the same |
CN106855656A (en) * | 2015-12-08 | 2017-06-16 | 通用汽车环球科技运作有限责任公司 | Image processing of occluded objects for an augmented reality system |
US20170169612A1 (en) * | 2015-12-15 | 2017-06-15 | N.S. International, LTD | Augmented reality alignment system and method |
CN106915302A (en) * | 2015-12-24 | 2017-07-04 | Lg电子株式会社 | Display device for vehicle and control method thereof |
CN105678316A (en) * | 2015-12-29 | 2016-06-15 | 大连楼兰科技股份有限公司 | Active driving method based on multi-information fusion |
CN106980814A (en) * | 2016-01-15 | 2017-07-25 | 福特全球技术公司 | Pedestrian detection with saliency maps |
US20170206426A1 (en) * | 2016-01-15 | 2017-07-20 | Ford Global Technologies, Llc | Pedestrian Detection With Saliency Maps |
US9625264B1 (en) * | 2016-01-20 | 2017-04-18 | Denso Corporation | Systems and methods for displaying route information |
GB201704607D0 (en) * | 2016-03-25 | 2017-05-10 | Jaguar Land Rover Ltd | Virtual overlay system and method for occluded objects |
CN105929539A (en) * | 2016-05-19 | 2016-09-07 | 彭波 | Automobile or mobile device 3D image acquisition and naked-eye 3D head-up display system and 3D image processing method |
CN107784864A (en) * | 2016-08-26 | 2018-03-09 | 奥迪股份公司 | Vehicle driving assistance method and system |
CN106338828A (en) * | 2016-08-31 | 2017-01-18 | 京东方科技集团股份有限公司 | Vehicle-mounted augmented reality system, method and equipment |
CN106494309A (en) * | 2016-10-11 | 2017-03-15 | 广州视源电子科技股份有限公司 | Image display method and device for a vehicle's blind zone, and vehicle virtual system |
CN106864393A (en) * | 2017-03-27 | 2017-06-20 | 深圳市精能奥天导航技术有限公司 | Advanced driving assistance function upgrade system |
Non-Patent Citations (4)
Title |
---|
FRANÇOIS RAMEAU et al.: "A Real-Time Augmented Reality System to See-Through Cars", IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 11, 27 July 2016 (2016-07-27), pages 2395-2404, XP011623601, DOI: 10.1109/TVCG.2016.2593768 * |
大嘴 (Dazui): "Complete Guide to Safe Driving Techniques in Snow", 《车世界》 (Car World), no. 12, 31 December 2006 (2006-12-31) * |
身临其境 (Shenlinqijing): "How does predictive tracking reduce VR *** latency and effectively reduce motion sickness?", HTTPS://M.SOHU.COM/A/141266726_747619/?PVID=000115_3W_A, 17 May 2017 (2017-05-17) * |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11341754B2 (en) * | 2018-10-23 | 2022-05-24 | Samsung Electronics Co., Ltd. | Method and apparatus for auto calibration |
CN109910744B (en) * | 2019-03-18 | 2022-06-03 | 重庆睿驰智能科技有限公司 | LDW lane departure early warning system |
CN109910744A (en) * | 2019-03-18 | 2019-06-21 | 重庆睿驰智能科技有限公司 | LDW Lane Departure Warning System |
CN109931944A (en) * | 2019-04-02 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | AR navigation method and apparatus, vehicle-end device, server and medium |
CN110070742A (en) * | 2019-05-29 | 2019-07-30 | 浙江吉利控股集团有限公司 | Expressway ramp speed limit recognition method and system, and vehicle |
CN110341601B (en) * | 2019-06-14 | 2023-02-17 | 江苏大学 | A-pillar blind area eliminating and driving assisting device and control method thereof |
CN110341601A (en) * | 2019-06-14 | 2019-10-18 | 江苏大学 | A-pillar blind area elimination and driving assistance device and control method thereof |
CN114341961A (en) * | 2019-08-30 | 2022-04-12 | 高通股份有限公司 | Techniques for augmented reality assistance |
CN112440888B (en) * | 2019-08-30 | 2022-07-15 | 比亚迪股份有限公司 | Vehicle control method, control device and electronic equipment |
CN112440888A (en) * | 2019-08-30 | 2021-03-05 | 比亚迪股份有限公司 | Vehicle control method, control device and electronic equipment |
CN110737266A (en) * | 2019-09-17 | 2020-01-31 | 中国第一汽车股份有限公司 | Automatic driving control method and device, vehicle and storage medium |
CN110737266B (en) * | 2019-09-17 | 2022-11-18 | 中国第一汽车股份有限公司 | Automatic driving control method and device, vehicle and storage medium |
CN111107332A (en) * | 2019-12-30 | 2020-05-05 | 华人运通(上海)云计算科技有限公司 | HUD projection image display method and device |
CN113160548B (en) * | 2020-01-23 | 2023-03-10 | 宝马股份公司 | Method, device and vehicle for automatic driving of vehicle |
CN113160548A (en) * | 2020-01-23 | 2021-07-23 | 宝马股份公司 | Method, device and vehicle for automatic driving of vehicle |
CN111366168B (en) * | 2020-02-17 | 2023-12-29 | 深圳毕加索电子有限公司 | AR navigation system and method based on multisource information fusion |
CN111366168A (en) * | 2020-02-17 | 2020-07-03 | 重庆邮电大学 | AR navigation system and method based on multi-source information fusion |
CN111332317A (en) * | 2020-02-17 | 2020-06-26 | 吉利汽车研究院(宁波)有限公司 | Driving reminding method, system and device based on augmented reality technology |
CN111272182B (en) * | 2020-02-20 | 2021-05-28 | 武汉科信云图信息技术有限公司 | Mapping system using block chain database |
CN111272182A (en) * | 2020-02-20 | 2020-06-12 | 程爱军 | Mapping system using block chain database |
CN111815863A (en) * | 2020-04-17 | 2020-10-23 | 北京嘀嘀无限科技发展有限公司 | Vehicle control method, storage medium, and electronic device |
CN111601279A (en) * | 2020-05-14 | 2020-08-28 | 大陆投资(中国)有限公司 | Method for displaying dynamic traffic situation in vehicle-mounted display and vehicle-mounted system |
CN111681437A (en) * | 2020-06-04 | 2020-09-18 | 北京航迹科技有限公司 | Surrounding vehicle reminding method and device, electronic equipment and storage medium |
CN111681437B (en) * | 2020-06-04 | 2022-10-11 | 北京航迹科技有限公司 | Surrounding vehicle reminding method and device, electronic equipment and storage medium |
CN113763566A (en) * | 2020-06-05 | 2021-12-07 | 光宝电子(广州)有限公司 | Image generation system and image generation method |
WO2021258671A1 (en) * | 2020-06-24 | 2021-12-30 | 上海商汤临港智能科技有限公司 | Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium |
CN111738191A (en) * | 2020-06-29 | 2020-10-02 | 广州小鹏车联网科技有限公司 | Processing method for parking space display and vehicle |
WO2022002055A1 (en) * | 2020-06-29 | 2022-01-06 | 广州橙行智动汽车科技有限公司 | Parking space display processing method and vehicle |
CN111717028A (en) * | 2020-06-29 | 2020-09-29 | 深圳市元征科技股份有限公司 | AR system control method and related device |
CN112801012A (en) * | 2021-02-05 | 2021-05-14 | 腾讯科技(深圳)有限公司 | Traffic element processing method and device, electronic equipment and storage medium |
CN113536984A (en) * | 2021-06-28 | 2021-10-22 | 北京沧沐科技有限公司 | Image target identification and tracking system based on unmanned aerial vehicle |
CN113247015A (en) * | 2021-06-30 | 2021-08-13 | 厦门元馨智能科技有限公司 | Vehicle driving assistance system based on somatosensory-operation integrated glasses, and method thereof |
CN113470394A (en) * | 2021-07-05 | 2021-10-01 | 浙江商汤科技开发有限公司 | Augmented reality display method and related device, vehicle and storage medium |
WO2023010236A1 (en) * | 2021-07-31 | 2023-02-09 | 华为技术有限公司 | Display method, device and system |
CN113593303A (en) * | 2021-08-12 | 2021-11-02 | 上海仙塔智能科技有限公司 | Method for reminding safe driving between vehicles, vehicle and intelligent glasses |
CN113989466A (en) * | 2021-10-28 | 2022-01-28 | 江苏濠汉信息技术有限公司 | Over-the-horizon assisted driving system based on situational cognition |
Also Published As
Publication number | Publication date |
---|---|
CN109427199B (en) | 2022-11-18 |
CN115620545A (en) | 2023-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109427199A (en) | Augmented reality method and device for driving assistance | |
US11767024B2 (en) | Augmented reality method and apparatus for driving assistance | |
US8970451B2 (en) | Visual guidance system | |
EP2936065B1 (en) | A system for a vehicle | |
US9267808B2 (en) | Visual guidance system | |
US9760782B2 (en) | Method for representing objects surrounding a vehicle on the display of a display device | |
CN107449440A (en) | Display method and display device for driving prompt information | |
WO2019097763A1 (en) | Superposed-image display device and computer program | |
US20140107911A1 (en) | User interface method for terminal for vehicle and apparatus thereof | |
US10732420B2 (en) | Head up display with symbols positioned to augment reality | |
CN111595357B (en) | Visual interface display method and device, electronic equipment and storage medium | |
CN107111742A (en) | Identification and prediction of lane restrictions and construction areas in navigation | |
US11525694B2 (en) | Superimposed-image display device and computer program | |
CN210139859U (en) | Automobile collision early warning system, navigation and automobile | |
US11972616B2 (en) | Enhanced navigation instructions with landmarks under difficult driving conditions | |
JP2014181927A (en) | Information provision device, and information provision program | |
US12013254B2 (en) | Control device | |
US20230135641A1 (en) | Superimposed image display device | |
CN107408338A (en) | Driver assistance system | |
CN111354222A (en) | Driving assisting method and system | |
US11590902B2 (en) | Vehicle display system for displaying surrounding event information | |
CN109927629A (en) | Display control apparatus and display control method for controlling a projection device, and vehicle | |
KR20150051671A (en) | Display control device using vehicle and user motion recognition, and operation method thereof | |
US20210268961A1 (en) | Display method, display device, and display system | |
KR20130119144A (en) | Method and device for displaying object using transparent display panel |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||